Sample records for quantitative reliability evaluation

  1. [Reconsidering evaluation criteria regarding health care research: toward an integrative framework of quantitative and qualitative criteria].

    PubMed

    Miyata, Hiroaki; Kai, Ichiro

    2006-05-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has left the concepts obscure and hard to recognize. It is therefore important to reconsider evaluation criteria regarding rigor in social science. Building on Lincoln & Guba's comparison of the quantitative paradigms (validity, reliability, neutrality, generalizability) with the qualitative paradigms (credibility, dependability, confirmability, transferability), we discuss the use of evaluation criteria from a pragmatic perspective. Validity/credibility concerns the observational framework, reliability/dependability refers to the range of stability in observations, neutrality/confirmability reflects influences between observers and subjects, and generalizability/transferability captures epistemological differences in how findings are applied. Qualitative studies, however, do not always have to choose the qualitative paradigms. If stability can be assumed to some extent, it is better to use the quantitative paradigm (reliability). Moreover, because a quantitative study cannot always guarantee a perfect observational framework with stability in all phases of observation, it is useful to draw on the qualitative paradigms to enhance the rigor of the study.

  2. Quantitative metal magnetic memory reliability modeling for welded joints

    NASA Astrophysics Data System (ADS)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and difficult to evaluate quantitatively. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Q235 steel welded specimens were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument during tensile fatigue experiments, and X-ray testing was carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical behavior of K_vs was investigated and found to follow a Gaussian distribution, so K_vs is a suitable MMM parameter for establishing a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented for the first time, based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases as the residual life ratio T decreases, and the maximal error between the predicted reliability degree R1 and the verification reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
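
    As a rough illustration of the stress-strength interference idea behind this reliability model, the sketch below computes P(strength > stress) for two independent Gaussian variables, which is the textbook form of the interference calculation; the parameter values, and the use of plain rather than "improved" interference theory, are assumptions, not figures from the paper.

    ```python
    # Minimal sketch: reliability from classical stress-strength interference with
    # Gaussian variables. The abstract reports that K_vs is Gaussian but gives no
    # distribution parameters, so the numbers below are purely illustrative.
    import numpy as np
    from scipy.stats import norm

    def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
        """P(strength > stress) for independent Gaussian strength and stress."""
        z = (mu_strength - mu_stress) / np.sqrt(sd_strength**2 + sd_stress**2)
        return norm.cdf(z)

    # Hypothetical values for illustration only.
    R = interference_reliability(mu_strength=120.0, sd_strength=15.0,
                                 mu_stress=80.0, sd_stress=20.0)
    print(f"Reliability degree R = {R:.4f}")
    ```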

  3. Evaluation of patients with painful total hip arthroplasty using combined single photon emission tomography and conventional computerized tomography (SPECT/CT) - a comparison of semi-quantitative versus 3D volumetric quantitative measurements.

    PubMed

    Barthassat, Emilienne; Afifi, Faik; Konala, Praveen; Rasch, Helmut; Hirschmann, Michael T

    2017-05-08

    It was the primary purpose of our study to evaluate the inter- and intra-observer reliability of a standardized SPECT/CT algorithm for evaluating patients with painful primary total hip arthroplasty (THA). The secondary purpose was to compare a semi-quantitative and a 3D volumetric quantification method for assessment of bone tracer uptake (BTU) in these patients. A novel SPECT/CT localization scheme consisting of 14 femoral and 4 acetabular regions on standardized axial and coronal slices was introduced and evaluated in terms of inter- and intra-observer reliability in 37 consecutive patients with hip pain after THA. BTU for each anatomical region was assessed semi-quantitatively using a color-coded Likert-type scale (0-10) and volumetrically quantified using validated software. Two observers interpreted the SPECT/CT findings in all patients twice, with a six-week interval between interpretations, in random order. Semi-quantitative and quantitative measurements were compared in terms of reliability, and the values were correlated using Pearson's correlation. A factorial cluster analysis of BTU was performed to identify clinically relevant regions that should be grouped and analysed together. The localization scheme showed high inter- and intra-observer reliability for all femoral and acetabular regions, independent of the measurement method used (semi-quantitative versus 3D volumetric quantitative measurements). A high to moderate correlation between the two measurement methods was shown for the distal femur, the proximal femur and the acetabular cup. The factorial cluster analysis showed that the anatomical regions could be summarized into three distinct regions: the proximal femur, the distal femur and the acetabular cup. The SPECT/CT algorithm for assessment of patients with pain after THA is highly reliable, independent of the measurement method used. Three clinically relevant anatomical regions (proximal femoral, distal femoral, acetabular) were identified.

  4. Reliability and Validity of the Professional Counseling Performance Evaluation

    ERIC Educational Resources Information Center

    Shepherd, J. Brad; Britton, Paula J.; Kress, Victoria E.

    2008-01-01

    The definition and measurement of counsellor trainee competency is an issue that has received increased attention yet lacks quantitative study. This research evaluates item responses, scale reliability and intercorrelations, interrater agreement, and criterion-related validity of the Professional Performance Fitness Evaluation/Professional…

  5. Quantitative Muscle Ultrasonography in Carpal Tunnel Syndrome.

    PubMed

    Lee, Hyewon; Jee, Sungju; Park, Soo Ho; Ahn, Seung-Chan; Im, Juneho; Sohn, Min Kyun

    2016-12-01

    To assess the reliability of quantitative muscle ultrasonography (US) in healthy subjects and to evaluate the correlation between quantitative muscle US findings and electrodiagnostic study results in patients with carpal tunnel syndrome (CTS). The clinical significance of quantitative muscle US in CTS was also assessed. Twenty patients with CTS and 20 age-matched healthy volunteers were recruited. All control and CTS subjects underwent a bilateral median and ulnar nerve conduction study (NCS) and quantitative muscle US. Transverse US images of the abductor pollicis brevis (APB) and abductor digiti minimi (ADM) were obtained to measure muscle cross-sectional area (CSA), thickness, and echo intensity (EI). EI was determined using computer-assisted, grayscale analysis. Inter-rater and intra-rater reliability for quantitative muscle US in control subjects, and differences in muscle thickness, CSA, and EI between the CTS patient and control groups were analyzed. Relationships between quantitative US parameters and electrodiagnostic study results were evaluated. Quantitative muscle US had high inter-rater and intra-rater reliability in the control group. Muscle thickness and CSA were significantly decreased, and EI was significantly increased in the APB of the CTS group (all p<0.05). EI demonstrated a significant positive correlation with latency of the median motor and sensory NCS in CTS patients (p<0.05). These findings suggest that quantitative muscle US parameters may be useful for detecting muscle changes in CTS. Further study involving patients with other neuromuscular diseases is needed to evaluate peripheral muscle change using quantitative muscle US.
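
    The echo intensity measure described here is essentially a mean gray level over a region of interest. A minimal sketch of that computation, assuming an exported 8-bit grayscale ultrasound image and a hand-drawn rectangular ROI (the file name and coordinates are hypothetical):

    ```python
    # Minimal sketch: computer-assisted grayscale (echo intensity) analysis of a
    # muscle ROI. The file name and ROI coordinates are hypothetical.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("apb_transverse.png").convert("L"), dtype=float)
    roi = img[120:220, 300:420]     # assumed ROI bounding the abductor pollicis brevis
    echo_intensity = roi.mean()     # mean gray level, 0 (black) to 255 (white)
    print(f"Mean echo intensity: {echo_intensity:.1f}")
    ```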

  6. Identification and evaluation of reliable reference genes for quantitative real-time PCR analysis in tea plant (Camellia sinensis (L.) O. Kuntze)

    USDA-ARS?s Scientific Manuscript database

    Quantitative real-time polymerase chain reaction (qRT-PCR) is a commonly used technique for measuring gene expression levels due to its simplicity, specificity, and sensitivity. Reliable reference selection for the accurate quantification of gene expression under various experimental conditions is a...

  7. Reproducibility of sonographic measurement of thickness and echogenicity of the plantar fascia.

    PubMed

    Cheng, Ju-Wen; Tsai, Wen-Chung; Yu, Tung-Yang; Huang, Kuo-Yao

    2012-01-01

    To evaluate the intra- and interrater reliability of ultrasonographic measurements of the thickness and echogenicity of the plantar fascia. Eleven patients (20 feet), who complained of inferior heel pain, and 26 volunteers (52 feet) were enrolled. Two sonographers independently imaged the plantar fascia in both longitudinal and transverse planes. Volunteers were assessed twice to evaluate intrarater reliability. Quantitative evaluation of the echogenicity of the plantar fascia was performed by measuring the mean gray level of the region of interest using Digital Imaging and Communications in Medicine viewer software. Sonographic evaluation of the thickness of the plantar fascia showed high reliability. Sonographic evaluations of the presence or absence of hypoechoic change in the plantar fascia showed surprisingly low agreement. The reliability of gray-scale evaluations appears to be much better than subjective judgments in the evaluation of echogenicity. Transverse scanning did not show any advantage in sonographic evaluation of the plantar fascia. The reliability of sonographic examination of the thickness of the plantar fascia is high. Mean gray-level analysis of quantitative sonography can be used for the evaluation of echogenicity, which could reduce discrepancies in the interpretation of echogenicity by different sonographers. Longitudinal instead of transverse scanning is recommended for imaging the plantar fascia. Copyright © 2011 Wiley Periodicals, Inc.

  8. Study on evaluation of construction reliability for engineering project based on fuzzy language operator

    NASA Astrophysics Data System (ADS)

    Shi, Yu-Fang; Ma, Yi-Yi; Song, Ping-Ping

    2018-03-01

    System reliability theory has been a research focus of management science and systems engineering in recent years, and construction reliability is useful for quantitative evaluation of the project management level. Based on reliability theory and the target system of engineering project management, construction reliability is defined. Using fuzzy mathematics and language operators, the value space of construction reliability is divided into seven fuzzy subsets; correspondingly, seven membership functions and fuzzy evaluation intervals are obtained through the operation of the language operator, which provides the method and parameters for evaluating construction reliability. The method is shown to be scientific and reasonable for construction conditions and is a useful attempt at theory and method research on engineering project system reliability.
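
    The paper's actual membership functions are not given in the abstract; the sketch below simply shows one plausible way to partition a [0, 1] reliability value space into seven overlapping triangular fuzzy subsets, with invented linguistic labels.

    ```python
    # Sketch: seven triangular membership functions over the construction-
    # reliability value space [0, 1]. Shapes and labels are illustrative assumptions.
    import numpy as np

    LABELS = ["very low", "low", "fairly low", "medium", "fairly high", "high", "very high"]
    CENTERS = np.linspace(0.0, 1.0, 7)      # peak of each fuzzy subset
    WIDTH = CENTERS[1] - CENTERS[0]         # half-width of each triangle

    def membership(x):
        """Degree of membership of a reliability value x in each of the 7 subsets."""
        return {lab: max(0.0, 1.0 - abs(x - c) / WIDTH) for lab, c in zip(LABELS, CENTERS)}

    print(membership(0.72))   # mostly "fairly high", partly "high"
    ```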

  9. A study on reliability of power customer in distribution network

    NASA Astrophysics Data System (ADS)

    Liu, Liyuan; Ouyang, Sen; Chen, Danling; Ma, Shaohua; Wang, Xin

    2017-05-01

    The existing power supply reliability index system is oriented to the power system and does not consider the actual electricity availability on the customer side. In addition, it cannot reflect outages or customer equipment shutdowns caused by instantaneous interruptions and power quality problems. This paper therefore makes a systematic study of the reliability of power customers. By comparison with power supply reliability, the reliability of the power customer is defined and its evaluation requirements are derived. An index system consisting of seven customer indexes and two contrast indexes is designed to describe the reliability of the power customer in terms of continuity and availability. In order to evaluate the reliability of power customers in distribution networks comprehensively and quantitatively, an evaluation method is proposed based on an improved entropy method and the punishment weighting principle. Practical application has shown that the reliability index system and evaluation method for power customers are reasonable and effective.
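
    The abstract does not spell out the weighting calculation; as a reference point, the sketch below implements the standard entropy weight method on a hypothetical customer-by-index matrix, which the paper's "improved entropy method" presumably refines.

    ```python
    # Sketch: classical entropy weight method for an index system.
    # The data matrix (rows: customers, columns: indexes) is hypothetical.
    import numpy as np

    X = np.array([[0.95, 0.88, 120.0],
                  [0.90, 0.92,  80.0],
                  [0.85, 0.79, 150.0]])

    P = X / X.sum(axis=0)                       # normalise each index column
    k = 1.0 / np.log(X.shape[0])
    entropy = -k * (P * np.log(P)).sum(axis=0)  # information entropy per index
    diversity = 1.0 - entropy                   # degree of divergence
    weights = diversity / diversity.sum()       # entropy weights
    print(weights)
    ```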

  10. Evaluating the Level of Degree Programmes in Higher Education: The Case of Nursing

    ERIC Educational Resources Information Center

    Rexwinkel, Trudy; Haenen, Jacques; Pilot, Albert

    2013-01-01

    The European Quality Assurance system demands that the degree programme level is represented in terms of quantitative outcomes to be valid and reliable. To meet this need the Educational Level Evaluator (ELE) was devised. This conceptually designed procedure with instrumentation aiming to evaluate the level of a degree validly and reliably still…

  11. 40 CFR 795.225 - Dermal pharmacokinetics of DGBE and DGBA.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... this section because they will facilitate the work and improve the reliability of quantitative... for this purpose. (ii) Biotransformation after dermal dosing. Appropriate qualitative and quantitative... tabular form. (2) Evaluation of results. All observed results, quantitative or incidental, shall be...

  12. A Targeted LC-MS/MS Method for the Simultaneous Detection and Quantitation of Egg, Milk, and Peanut Allergens in Sugar Cookies.

    PubMed

    Boo, Chelsea C; Parker, Christine H; Jackson, Lauren S

    2018-01-01

    Food allergy is a growing public health concern, with many individuals reporting allergies to multiple food sources. Compliance with food labeling regulations and prevention of inadvertent cross-contact in manufacturing require the use of reliable methods for the detection and quantitation of allergens in processed foods. In this work, a novel liquid chromatography-tandem mass spectrometry multiple-reaction monitoring method for the detection and quantitation of egg, milk, and peanut allergens was developed and evaluated in an allergen-incurred baked sugar cookie matrix. Method parameters, including sample extraction, concentration, and digestion, were systematically optimized for candidate allergen peptide markers. The optimized method enabled the reliable detection and quantitation of egg, milk, and peanut allergens in sugar cookies, with allergen concentrations as low as 5 ppm of allergen-incurred ingredient.

  13. Reliability and precision of pellet-group counts for estimating landscape-level deer density

    Treesearch

    David S. deCalesta

    2013-01-01

    This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...

  14. Integrated Approach To Design And Analysis Of Systems

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Iverson, David L.

    1993-01-01

    Object-oriented fault-tree representation unifies evaluation of reliability and diagnosis of faults. Programming/fault tree described more fully in "Object-Oriented Algorithm For Evaluation Of Fault Trees" (ARC-12731). Augmented fault tree object contains more information than fault tree object used in quantitative analysis of reliability. Additional information needed to diagnose faults in system represented by fault tree.

  15. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    PubMed

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has left the concepts obscure and unrecognizable. In this study we conducted a content analysis of the evaluation methods used in qualitative healthcare research. We extracted descriptions of four types of evaluation paradigm (validity/credibility, reliability/dependability, objectivity/confirmability, and generalizability/transferability) and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it may not be useful to treat the evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods based on the situation and prior conditions of each study is an important approach for researchers.

  16. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    NASA Astrophysics Data System (ADS)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-01

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the reliability and soft errors of a system-on-chip, the fault tree analysis method was used in this work. The system fault tree was constructed for the Xilinx Zynq-7010 All Programmable SoC, and the soft error rates of its different components were tested using an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as the failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Based on the fault tree analysis of the system-on-chip, the critical blocks and the system reliability were evaluated through qualitative and quantitative analysis.
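
    For context, the reliability figures named above follow from standard constant-failure-rate relations; the sketch below shows those relations with assumed rates, not the values measured for the Zynq-7010 blocks.

    ```python
    # Sketch: basic constant-failure-rate relations for MTTF and steady-state
    # unavailability. The lambda and repair-rate values are assumed.
    failure_rate = 2.0e-6    # lambda, failures per hour (assumed)
    repair_rate = 0.1        # mu, repairs per hour (assumed)

    mttf = 1.0 / failure_rate                                    # mean time to failure
    unavailability = failure_rate / (failure_rate + repair_rate)
    print(f"MTTF = {mttf:.3e} h, unavailability = {unavailability:.3e}")
    ```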

  17. The Application of a Residual Risk Evaluation Technique Used for Expendable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Latimer, John A.

    2009-01-01

    This presentation describes a Residual Risk Evaluation Technique (RRET) developed by the Kennedy Space Center (KSC) Safety and Mission Assurance (S&MA) Launch Services Division. The technique is one of many procedures used by S&MA at KSC to evaluate residual risks for each Expendable Launch Vehicle (ELV) mission. RRET is a straightforward technique that incorporates the proven methodologies of risk management, fault tree analysis, and reliability prediction. RRET derives a system reliability impact indicator from the system baseline reliability and the system residual risk reliability values. The system reliability impact indicator provides a quantitative measure of the reduction in the system baseline reliability due to the identified residual risks associated with the designated ELV mission. An example is discussed to provide insight into the application of RRET.

  18. Further assessment of a method to estimate reliability and validity of qualitative research findings.

    PubMed

    Hinds, P S; Scandrett-Hibden, S; McAulay, L S

    1990-04-01

    The reliability and validity of qualitative research findings are viewed with scepticism by some scientists. This scepticism derives from the belief that qualitative researchers give insufficient attention to estimating the reliability and validity of data, and from the differences between quantitative and qualitative methods of assessing data. The danger of this scepticism is that relevant and applicable research findings will not be used. Our purpose is to describe an evaluative strategy for use with qualitative data, a strategy that is a synthesis of quantitative and qualitative assessment methods. Results of the strategy and factors that influence its use are also described.

  19. Noninvasive identification of the total peripheral resistance baroreflex

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ramakrishna; Toska, Karin; Cohen, Richard J.

    2003-01-01

    We propose two identification algorithms for quantitating the total peripheral resistance (TPR) baroreflex, an important contributor to short-term arterial blood pressure (ABP) regulation. Each algorithm analyzes beat-to-beat fluctuations in ABP and cardiac output, which may both be obtained noninvasively in humans. For a theoretical evaluation, we applied both algorithms to a realistic cardiovascular model. The results contrasted with only one of the algorithms proving to be reliable. This algorithm was able to track changes in the static gains of both the arterial and cardiopulmonary TPR baroreflex. We then applied both algorithms to a preliminary set of human data and obtained contrasting results much like those obtained from the cardiovascular model, thereby making the theoretical evaluation results more meaningful. This study suggests that, with experimental testing, the reliable identification algorithm may provide a powerful, noninvasive means for quantitating the TPR baroreflex. This study also provides an example of the role that models can play in the development and initial evaluation of algorithms aimed at quantitating important physiological mechanisms.

  20. Evaluation of airway protection: Quantitative timing measures versus penetration/aspiration score.

    PubMed

    Kendall, Katherine A

    2017-10-01

    Quantitative measures of swallowing function may improve the reliability and accuracy of modified barium swallow (MBS) study interpretation. Quantitative study analysis has not been widely instituted, however, secondary to concerns about the time required to make the measures and a lack of research demonstrating their impact on MBS interpretation. This study compares the accuracy of the penetration/aspiration (PEN/ASP) scale (an observational visual-perceptual assessment tool) to quantitative measures of airway closure timing relative to the arrival of the bolus at the upper esophageal sphincter in identifying a failure of airway protection during deglutition. Retrospective review of clinical swallowing data from a university-based outpatient clinic. Swallowing data from 426 patients were reviewed. Patients with normal PEN/ASP scores were identified, and the results of quantitative airway closure timing measures for three liquid bolus sizes were evaluated. The incidence of significant airway closure delay with and without a normal PEN/ASP score was determined. Inter-rater reliability for the quantitative measures was calculated. In patients with a normal PEN/ASP score, 33% demonstrated a delay in airway closure on at least one swallow during the MBS study. There was no correlation between PEN/ASP score and airway closure delay. Inter-rater reliability for the quantitative measure of airway closure timing was nearly perfect (intraclass correlation coefficient = 0.973). The use of quantitative measures of swallowing function, in conjunction with traditional visual-perceptual methods of MBS study interpretation, improves the identification of airway closure delay, and hence potential aspiration risk, even when no penetration or aspiration is apparent on the MBS study. Level of evidence: 4. Laryngoscope, 127:2314-2318, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  1. Test-retest reliability of quantitative sensory testing for mechanical somatosensory and pain modulation assessment of masticatory structures.

    PubMed

    Costa, Y M; Morita-Neto, O; de Araújo-Júnior, E N S; Sampaio, F A; Conti, P C R; Bonjardim, L R

    2017-03-01

    Assessing the reliability of medical measurements is a crucial step towards the elaboration of an applicable clinical instrument. Few studies have evaluated the reliability of somatosensory assessment and pain modulation of masticatory structures. This study estimated the test-retest reliability, that is, reliability over time, of the mechanical somatosensory assessment of the anterior temporalis, masseter and temporomandibular joint (TMJ) and of conditioned pain modulation (CPM) using the anterior temporalis as the test site. Twenty healthy women were evaluated in two sessions (1 week apart) by the same examiner. Mechanical detection threshold (MDT), mechanical pain threshold (MPT), wind-up ratio (WUR) and pressure pain threshold (PPT) were assessed on the skin overlying the anterior temporalis, masseter and TMJ of the dominant side. CPM was tested by comparing PPT before and during hand immersion in a hot water bath. ANOVA and intra-class correlation coefficients (ICCs) were applied to the data (α = 5%). The overall ICCs showed acceptable values for the test-retest reliability of the mechanical somatosensory assessment of masticatory structures. The ICC values of 75% of all quantitative sensory measurements were considered fair to excellent (fair = 8.4%, good = 33.3% and excellent = 33.3%). However, the CPM paradigm presented poor reliability (ICC = 0.25). The mechanical somatosensory assessment of the masticatory structures, but not the proposed CPM protocol, can be considered sufficiently reliable over time to evaluate trigeminal sensory function. © 2016 John Wiley & Sons Ltd.
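
    The reliability model used here, a two-way random effects model with an absolute agreement definition, corresponds to ICC(2,1) in the Shrout and Fleiss notation; below is a minimal sketch of that calculation on a hypothetical subjects-by-sessions matrix.

    ```python
    # Sketch: ICC(2,1), two-way random effects, absolute agreement, single measure.
    # The small data matrix (subjects x sessions) is hypothetical.
    import numpy as np

    def icc_2_1(Y):
        n, k = Y.shape
        grand = Y.mean()
        ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()
        ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()
        ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
        msr = ss_rows / (n - 1)              # between-subjects mean square
        msc = ss_cols / (k - 1)              # between-sessions mean square
        mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    Y = np.array([[3.2, 3.0], [4.1, 4.4], [2.5, 2.7], [5.0, 4.6], [3.8, 3.9]])
    print(f"ICC(2,1) = {icc_2_1(Y):.3f}")
    ```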

  2. 76 FR 37620 - Risk-Based Capital Standards: Advanced Capital Adequacy Framework-Basel II; Establishment of a...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-28

    ... systems. E. Quantitative Methods for Comparing Capital Frameworks The NPR sought comment on how the... industry while assessing levels of capital. This commenter points out maintaining reliable comparative data over time could make quantitative methods for this purpose difficult. For example, evaluating asset...

  3. Quantitative nondestructive evaluation: Requirements for tomorrow's reliability

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S.

    1991-01-01

    Quantitative Nondestructive Evaluation (QNDE) is the technology of measurement, analysis, and prediction of the state of material/structural systems for safety, reliability, and mission assurance. QNDE has impact on everyday life from the cars we drive, the planes we fly, the buildings we work or live in, literally to the infrastructure of our world. Here, researchers highlight some of the new sciences and technologies that are part of a safer, cost effective tomorrow. Specific technologies that are discussed are thermal QNDE of aircraft structural integrity, ultrasonic QNDE for materials characterization, and technology spinoffs from aerospace to the medical sector. In each case, examples are given of how new requirements result in enabling measurement technologies, which in turn change the boundaries of design/practice.

  4. Polymer on Top: Current Limits and Future Perspectives of Quantitatively Evaluating Surface Grafting.

    PubMed

    Michalek, Lukas; Barner, Leonie; Barner-Kowollik, Christopher

    2018-03-07

    Well-defined polymer strands covalently tethered onto solid substrates determine the properties of the resulting functional interface. Herein, the current approaches to determining quantitative grafting densities are assessed. Based on a brief introduction to the key theories describing polymer brush regimes, a user's guide is provided to estimating maximum chain coverage and, importantly, to examining the most frequently employed approaches for determining grafting densities, i.e., dry thickness measurements, gravimetric assessment, and swelling experiments. The reliability of these determination methods is estimated by carefully evaluating their assumptions and assessing the stability of the underpinning equations. A practical guide for comparatively and quantitatively evaluating the reliability of a given approach is thus provided, enabling the field to critically judge experimentally determined grafting densities and to avoid reporting grafting densities that fall outside the physically realistic parameter space. The assessment concludes with a perspective on the development of advanced approaches for determining grafting density, in particular single-chain methodologies. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
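
    One of the approaches assessed, the dry thickness measurement, rests on the widely used relation sigma = h * rho * N_A / M_n; a small sketch with hypothetical inputs:

    ```python
    # Sketch: grafting density from dry film thickness, sigma = h * rho * N_A / M_n.
    # Input values are hypothetical.
    N_A = 6.02214076e23      # Avogadro constant, 1/mol

    def grafting_density(h_nm, rho_g_cm3, Mn_g_mol):
        """Chains per nm^2 from dry thickness h, bulk density rho and molar mass M_n."""
        chains_per_cm2 = (h_nm * 1e-7) * rho_g_cm3 * N_A / Mn_g_mol   # nm -> cm
        return chains_per_cm2 * 1e-14                                 # cm^2 -> nm^2

    sigma = grafting_density(h_nm=10.0, rho_g_cm3=1.05, Mn_g_mol=5.0e4)
    print(f"sigma = {sigma:.3f} chains/nm^2")
    ```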

  5. The use and reliability of SymNose for quantitative measurement of the nose and lip in unilateral cleft lip and palate patients.

    PubMed

    Mosmuller, David; Tan, Robin; Mulder, Frans; Bachour, Yara; de Vet, Henrica; Don Griot, Peter

    2016-10-01

    It is essential to have a reliable assessment method in order to compare the results of cleft lip and palate surgery. In this study the computer-based program SymNose, a method for quantitative assessment of the nose and lip, is assessed for usability and reliability. The symmetry of the nose and lip was measured twice, by four observers, in 50 six-year-old patients with complete or incomplete unilateral cleft lip and palate. For the frontal view the asymmetry of the nose and upper lip was evaluated, and for the basal view the asymmetry of the nose and nostrils was evaluated. The mean inter-observer reliability when tracing each image once or twice was 0.70 and 0.75, respectively. Tracing the photographs with two observers and with four observers gave mean inter-observer scores of 0.86 and 0.92, respectively. The mean intra-observer reliability varied between 0.80 and 0.84. SymNose is a practical and reliable tool for the retrospective assessment of large caseloads of 2D photographs of cleft patients for research purposes. Moderate to high single inter-observer reliability was found. In future research with SymNose, reliable outcomes can be achieved by using the average of single tracings by two observers. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  6. A Quantitative Risk Analysis Framework for Evaluating and Monitoring Operational Reliability of Cloud Computing

    ERIC Educational Resources Information Center

    Islam, Muhammad Faysal

    2013-01-01

    Cloud computing offers the advantage of on-demand, reliable and cost-efficient computing solutions without the capital investment and management resources needed to build and maintain in-house data centers and network infrastructures. The scalability of cloud solutions enables consumers to upgrade or downsize their services as needed. In a cloud environment,…

  7. A novel evaluation method for building construction project based on integrated information entropy with reliability theory.

    PubMed

    Bai, Xiao-ping; Zhang, Xi-wei

    2013-01-01

    Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes need to be considered to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes; uses a quantitative method for the cost index and integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes; and combines engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, the paper also presents the detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting the selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computation in building construction projects.

  8. Objectivity and reliability in qualitative analysis: realist, contextualist and radical constructionist epistemologies.

    PubMed

    Madill, A; Jordan, A; Shirley, C

    2000-02-01

    The effect of the individual analyst on research findings can create a credibility problem for qualitative approaches from the perspective of evaluative criteria utilized in quantitative psychology. This paper explicates the ways in which objectivity and reliability are understood in qualitative analysis conducted from within three distinct epistemological frameworks: realism, contextual constructionism, and radical constructionism. It is argued that quality criteria utilized in quantitative psychology are appropriate to the evaluation of qualitative analysis only to the extent that it is conducted within a naive or scientific realist framework. The discussion is illustrated with reference to the comparison of two independent grounded theory analyses of identical material. An implication of this illustration is to identify the potential to develop a radical constructionist strand of grounded theory.

  9. Reliability techniques in the petroleum industry

    NASA Technical Reports Server (NTRS)

    Williams, H. L.

    1971-01-01

    Quantitative reliability evaluation methods used in the Apollo Spacecraft Program are translated into petroleum industry requirements with emphasis on offsetting reliability demonstration costs and limited production runs. Described are the qualitative disciplines applicable, the definitions and criteria that accompany the disciplines, and the generic application of these disciplines to the chemical industry. The disciplines are then translated into proposed definitions and criteria for the industry, into a base-line reliability plan that includes these disciplines, and into application notes to aid in adapting the base-line plan to a specific operation.

  10. [Reliability theory based on quality risk network analysis for Chinese medicine injection].

    PubMed

    Li, Zheng; Kang, Li-Yuan; Fan, Xiao-Hui

    2014-08-01

    A new risk analysis method based on reliability theory is introduced in this paper for the quality risk management of Chinese medicine injection manufacturing plants. Risk events, including both cause and effect events, were represented as nodes in a Bayesian network. The approach thus transforms the results of a failure mode and effects analysis (FMEA) into a Bayesian network platform. With its structure and parameters determined, the network can be used to evaluate system reliability quantitatively with probabilistic analytical approaches. Using network analysis tools such as GeNie and AgenaRisk, the nodes that are most critical to system reliability can be identified. The importance of each node to the system can be quantitatively evaluated by calculating the effect of the node on the overall risk, and a mitigation plan can be determined accordingly to reduce its influence and improve system reliability. Using the Shengmai injection manufacturing plant of SZYY Ltd as a case study, we analyzed the quality risk with both static FMEA analysis and dynamic Bayesian network analysis. The potential risk factors for the quality of Shengmai injection manufacturing were identified with the network analysis platform, and quality assurance actions were further defined to reduce the risk and improve product quality.
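
    To make the idea concrete, the sketch below evaluates a tiny two-cause, one-effect fragment of such a quality-risk network by brute-force enumeration and ranks the cause nodes by how much eliminating them lowers the failure probability; the node names and probabilities are invented, and the actual study used GeNie/AgenaRisk on a full FMEA-derived network.

    ```python
    # Sketch: node-importance calculation on a toy Bayesian-network fragment.
    # All node names and probabilities are hypothetical.
    from itertools import product

    p_cause = {"sterilization_failure": 0.02, "raw_material_defect": 0.05}
    # P(quality failure | sterilization_failure, raw_material_defect)
    cpt = {(False, False): 0.001, (True, False): 0.30,
           (False, True): 0.20,   (True, True): 0.70}

    def p_quality_failure(overrides=None):
        """Marginal P(quality failure); overrides pins a cause to a fixed state."""
        overrides = overrides or {}
        names, total = list(p_cause), 0.0
        for states in product([True, False], repeat=len(names)):
            prob = 1.0
            for name, state in zip(names, states):
                if name in overrides:
                    prob *= 1.0 if state == overrides[name] else 0.0
                else:
                    prob *= p_cause[name] if state else 1.0 - p_cause[name]
            total += prob * cpt[states]
        return total

    baseline = p_quality_failure()
    for name in p_cause:   # importance: risk reduction if this cause is eliminated
        print(name, baseline - p_quality_failure({name: False}))
    ```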

  11. Reliability of Semi-Automated Segmentations in Glioblastoma.

    PubMed

    Huber, T; Alber, G; Bette, S; Boeckh-Behrens, T; Gempt, J; Ringel, F; Alberts, E; Zimmer, C; Bauer, J S

    2017-06-01

    In glioblastoma, quantitative volumetric measurements of contrast-enhancing or fluid-attenuated inversion recovery (FLAIR) hyperintense tumor compartments are needed for an objective assessment of therapy response. The aim of this study was to evaluate the reliability of a semi-automated, region-growing segmentation tool for determining tumor volume in patients with glioblastoma among different users of the software. A total of 320 segmentations of tumor-associated FLAIR changes and contrast-enhancing tumor tissue were performed by different raters (neuroradiologists, medical students, and volunteers). All patients underwent high-resolution magnetic resonance imaging including a 3D-FLAIR and a 3D-MPRage sequence. Segmentations were done using a semi-automated, region-growing segmentation tool. Intra- and inter-rater-reliability were addressed by intra-class-correlation (ICC). Root-mean-square error (RMSE) was used to determine the precision error. Dice score was calculated to measure the overlap between segmentations. Semi-automated segmentation showed a high ICC (> 0.985) for all groups indicating an excellent intra- and inter-rater-reliability. Significant smaller precision errors and higher Dice scores were observed for FLAIR segmentations compared with segmentations of contrast-enhancement. Single rater segmentations showed the lowest RMSE for FLAIR of 3.3 % (MPRage: 8.2 %). Both, single raters and neuroradiologists had the lowest precision error for longitudinal evaluation of FLAIR changes. Semi-automated volumetry of glioblastoma was reliably performed by all groups of raters, even without neuroradiologic expertise. Interestingly, segmentations of tumor-associated FLAIR changes were more reliable than segmentations of contrast enhancement. In longitudinal evaluations, an experienced rater can detect progressive FLAIR changes of less than 15 % reliably in a quantitative way which could help to detect progressive disease earlier.

  12. Characterization and quantitation of polyolefin microplastics in personal-care products using high-temperature gel-permeation chromatography.

    PubMed

    Hintersteiner, Ingrid; Himmelsbach, Markus; Buchberger, Wolfgang W

    2015-02-01

    In recent years, the development of reliable methods for the quantitation of microplastics in different samples, including for evaluating the particles' adverse effects in the marine environment, has become a major concern. Because polyolefins are the most prevalent type of polymer in personal-care products containing microplastics, this study presents a novel approach for their quantitation. The method is suitable for aqueous and hydrocarbon-based products, and includes a rapid sample clean-up involving twofold density separation followed by quantitation with high-temperature gel-permeation chromatography. In contrast with previous procedures, it avoids both the errors caused by weighing after insufficient separation of plastics from the matrix and time-consuming visual sorting. In addition to reliable quantitative results, this investigation provides a comprehensive characterization of the polymer particles isolated from the product matrix, covering size, shape, molecular weight distribution and stabilization. Results for seven different personal-care products are presented. Recoveries of the method were in the range of 92-96%.

  13. Reliability of quantitative EEG (qEEG) measures and LORETA current source density at 30 days.

    PubMed

    Cannon, Rex L; Baldwin, Debora R; Shaw, Tiffany L; Diloreto, Dominic J; Phillips, Sherman M; Scruggs, Annie M; Riehl, Timothy C

    2012-06-14

    There is growing interest in using quantitative EEG and LORETA current source density in clinical and research settings. Importantly, if these indices are to be employed in clinical settings, the reliability of the measures is of great concern. Neuroguide (Applied Neurosciences) is sophisticated software developed for the analysis of power and connectivity measures of the EEG as well as LORETA current source density. To date there are relatively few data evaluating topographical EEG reliability across all 19 channels, and no studies have evaluated reliability for LORETA calculations. We obtained 4-min eyes-closed and eyes-opened EEG recordings at 30-day intervals. The EEG was analyzed in Neuroguide; FFT power, coherence and phase were computed for the traditional frequency bands (delta, theta, alpha and beta), and LORETA current source density was calculated in 1 Hz increments and summed for total power in eight regions of interest (ROIs). In order to obtain a robust measure of reliability we utilized a random effects model with an absolute agreement definition. The results show very good reproducibility for total absolute power and coherence, while phase shows lower reliability coefficients. LORETA current source density shows very good reliability, with an average of 0.81 for ECB and 0.82 for EOB. Similarly, the eight regions of interest show good to very good agreement across time. Implications for future directions and for the use of qEEG and LORETA in clinical populations are discussed. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
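
    The band-power part of such an analysis is straightforward to reproduce; a minimal sketch using Welch's method on a synthetic single-channel signal (Neuroguide's exact FFT parameters are not reproduced here):

    ```python
    # Sketch: absolute band power for the traditional frequency bands from one
    # EEG channel. The signal is synthetic and the parameters are assumed.
    import numpy as np
    from scipy.signal import welch

    fs = 256                                  # sampling rate in Hz (assumed)
    t = np.arange(0, 240, 1 / fs)             # 4-minute recording
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # alpha-dominant toy signal

    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 25)}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        power = np.trapz(psd[mask], freqs[mask])   # integrate PSD over the band
        print(f"{name}: {power:.3f} (arbitrary units)")
    ```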

  14. Object-Oriented Algorithm For Evaluation Of Fault Trees

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1992-01-01

    Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
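
    A minimal sketch of what an object-oriented fault-tree representation can look like, with AND/OR gates evaluated quantitatively under an assumed independence of basic events; it illustrates the general idea rather than the cited algorithm itself.

    ```python
    # Sketch: object-oriented fault tree with AND/OR gates, independent basic events.
    class BasicEvent:
        def __init__(self, name, prob):
            self.name, self.prob = name, prob
        def probability(self):
            return self.prob

    class Gate:
        def __init__(self, name, kind, children):
            self.name, self.kind, self.children = name, kind, children
        def probability(self):
            probs = [c.probability() for c in self.children]
            if self.kind == "AND":            # all children must fail
                p = 1.0
                for q in probs:
                    p *= q
                return p
            p_none = 1.0                      # OR gate: at least one child fails
            for q in probs:
                p_none *= 1.0 - q
            return 1.0 - p_none

    # Hypothetical tree: top event occurs if the sensor fails OR both pumps fail.
    top = Gate("top", "OR", [
        BasicEvent("sensor", 1e-3),
        Gate("pumps", "AND", [BasicEvent("pump_a", 1e-2), BasicEvent("pump_b", 1e-2)]),
    ])
    print(f"P(top event) = {top.probability():.3e}")
    ```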

  15. Evaluation of background parenchymal enhancement on breast MRI: a systematic review

    PubMed Central

    Signori, Alessio; Valdora, Francesca; Rossi, Federica; Calabrese, Massimo; Durando, Manuela; Mariscotto, Giovanna; Tagliafico, Alberto

    2017-01-01

    Objective: To perform a systematic review of the methods used for background parenchymal enhancement (BPE) evaluation on breast MRI. Methods: Studies dealing with BPE assessment on breast MRI were retrieved from major medical libraries independently by four reviewers up to 6 October 2015. The keywords used for database searching are “background parenchymal enhancement”, “parenchymal enhancement”, “MRI” and “breast”. The studies were included if qualitative and/or quantitative methods for BPE assessment were described. Results: Of the 420 studies identified, a total of 52 articles were included in the systematic review. 28 studies performed only a qualitative assessment of BPE, 13 studies performed only a quantitative assessment and 11 studies performed both qualitative and quantitative assessments. A wide heterogeneity was found in the MRI sequences and in the quantitative methods used for BPE assessment. Conclusion: A wide variability exists in the quantitative evaluation of BPE on breast MRI. More studies focused on a reliable and comparable method for quantitative BPE assessment are needed. Advances in knowledge: More studies focused on a quantitative BPE assessment are needed. PMID:27925480

  16. A human reliability based usability evaluation method for safety-critical software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, R. L.; Tran, T. Q.; Gertman, D. I.

    2006-07-01

    Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation with the SPAR-H human reliability analysis method. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at a usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis for heuristic evaluation. The method allows one to seamlessly identify and prioritize usability issues (i.e., a higher UEP calls for more immediate fixes). However, the original version of the method required the usability evaluator to assign priority weights to the final UEP, thus allowing the priority of a usability issue to differ among evaluators. The purpose of this paper is to explore an alternative approach to standardize the priority weighting of the UEP in an effort to improve the method's reliability. (authors)
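
    As a very rough sketch of the general idea described above, a nominal error rate scaled by the probabilistic modifiers assigned to violated heuristics, the snippet below uses invented numbers; it is not the published SPAR-H quantification.

    ```python
    # Sketch: usability error probability as nominal rate x heuristic modifiers.
    # The nominal value and the modifiers are assumed for illustration only.
    nominal_error_probability = 1e-3
    heuristic_modifiers = {                  # assumed multipliers for violated heuristics
        "visibility_of_system_status": 5.0,
        "error_prevention": 10.0,
    }

    uep = nominal_error_probability
    for modifier in heuristic_modifiers.values():
        uep *= modifier
    print(f"Illustrative UEP: {uep:.3e}")
    ```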

  17. Fatty degeneration of the rotator cuff muscles on pre- and postoperative CT arthrography (CTA): is the Goutallier grading system reliable?

    PubMed

    Lee, Eugene; Choi, Jung-Ah; Oh, Joo Han; Ahn, Soyeon; Hong, Sung Hwan; Chai, Jee Won; Kang, Heung Sik

    2013-09-01

    To retrospectively evaluate fatty degeneration (FD) of the rotator cuff muscles on CT arthrography (CTA) using Goutallier's grading system and quantitative measurements, with comparison between pre- and postoperative states. IRB approval was obtained for this study. Two radiologists independently reviewed, in random order, pre- and postoperative CTAs of 43 patients (24 males and 19 females; mean age, 58.1 years) with 46 shoulders confirmed as full-thickness tears. FD of the supraspinatus, infraspinatus/teres minor, and subscapularis was assessed using Goutallier's system and by quantitative measurement of Hounsfield units (HUs) on sagittal images. Changes in FD grades and HUs were compared between pre- and postoperative CTAs and analyzed with respect to preoperative tear size and postoperative cuff integrity. The correlations between qualitative grades and quantitative measurements and their inter-observer reliabilities were also assessed. There was a statistically significant correlation between FD grades and HU measurements for all muscles on pre- and postoperative CTA (p < 0.05). Inter-observer reliability of fatty degeneration grades was excellent to substantial on both pre- and postoperative CTA for the supraspinatus (0.8685 and 0.8535) and subscapularis muscles (0.7777 and 0.7972), but fair for the infraspinatus/teres minor muscles (0.5791 and 0.5740); however, quantitative Hounsfield unit measurements showed excellent reliability for all muscles (ICC: 0.7950 and 0.9346 for SST, 0.7922 and 0.8492 for SSC, and 0.9254 and 0.9052 for IST/TM). No muscle showed improvement of fatty degeneration after surgical repair on either qualitative or quantitative assessment, and there was no difference in the change of fatty degeneration after surgical repair according to preoperative tear size or postoperative cuff integrity (p > 0.05). The average dose-length product (DLP) was 365.2 mGy·cm (range, 323.8-417.2 mGy·cm) and the estimated average effective dose was 5.1 mSv. Goutallier grades correlated well with the HUs of the rotator cuff muscles. Reliability was excellent for both systems, except for the FD grade of the IST/TM muscles, which may be more reliably assessed using quantitative measurements.

  18. Comment on Hall et al. (2017), "How to Choose Between Measures of Tinnitus Loudness for Clinical Research? A Report on the Reliability and Validity of an Investigator-Administered Test and a Patient-Reported Measure Using Baseline Data Collected in a Phase IIa Drug Trial".

    PubMed

    Sabour, Siamak

    2018-03-08

    The purpose of this letter, in response to Hall, Mehta, and Fackrell (2017), is to provide important knowledge about methodological and statistical issues in assessing the reliability and validity of an audiologist-administered tinnitus loudness matching test and a patient-reported tinnitus loudness rating. The author uses reference textbooks and published articles on the scientific assessment of the validity and reliability of a clinical test to discuss the statistical tests and the methodological approach used in assessing validity and reliability in clinical research. Depending on the type of variable (qualitative or quantitative), well-known statistical estimates can be applied to assess reliability and validity. For qualitative variables, sensitivity, specificity, positive predictive value, negative predictive value, false positive and false negative rates, the positive and negative likelihood ratios, and the odds ratio (i.e., the ratio of true to false results) are the most appropriate estimates for evaluating the validity of a test against a gold standard. For quantitative variables, depending on the distribution of the variable, the Pearson r or Spearman rho can be applied. Diagnostic accuracy (validity) and diagnostic precision (reliability or agreement) are two completely different methodological issues.
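
    The validity estimates listed for qualitative variables all follow from a 2x2 table of the test against the gold standard; a small sketch with hypothetical counts:

    ```python
    # Sketch: validity estimates from a 2x2 table (test vs. gold standard).
    # The counts are hypothetical.
    tp, fp, fn, tn = 40, 10, 5, 45

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                      # positive predictive value
    npv = tn / (tn + fn)                      # negative predictive value
    lr_pos = sensitivity / (1 - specificity)  # likelihood ratio positive
    lr_neg = (1 - sensitivity) / specificity  # likelihood ratio negative
    odds_ratio = (tp * tn) / (fp * fn)        # ratio of true to false results

    print(sensitivity, specificity, ppv, npv, lr_pos, lr_neg, odds_ratio)
    ```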

  19. [Doppler echocardiography of tricuspid insufficiency. Methods of quantification].

    PubMed

    Loubeyre, C; Tribouilloy, C; Adam, M C; Mirode, A; Trojette, F; Lesbre, J P

    1994-01-01

    Evaluation of tricuspid incompetence has benefitted considerably from the development of Doppler ultrasound. In addition to direct analysis of the valves, which provides information about the mechanism involved, this method provides an accurate evaluation, mainly through use of the Doppler mode. Besides new criteria still under evaluation (mainly the convergence zone of the regurgitant jet), some indices are recognised as good quantitative parameters: extension of the regurgitant jet into the right atrium, anterograde tricuspid flow, the laminar nature of the regurgitant flow, and analysis of flow in the supra-hepatic veins. The evaluation remains only semi-quantitative, since calculation of the regurgitation fraction from pulsed Doppler does not seem to be reliable; this accurate semi-quantitative evaluation is made possible by careful and consistent use of all the available criteria. The authors discuss the value of the various evaluation criteria mentioned in the literature and try to define a practical approach.

  20. An Interprofessional Program Evaluation Case Study: Utilizing Multiple Measures To Assess What Matters. AIR 1997 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Delaney, Anne Marie

    This paper reviews the first two years of a model program-evaluation case study which is intended to show: (1) how program evaluation can contribute to academic and professional degree programs; (2) how qualitative and quantitative techniques can be used to produce reliable measures for evaluation studies; and (3) how the role of the institutional…

  1. Using RNA-seq data to select reference genes for normalizing gene expression in apple roots.

    PubMed

    Zhou, Zhe; Cong, Peihua; Tian, Yi; Zhu, Yanmin

    2017-01-01

    Gene expression in apple roots in response to various stress conditions is a less-explored research subject. Reliable reference genes for normalizing quantitative gene expression data have not been carefully investigated. In this study, the suitability of a set of 15 apple genes was evaluated for their potential use as reliable reference genes. The genes were selected based on their low variance of expression in apple root tissues in a recent RNA-seq data set, together with a few previously reported apple reference genes for other tissue types. Four methods, Delta Ct, geNorm, NormFinder and BestKeeper, were used to evaluate their stability in apple root tissues of various genotypes and under different experimental conditions. A small panel of stably expressed genes, MDP0000095375, MDP0000147424, MDP0000233640, MDP0000326399 and MDP0000173025, was recommended for normalizing quantitative gene expression data in apple roots under various abiotic or biotic stresses. When the most stable and least stable reference genes were used for data normalization, significant differences were observed in the expression patterns of two target genes, MdLecRLK5 (MDP0000228426, a gene encoding a lectin receptor-like kinase) and MdMAPK3 (MDP0000187103, a gene encoding a mitogen-activated protein kinase). Our data also indicate that for carefully validated reference genes, a single reference gene is sufficient for reliable normalization of quantitative gene expression. Depending on the experimental conditions, the most suitable reference genes can be specific to the sample of interest for more reliable RT-qPCR data normalization.

  2. Using RNA-seq data to select reference genes for normalizing gene expression in apple roots

    PubMed Central

    Zhou, Zhe; Cong, Peihua; Tian, Yi

    2017-01-01

    Gene expression in apple roots in response to various stress conditions is a less-explored research subject. Reliable reference genes for normalizing quantitative gene expression data have not been carefully investigated. In this study, the suitability of a set of 15 apple genes was evaluated for their potential use as reliable reference genes. The genes were selected based on their low variance of expression in apple root tissues in a recent RNA-seq data set, together with a few previously reported apple reference genes for other tissue types. Four methods, Delta Ct, geNorm, NormFinder and BestKeeper, were used to evaluate their stability in apple root tissues of various genotypes and under different experimental conditions. A small panel of stably expressed genes, MDP0000095375, MDP0000147424, MDP0000233640, MDP0000326399 and MDP0000173025, was recommended for normalizing quantitative gene expression data in apple roots under various abiotic or biotic stresses. When the most stable and least stable reference genes were used for data normalization, significant differences were observed in the expression patterns of two target genes, MdLecRLK5 (MDP0000228426, a gene encoding a lectin receptor-like kinase) and MdMAPK3 (MDP0000187103, a gene encoding a mitogen-activated protein kinase). Our data also indicate that for carefully validated reference genes, a single reference gene is sufficient for reliable normalization of quantitative gene expression. Depending on the experimental conditions, the most suitable reference genes can be specific to the sample of interest for more reliable RT-qPCR data normalization. PMID:28934340

  3. Reference genes for reverse transcription quantitative PCR in canine brain tissue.

    PubMed

    Stassen, Quirine E M; Riemers, Frank M; Reijmerink, Hannah; Leegwater, Peter A J; Penning, Louis C

    2015-12-09

    In the last decade, canine models have been used extensively to study genetic causes of neurological disorders such as epilepsy and Alzheimer's disease and to unravel their pathophysiological pathways. Reverse transcription quantitative polymerase chain reaction is a sensitive and inexpensive method for studying expression levels of genes involved in disease processes. Accurate normalisation with stably expressed, so-called reference genes is crucial for reliable expression analysis. Following the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines, the expression of ten frequently used reference genes, namely YWHAZ, HMBS, B2M, SDHA, GAPDH, HPRT, RPL13A, RPS5, RPS19 and GUSB, was evaluated in seven brain regions (frontal lobe, parietal lobe, occipital lobe, temporal lobe, thalamus, hippocampus and cerebellum) and whole brain of healthy dogs. The stability of expression varied between brain areas. Using the geNorm and NormFinder software, HMBS, GAPDH and HPRT were the most reliable reference genes for whole brain. Furthermore, based on geNorm calculations it was concluded that as few as two to three reference genes are sufficient to obtain reliable normalisation, irrespective of the brain area. Our results amend and extend the limited previously published data on canine brain reference genes. Despite the excellent expression stability of HMBS, GAPDH and HPRT, evaluation of the expression stability of reference genes must be a standard and integral part of experimental design and subsequent data analysis.
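
    For orientation, the geNorm stability measure M used in studies like this one is the average standard deviation of pairwise log-ratios against the other candidate genes; below is a minimal sketch on a hypothetical samples-by-genes matrix (the study itself used the geNorm and NormFinder software on real Cq-derived data).

    ```python
    # Sketch: geNorm expression-stability measure M on hypothetical data.
    # Columns stand for candidate reference genes (e.g. HMBS, GAPDH, HPRT).
    import numpy as np

    expr = np.array([[1.00, 0.95, 1.30],
                     [1.10, 1.05, 0.80],
                     [0.90, 0.92, 1.60],
                     [1.05, 1.00, 0.70]])     # relative expression, samples x genes

    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    for j in range(n_genes):
        # M_j: mean standard deviation of log2 ratios against every other gene
        pairwise_sd = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                       for k in range(n_genes) if k != j]
        print(f"gene {j}: M = {np.mean(pairwise_sd):.3f}")
    ```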

  4. Standardizing evaluation of pQCT image quality in the presence of subject movement: qualitative versus quantitative assessment.

    PubMed

    Blew, Robert M; Lee, Vinson R; Farr, Joshua N; Schiferl, Daniel J; Going, Scott B

    2014-02-01

    Peripheral quantitative computed tomography (pQCT) is an essential tool for assessing bone parameters of the limbs, but subject movement and its impact on image quality remains a challenge to manage. The current approach to determine image viability is by visual inspection, but pQCT lacks a quantitative evaluation. Therefore, the aims of this study were to (1) examine the reliability of a qualitative visual inspection scale and (2) establish a quantitative motion assessment methodology. Scans were performed on 506 healthy girls (9-13 years) at diaphyseal regions of the femur and tibia. Scans were rated for movement independently by three technicians using a linear, nominal scale. Quantitatively, a ratio of movement to limb size (%Move) provided a measure of movement artifact. A repeat-scan subsample (n = 46) was examined to determine %Move's impact on bone parameters. Agreement between measurers was strong (intraclass correlation coefficient = 0.732 for tibia, 0.812 for femur), but greater variability was observed in scans rated 3 or 4, the delineation between repeat and no repeat. The quantitative approach found ≥95% of subjects had %Move <25 %. Comparison of initial and repeat scans by groups above and below 25% initial movement showed significant differences in the >25 % grouping. A pQCT visual inspection scale can be a reliable metric of image quality, but technicians may periodically mischaracterize subject motion. The presented quantitative methodology yields more consistent movement assessment and could unify procedure across laboratories. Data suggest a delineation of 25% movement for determining whether a diaphyseal scan is viable or requires repeat.
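
    The quantitative metric boils down to expressing movement-artifact extent as a percentage of limb size and comparing it against the 25% cut-off; the sketch below assumes both quantities are already available in millimetres, since the abstract does not describe how they are extracted from the scan.

    ```python
    # Sketch: %Move as movement extent relative to limb size, with the 25% cut-off.
    # How the movement extent is measured on the pQCT image is not specified here,
    # so the inputs are simply assumed to be available in mm.
    def percent_move(movement_extent_mm, limb_diameter_mm):
        return 100.0 * movement_extent_mm / limb_diameter_mm

    pm = percent_move(movement_extent_mm=6.0, limb_diameter_mm=30.0)  # hypothetical scan
    print(f"%Move = {pm:.1f}% -> {'repeat scan' if pm > 25.0 else 'viable'}")
    ```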

  5. Standardizing Evaluation of pQCT Image Quality in the Presence of Subject Movement: Qualitative vs. Quantitative Assessment

    PubMed Central

    Blew, Robert M.; Lee, Vinson R.; Farr, Joshua N.; Schiferl, Daniel J.; Going, Scott B.

    2013-01-01

    Purpose Peripheral quantitative computed tomography (pQCT) is an essential tool for assessing bone parameters of the limbs, but subject movement and its impact on image quality remains a challenge to manage. The current approach to determine image viability is by visual inspection, but pQCT lacks a quantitative evaluation. Therefore, the aims of this study were to (1) examine the reliability of a qualitative visual inspection scale, and (2) establish a quantitative motion assessment methodology. Methods Scans were performed on 506 healthy girls (9–13 yr) at diaphyseal regions of the femur and tibia. Scans were rated for movement independently by three technicians using a linear, nominal scale. Quantitatively, a ratio of movement to limb size (%Move) provided a measure of movement artifact. A repeat-scan subsample (n=46) was examined to determine %Move's impact on bone parameters. Results Agreement between measurers was strong (ICC = 0.732 for tibia, 0.812 for femur), but greater variability was observed in scans rated 3 or 4, the delineation between repeat and no repeat. The quantitative approach found ≥95% of subjects had %Move <25%. Comparison of initial and repeat scans by groups above and below 25% initial movement showed significant differences in the >25% grouping. Conclusions A pQCT visual inspection scale can be a reliable metric of image quality but technicians may periodically mischaracterize subject motion. The presented quantitative methodology yields more consistent movement assessment and could unify procedure across laboratories. Data suggest a delineation of 25% movement for determining whether a diaphyseal scan is viable or requires repeat. PMID:24077875

  6. An assessment of the reliability of quantitative genetics estimates in study systems with high rate of extra-pair reproduction and low recruitment.

    PubMed

    Bourret, A; Garant, D

    2017-03-01

    Quantitative genetics approaches, and particularly animal models, are widely used to assess the genetic (co)variance of key fitness related traits and infer adaptive potential of wild populations. Despite the importance of precision and accuracy of genetic variance estimates and their potential sensitivity to various ecological and population specific factors, their reliability is rarely tested explicitly. Here, we used simulations and empirical data collected from an 11-year study on tree swallow (Tachycineta bicolor), a species showing a high rate of extra-pair paternity and a low recruitment rate, to assess the importance of identity errors, structure and size of the pedigree on quantitative genetic estimates in our dataset. Our simulations revealed an important lack of precision in heritability and genetic-correlation estimates for most traits, a low power to detect significant effects and important identifiability problems. We also observed a large bias in heritability estimates when using the social pedigree instead of the genetic one (deflated heritabilities) or when not accounting for an important cause of resemblance among individuals (for example, permanent environment or brood effect) in model parameterizations for some traits (inflated heritabilities). We discuss the causes underlying the low reliability observed here and why they are also likely to occur in other study systems. Altogether, our results re-emphasize the difficulties of generalizing quantitative genetic estimates reliably from one study system to another and the importance of reporting simulation analyses to evaluate these important issues.

  7. Computer-aided analysis with Image J for quantitatively assessing psoriatic lesion area.

    PubMed

    Sun, Z; Wang, Y; Ji, S; Wang, K; Zhao, Y

    2015-11-01

    Body surface area is important in determining the severity of psoriasis. However, an objective, reliable, and practical method for this purpose is still needed. We performed a computer image analysis (CIA) of the psoriatic area using the ImageJ freeware to determine whether this method could be used for objective evaluation of psoriatic area. Fifteen psoriasis patients were randomized to be treated with adalimumab or placebo in a clinical trial. At each visit, the psoriasis area of each body site was estimated by two physicians (E-method), and standard photographs were taken. The psoriasis area in the pictures was assessed with CIA using semi-automatic threshold selection (T-method) or manual selection (M-method, gold standard). The results assessed by the three methods were analyzed, with reliability and affecting factors evaluated. Both the T- and E-methods correlated strongly with the M-method, and the T-method had a slightly stronger correlation with the M-method. Both the T- and E-methods had good consistency between the evaluators. All three methods were able to detect the change in the psoriatic area after treatment, although the E-method tended to overestimate. CIA with the ImageJ freeware is reliable and practicable for quantitatively assessing the lesional area of psoriasis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
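
    The threshold-based measurement idea (the T-method) boils down to counting pixels whose redness exceeds a chosen cutoff. The sketch below illustrates that idea in Python under stated assumptions: the redness index, the threshold value, and the synthetic image are illustrative stand-ins, not the thresholds used in the study or in ImageJ.

```python
# A minimal sketch of threshold-based lesion area estimation, assuming a calibrated
# RGB photograph is already loaded as a NumPy array scaled 0-255.
import numpy as np

def lesion_area_fraction(rgb, threshold=0.15):
    """Fraction of imaged pixels whose redness exceeds a chosen threshold."""
    rgb = rgb.astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    redness = r - (g + b) / 2.0        # simple red-dominance index (assumed)
    lesion_mask = redness > threshold  # semi-automatic threshold selection
    return lesion_mask.mean()          # proportion of the imaged area

# Illustrative call on a synthetic 100x100 image:
image = np.random.default_rng(1).uniform(0, 255, size=(100, 100, 3))
print(f"Estimated lesion area fraction: {lesion_area_fraction(image):.3f}")
```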

  8. Development of the quantitative indicator of abdominal examination for clinical application: a pilot study.

    PubMed

    Ko, Seok-Jae; Lee, Hyunju; Kim, Seul-Ki; Kim, Minji; Kim, Jinsung; Lee, Beom-Joon; Park, Jae-Woo

    2015-06-01

    Abdominal examination (AE) is the evaluation of the status of illness by examining the abdominal region in traditional Korean medicine (TKM). Although AE is currently considered an important diagnostic method in TKM owing to its clinical usage, no studies have been conducted to objectively assess its accuracy and develop standards. Twelve healthy subjects and 21 patients with functional dyspepsia participated in this study. The patients were classified into an epigastric discomfort group (n=11) and an epigastric discomfort with tenderness group (n=10) according to the clinical diagnosis by AE. After evaluating the subjective epigastric discomfort in all subjects, two independent clinicians measured the pressure pain threshold (PPT) twice at an acupoint (CV 14) using an algometer. We then assessed the interrater and intrarater reliability of the PPT measurements and evaluated the validity (sensitivity and specificity) via a receiver operating characteristic plot and an optimal cutoff value. The results of the interrater reliability test showed a very strong correlation (correlation coefficient range: 0.82-0.91). The results of the intrarater reliability test also showed a higher than average correlation (intraclass correlation coefficient: 0.58-0.70). The optimal cutoff value of PPT in the epigastric area was 1.8 kg/cm² with 100% sensitivity and 54.54% specificity. PPT measurements in the epigastric area with an algometer demonstrated high reliability and validity for AE, which makes this approach potentially useful in clinical applications as a new quantitative measurement in TKM.
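
    Deriving an optimal cutoff from a receiver operating characteristic plot is commonly done with the Youden index. The sketch below shows that approach under stated assumptions: the `ppt` readings and `tender` labels are synthetic placeholders, not the study data, and the Youden criterion is one of several possible cutoff rules.

```python
# A minimal sketch of choosing a pressure-pain-threshold cutoff via the Youden index.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
ppt = np.concatenate([rng.normal(1.5, 0.5, 10),   # tenderness group (lower PPT assumed)
                      rng.normal(2.5, 0.8, 11)])  # discomfort-only group
tender = np.concatenate([np.ones(10), np.zeros(11)])

# Lower PPT indicates tenderness, so score = -ppt makes "higher = more positive".
fpr, tpr, thresholds = roc_curve(tender, -ppt)
youden = tpr - fpr
best = np.argmax(youden)
cutoff = -thresholds[best]
print(f"Optimal cutoff ~= {cutoff:.2f} kg/cm^2, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```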

  9. Nondestructive Evaluation for Aerospace Composites

    NASA Technical Reports Server (NTRS)

    Leckey, Cara; Cramer, Elliott; Perey, Daniel

    2015-01-01

    Nondestructive evaluation (NDE) techniques are important for enabling NASA's missions in space exploration and aeronautics. The expanded and continued use of composite materials for aerospace components and vehicles leads to a need for advanced NDE techniques capable of quantitatively characterizing damage in composites. Quantitative damage detection techniques help to ensure safety, reliability and durability of space and aeronautic vehicles. This presentation will give a broad outline of NASA's range of technical work and an overview of the NDE research performed in the Nondestructive Evaluation Sciences Branch at NASA Langley Research Center. The presentation will focus on ongoing research in the development of NDE techniques for composite materials and structures, including development of automated data processing tools to turn NDE data into quantitative location and sizing results. Composites focused NDE research in the areas of ultrasonics, thermography, X-ray computed tomography, and NDE modeling will be discussed.

  10. Regional reliability of quantitative signal targeting with alternating radiofrequency (STAR) labeling of arterial regions (QUASAR).

    PubMed

    Tatewaki, Yasuko; Higano, Shuichi; Taki, Yasuyuki; Thyreau, Benjamin; Murata, Takaki; Mugikura, Shunji; Ito, Daisuke; Takase, Kei; Takahashi, Shoki

    2014-01-01

    Quantitative signal targeting with alternating radiofrequency labeling of arterial regions (QUASAR) is a recent spin labeling technique that could improve the reliability of brain perfusion measurements. Although it is considered reliable for measuring gray matter as a whole, it has never been evaluated regionally. Here we assessed this regional reliability. Using a 3-Tesla Philips Achieva whole-body system, we scanned four times 10 healthy volunteers, in two sessions 2 weeks apart, to obtain QUASAR images. We computed perfusion images and ran a voxel-based analysis within all brain structures. We also calculated mean regional cerebral blood flow (rCBF) within regions of interest configured for each arterial territory distribution. The mean CBF over whole gray matter was 37.74 with intraclass correlation coefficient (ICC) of .70. In white matter, it was 13.94 with an ICC of .30. Voxel-wise ICC and coefficient-of-variation maps showed relatively lower reliability in watershed areas and white matter especially in deeper white matter. The absolute mean rCBF values were consistent with the ones reported from PET, as was the relatively low variability in different feeding arteries. Thus, QUASAR reliability for regional perfusion is high within gray matter, but uncertain within white matter. © 2014 The Authors. Journal of Neuroimaging published by the American Society of Neuroimaging.
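
    Test-retest ICCs of the kind reported above are usually computed from a subjects-by-sessions matrix with a two-way random-effects model. The sketch below shows an ICC(2,1) calculation under stated assumptions: the exact ICC form used in the study is not given here, and the synthetic CBF values are placeholders, not QUASAR measurements.

```python
# A minimal sketch of an absolute-agreement ICC(2,1) for repeated CBF estimates,
# assuming `scans` is an (n_subjects x n_sessions) array.
import numpy as np

def icc_2_1(scans):
    """Two-way random-effects, absolute-agreement ICC(2,1)."""
    n, k = scans.shape
    grand = scans.mean()
    row_means = scans.mean(axis=1)
    col_means = scans.mean(axis=0)
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((scans - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)            # between-subjects mean square
    msc = ss_cols / (k - 1)            # between-sessions mean square
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(3)
true_cbf = rng.normal(38, 6, size=(10, 1))          # subject-level gray-matter CBF
scans = true_cbf + rng.normal(0, 3, size=(10, 4))   # four repeated sessions
print(f"ICC(2,1) = {icc_2_1(scans):.2f}")
```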

  11. Regional Reliability of Quantitative Signal Targeting with Alternating Radiofrequency (STAR) Labeling of Arterial Regions (QUASAR)

    PubMed Central

    Tatewaki, Yasuko; Higano, Shuichi; Taki, Yasuyuki; Thyreau, Benjamin; Murata, Takaki; Mugikura, Shunji; Ito, Daisuke; Takase, Kei; Takahashi, Shoki

    2014-01-01

    BACKGROUND AND PURPOSE Quantitative signal targeting with alternating radiofrequency labeling of arterial regions (QUASAR) is a recent spin labeling technique that could improve the reliability of brain perfusion measurements. Although it is considered reliable for measuring gray matter as a whole, it has never been evaluated regionally. Here we assessed this regional reliability. METHODS Using a 3-Tesla Philips Achieva whole-body system, we scanned 10 healthy volunteers four times, in two sessions 2 weeks apart, to obtain QUASAR images. We computed perfusion images and ran a voxel-based analysis within all brain structures. We also calculated mean regional cerebral blood flow (rCBF) within regions of interest configured for each arterial territory distribution. RESULTS The mean CBF over whole gray matter was 37.74, with an intraclass correlation coefficient (ICC) of 0.70. In white matter, it was 13.94 with an ICC of 0.30. Voxel-wise ICC and coefficient-of-variation maps showed relatively lower reliability in watershed areas and white matter, especially in deeper white matter. The absolute mean rCBF values were consistent with those reported from PET, as was the relatively low variability in different feeding arteries. CONCLUSIONS QUASAR reliability for regional perfusion is high within gray matter, but uncertain within white matter. PMID:25370338

  12. Reliability of digital reactor protection system based on extenics.

    PubMed

    Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng

    2016-01-01

    After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has become a widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to accurately evaluate the reliability of a digital RPS. Methods based on probability estimation carry uncertainties, cannot reflect the reliability status of the RPS dynamically, and do not support maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the digital (safety-critical) RPS, by which the relationship between the reliability and response time of the RPS is constructed. As an example, the reliability of the RPS for a CPR1000 NPP is modeled and analyzed by the proposed method. The results show that the proposed method is capable of estimating the RPS reliability effectively and provides support for maintenance and troubleshooting of the digital RPS.

  13. Quantitative methods in assessment of neurologic function.

    PubMed

    Potvin, A R; Tourtellotte, W W; Syndulko, K; Potvin, J

    1981-01-01

    Traditionally, neurologists have emphasized qualitative techniques for assessing results of clinical trials. However, in recent years qualitative evaluations have been increasingly augmented by quantitative tests for measuring neurologic functions pertaining to mental state, strength, steadiness, reactions, speed, coordination, sensation, fatigue, gait, station, and simulated activities of daily living. Quantitative tests have long been used by psychologists for evaluating asymptomatic function, assessing human information processing, and predicting proficiency in skilled tasks; however, their methodology has never been directly assessed for validity in a clinical environment. In this report, relevant contributions from the literature on asymptomatic human performance and that on clinical quantitative neurologic function are reviewed and assessed. While emphasis is focused on tests appropriate for evaluating clinical neurologic trials, evaluations of tests for reproducibility, reliability, validity, and examiner training procedures, and for effects of motivation, learning, handedness, age, and sex are also reported and interpreted. Examples of statistical strategies for data analysis, scoring systems, data reduction methods, and data display concepts are presented. Although investigative work still remains to be done, it appears that carefully selected and evaluated tests of sensory and motor function should be an essential factor for evaluating clinical trials in an objective manner.

  14. Quantitative Nondestructive Evaluation

    DTIC Science & Technology

    1979-10-01

    …reliability has been discussed by a number of researchers, including Pachman, et al. [25,28], Hastings [29], Ehret [30], and Kaplan and Reiman [31] (Kaplan, M.P. and Reiman, J.A., "Use of Fracture Mechanics in Estimating Structural Life and Inspection Intervals").

  15. An Examination of the Predictive Relationships of Self-Evaluation Capacity and Staff Competency on Strategic Planning in Hong Kong Aided Secondary Schools

    ERIC Educational Resources Information Center

    Cheng, Eric C. K.

    2011-01-01

    This article aims to examine the predictive relationships of self-evaluation capacity and staff competency on the effect of strategic planning in aided secondary schools in Hong Kong. A quantitative questionnaire survey was compiled to collect data from principals of the participating schools. Confirmatory factor analysis and reliability tests…

  16. Differences between genders in colorectal morphology on CT colonography using a quantitative approach: a pilot study.

    PubMed

    Weber, Charles N; Poff, Jason A; Lev-Toaff, Anna S; Levine, Marc S; Zafar, Hanna M

    To explore quantitative differences between genders in morphologic colonic metrics and determine metric reproducibility. Quantitative colonic metrics from 20 male and 20 female CTC datasets were evaluated twice by two readers; all exams were performed after incomplete optical colonoscopy. Intra-/inter-reader reliability was measured with the intraclass correlation coefficient (ICC) and concordance correlation coefficient (CCC). Women had overall decreased colonic volume, increased tortuosity and compactness, and lower sigmoid apex height on CTC compared to men (p<0.0001, all). Quantitative measurements of colonic metrics were highly reproducible (ICC=0.9989 and 0.9970; CCC=0.9945). Quantitative morphologic differences between genders can be reproducibly measured. Copyright © 2017 Elsevier Inc. All rights reserved.
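
    The CCC reported alongside the ICC above is Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic offsets between readers. The sketch below shows that calculation under stated assumptions: `reader1` and `reader2` are synthetic stand-ins for two readers' measurements of one colonic metric.

```python
# A minimal sketch of Lin's concordance correlation coefficient (CCC).
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.cov(x, y, ddof=1)[0, 1]
    return 2 * cov / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

rng = np.random.default_rng(4)
truth = rng.normal(1000, 200, 40)        # e.g. a colonic volume metric (arbitrary units)
reader1 = truth + rng.normal(0, 15, 40)
reader2 = truth + rng.normal(0, 15, 40)
print(f"CCC = {concordance_ccc(reader1, reader2):.4f}")
```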

  17. Reliability of intra-oral quantitative sensory testing (QST) in patients with atypical odontalgia and healthy controls - a multicentre study.

    PubMed

    Baad-Hansen, L; Pigg, M; Yang, G; List, T; Svensson, P; Drangsholt, M

    2015-02-01

    The reliability of a comprehensive intra-oral quantitative sensory testing (QST) protocol has not been examined systematically in patients with chronic oro-facial pain. The aim of the present multicentre study was to examine the test-retest and interexaminer reliability of intra-oral QST measures in terms of absolute values and z-scores as well as within-session coefficients of variation (CV) values in patients with atypical odontalgia (AO) and healthy pain-free controls. Forty-five patients with AO and 68 healthy controls were subjected to bilateral intra-oral gingival QST and unilateral extratrigeminal QST (thenar) on three occasions (twice on 1 day by two different examiners and once approximately 1 week later by one of the examiners). Intra-class correlation coefficients and kappa values for interexaminer and test-retest reliability were computed. Most of the standardised intra-oral QST measures showed fair to excellent interexaminer (9-12 of 13 measures) and test-retest (7-11 of 13 measures) reliability. Furthermore, no robust differences in reliability measures or within-session variability (CV) were detected between patients with AO and the healthy reference group. These reliability results in chronic oro-facial pain patients support earlier suggestions, based on data from healthy subjects, that intra-oral QST is sufficiently reliable for use as part of a comprehensive evaluation of patients with somatosensory disturbances or neuropathic pain in the trigeminal region. © 2014 John Wiley & Sons Ltd.

  18. [Classical and molecular methods for identification and quantification of domestic moulds].

    PubMed

    Fréalle, E; Bex, V; Reboux, G; Roussel, S; Bretagne, S

    2017-12-01

    To study the impact of the constant and inevitable inhalation of moulds, it is necessary to sample, identify and count the spores. Environmental sampling methods can be separated into three categories: surface sampling, which is easy to perform but non-quantitative; air sampling, which is easy to calibrate but provides time-limited information; and dust sampling, which is more representative of long-term exposure to moulds. The sampling strategy depends on the objectives (evaluation of the risk of exposure for individuals; quantification of household contamination; evaluation of the efficacy of remediation). The mould colonies obtained in culture are identified using microscopy, MALDI-TOF, and/or DNA sequencing. Electrostatic dust collectors are an alternative to older methods for identifying and quantifying household mould spores. They are easy to use and relatively cheap. Colony counting should be progressively replaced by quantitative real-time PCR, which is already validated, while waiting for more standardised high-throughput sequencing methods for assessment of mould contamination without technical bias. Despite some technical recommendations for obtaining reliable and comparable results, the huge diversity of environmental moulds, the variable quantity of spores inhaled and the association with other allergens (mites, plants) make the evaluation of their impact on human health difficult. Hence there is a need for reliable and generally applicable quantitative methods. Copyright © 2017 SPLF. Published by Elsevier Masson SAS. All rights reserved.

  19. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography

    PubMed Central

    Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.

    2017-01-01

    Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from (18)F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883

  20. Quantitative Evaluation of the Use of Actigraphy for Neurological and Psychiatric Disorders

    PubMed Central

    Song, Yu; Kwak, Shin; Yoshida, Sohei; Yamamoto, Yoshiharu

    2014-01-01

    Quantitative and objective evaluation of disease severity and/or drug effect is necessary in clinical practice. Wearable accelerometers such as an actigraph enable long-term recording of a patient's movement during activities, and they can be used for quantitative assessment of symptoms due to various diseases. We reviewed some applications of actigraphy with analytical methods that are sufficiently sensitive and reliable to determine the severity of diseases and disorders such as motor and nonmotor disorders like Parkinson's disease, sleep disorders, depression, behavioral and psychological symptoms of dementia (BPSD) in vascular dementia (VD), seasonal affective disorder (SAD), and stroke, as well as the effects of drugs used to treat them. We believe it is possible to develop analytical methods to assess more neurological or psychiatric disorders using actigraphy records. PMID:25214709

  1. Reliability of Various Measurement Stations for Determining Plantar Fascia Thickness and Echogenicity.

    PubMed

    Bisi-Balogun, Adebisi; Cassel, Michael; Mayer, Frank

    2016-04-13

    This study aimed to determine the relative and absolute reliability of ultrasound (US) measurements of the thickness and echogenicity of the plantar fascia (PF) at different measurement stations along its length using a standardized protocol. Twelve healthy subjects (24 feet) were enrolled. The PF was imaged in the longitudinal plane. Subjects were assessed twice to evaluate the intra-rater reliability. A quantitative evaluation of the thickness and echogenicity of the plantar fascia was performed using ImageJ, a digital image analysis and viewer software. Sonographic evaluation of the thickness and echogenicity of the PF showed high relative reliability, with an intraclass correlation coefficient (ICC) of ≥0.88 at all measurement stations. However, the measurement stations for both PF thickness and echogenicity that showed the highest ICCs did not have the highest absolute reliability. Compared to other measurement stations, measuring the PF thickness at 3 cm distal and the echogenicity at a region of interest 1 cm to 2 cm distal from the PF insertion at the medial calcaneal tubercle showed the highest absolute reliability, with the least systematic bias and random error. Also, the reliability was higher using a mean of three measurements compared to a single measurement. To reduce discrepancies in the interpretation of the thickness and echogenicity measurements of the PF, the absolute reliability of the different measurement stations should be considered in clinical practice and research rather than the relative reliability expressed by the ICC.
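
    The distinction drawn above between relative reliability (ICC) and absolute reliability (systematic bias and random error) is often made concrete with Bland-Altman statistics on repeated measurements. The sketch below illustrates that calculation under stated assumptions: `day1` and `day2` are synthetic stand-ins for repeated thickness measurements in millimetres, not the study data.

```python
# A minimal sketch of absolute-reliability statistics for repeated measurements:
# systematic bias (mean difference) and random error (half-width of the 95% limits
# of agreement).
import numpy as np

def bland_altman(day1, day2):
    diff = np.asarray(day1, float) - np.asarray(day2, float)
    bias = diff.mean()                      # systematic bias
    random_error = 1.96 * diff.std(ddof=1)  # random error component
    return bias, random_error

rng = np.random.default_rng(5)
truth = rng.normal(3.5, 0.4, 24)              # 24 feet, "true" PF thickness (mm)
day1 = truth + rng.normal(0.05, 0.15, 24)     # small systematic + random error
day2 = truth + rng.normal(0.00, 0.15, 24)
bias, rand_err = bland_altman(day1, day2)
print(f"bias = {bias:.3f} mm, random error = +/-{rand_err:.3f} mm")
```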

  2. Reliability of Various Measurement Stations for Determining Plantar Fascia Thickness and Echogenicity

    PubMed Central

    Bisi-Balogun, Adebisi; Cassel, Michael; Mayer, Frank

    2016-01-01

    This study aimed to determine the relative and absolute reliability of ultrasound (US) measurements of the thickness and echogenicity of the plantar fascia (PF) at different measurement stations along its length using a standardized protocol. Twelve healthy subjects (24 feet) were enrolled. The PF was imaged in the longitudinal plane. Subjects were assessed twice to evaluate the intra-rater reliability. A quantitative evaluation of the thickness and echogenicity of the plantar fascia was performed using ImageJ, a digital image analysis and viewer software. Sonographic evaluation of the thickness and echogenicity of the PF showed high relative reliability, with an intraclass correlation coefficient (ICC) of ≥0.88 at all measurement stations. However, the measurement stations for both PF thickness and echogenicity that showed the highest ICCs did not have the highest absolute reliability. Compared to other measurement stations, measuring the PF thickness at 3 cm distal and the echogenicity at a region of interest 1 cm to 2 cm distal from the PF insertion at the medial calcaneal tubercle showed the highest absolute reliability, with the least systematic bias and random error. Also, the reliability was higher using a mean of three measurements compared to a single measurement. To reduce discrepancies in the interpretation of the thickness and echogenicity measurements of the PF, the absolute reliability of the different measurement stations should be considered in clinical practice and research rather than the relative reliability expressed by the ICC. PMID:27089369

  3. Reliability and validity of a quantitative color scale to evaluate masticatory performance using color-changeable chewing gum.

    PubMed

    Hama, Yohei; Kanazawa, Manabu; Minakuchi, Shunsuke; Uchida, Tatsuro; Sasaki, Yoshiyuki

    2014-03-19

    In the present study, we developed a novel color scale for visual assessment, conforming to theoretical color changes of a gum, to evaluate masticatory performance; moreover, we investigated the reliability and validity of this evaluation method using the color scale. Ten participants (aged 26-30 years) with natural dentition chewed the gum for various numbers of chewing strokes. Changes in color were measured using a colorimeter, and then linear regression expressions that represented changes in gum color were derived. The color scale was developed using these regression expressions. Thirty-two chewed gums were evaluated using the colorimeter and were assessed three times using the color scale by six dentists aged 25-27 (mean, 25.8) years, six preclinical dental students aged 21-23 (mean, 22.2) years, and six elderly individuals aged 68-84 (mean, 74.0) years. The intrarater and interrater reliability of the evaluations was assessed using intraclass correlation coefficients. Validity of the method compared with the colorimeter was assessed using Spearman's rank correlation coefficient. All intraclass correlation coefficients were >0.90, and Spearman's rank correlation coefficients were >0.95 in all groups. These results indicate that the evaluation method of the color-changeable chewing gum using the newly developed color scale is reliable and valid.

  4. Evaluating the Performance of the IEEE Standard 1366 Method for Identifying Major Event Days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph H.; LaCommare, Kristina Hamachi; Sohn, Michael D.

    IEEE Standard 1366 offers a method for segmenting reliability performance data to isolate the effects of major events from the underlying year-to-year trends in reliability. Recent analysis by the IEEE Distribution Reliability Working Group (DRWG) has found that reliability performance of some utilities differs from the expectations that helped guide the development of the Standard 1366 method. This paper proposes quantitative metrics to evaluate the performance of the Standard 1366 method in identifying major events and in reducing year-to-year variability in utility reliability. The metrics are applied to a large sample of utility-reported reliability data to assess performance of the method with alternative specifications that have been considered by the DRWG. We find that none of the alternatives perform uniformly 'better' than the current Standard 1366 method. That is, none of the modifications uniformly lowers the year-to-year variability in System Average Interruption Duration Index without major events. Instead, for any given alternative, while it may lower the value of this metric for some utilities, it also increases it for other utilities (sometimes dramatically). Thus, we illustrate some of the trade-offs that must be considered in using the Standard 1366 method and highlight the usefulness of the metrics we have proposed in conducting these evaluations.
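
    The major event day segmentation referred to above is commonly described as the "2.5 beta" method: fit a log-normal distribution to daily SAIDI values and flag days that exceed exp(alpha + 2.5*beta). The sketch below illustrates that thresholding under stated assumptions; the formula is quoted from general descriptions of the standard, not from this paper, and `daily_saidi` is synthetic utility data.

```python
# A short sketch of the "2.5 beta" major event day test as commonly described.
import numpy as np

def major_event_days(daily_saidi):
    """Return a boolean mask of major event days and the threshold T_MED."""
    values = np.asarray(daily_saidi, float)
    logs = np.log(values[values > 0])        # zero-SAIDI days excluded from the fit
    alpha, beta = logs.mean(), logs.std(ddof=1)
    threshold = np.exp(alpha + 2.5 * beta)   # T_MED
    return values > threshold, threshold

rng = np.random.default_rng(6)
daily_saidi = rng.lognormal(mean=0.5, sigma=0.8, size=5 * 365)  # five years of daily SAIDI
daily_saidi[::400] *= 20                                        # inject a few storm days
mask, t_med = major_event_days(daily_saidi)
print(f"T_MED = {t_med:.1f} minutes; {mask.sum()} major event days flagged")
```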

  5. Networked Resources, Assessment and Collection Development

    ERIC Educational Resources Information Center

    Samson, Sue; Derry, Sebastian; Eggleston, Holly

    2004-01-01

    This project provides a critical evaluation of networked resources as they relate to the library's collection development policy, identifies areas of the curriculum not well represented, establishes a reliable method of assessing usage across all resources, and develops a framework of quantitative data for collection development decision making.

  6. Concurrent validation and reliability of digital image analysis of granulation tissue color for clinical pressure ulcers.

    PubMed

    Iizaka, Shinji; Sugama, Junko; Nakagami, Gojiro; Kaitani, Toshiko; Naito, Ayumi; Koyanagi, Hiroe; Matsuo, Junko; Kadono, Takafumi; Konya, Chizuko; Sanada, Hiromi

    2011-01-01

    Granulation tissue color is one indicator for pressure ulcer (PU) assessment. However, it entails a subjective evaluation only, and quantitative methods have not been established. We developed color indicators from digital image analysis and investigated their concurrent validity and reliability for clinical PUs. A cross-sectional study was conducted on 47 patients with 55 full-thickness PUs. After color calibration, a wound photograph was converted into three images representing red color: the erythema index (EI), a modified erythema index with additional color calibration (granulation red index [GRI]), and a*, which represents the artificially created red-green axis of the L*a*b* color space. The mean intensity of the granulation tissue region and the percentage of pixels exceeding the optimal cutoff intensity (% intensity) were calculated. Mean GRI (ρ=0.39, p=0.007) and a* (ρ=0.55, p<0.001), as well as their % intensity indicators, showed positive correlations with a* measured by a tristimulus colorimeter, but the erythema index did not. They were correlated with hydroxyproline concentration in wound fluid, healthy granulation tissue area, and blood hemoglobin level. Intra- and interrater reliability of the indicator calculation using both GRI and a* had an intraclass correlation coefficient >0.9. GRI and a* from digital image analysis can quantitatively evaluate granulation tissue color of clinical PUs. © 2011 by the Wound Healing Society.

  7. 78 FR 63036 - Transmission Planning Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-23

    ... blend of specific quantitative and qualitative parameters for the permissible use of planned non... circumstances, Reliability Standard TPL-001-4 provides a blend of specific quantitative and qualitative... considerations, such as costs and alternatives, guards against a determination based solely on a quantitative...

  8. Reliability analysis and fault-tolerant system development for a redundant strapdown inertial measurement unit. [inertial platforms

    NASA Technical Reports Server (NTRS)

    Motyka, P.

    1983-01-01

    A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27-state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate, both the gyro and accelerometer failure rates together, false alarms, probability of failure detection, probability of failure isolation, probability of damage effects, and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
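
    A Markov evaluation model of this kind propagates state probabilities over the mission time and reads reliability off as the probability of not being in a failed state. The sketch below shows that mechanic on a reduced three-state model under stated assumptions: the failure rate, coverage, sensor count, and mission times are illustrative, not parameters of the paper's 27-state model.

```python
# A minimal sketch of a continuous-time Markov reliability evaluation, reduced to
# three states for illustration (fully operational, fail-operational after one
# detected failure, system failed).
import numpy as np
from scipy.linalg import expm

lam = 1e-4        # per-hour sensor failure rate (assumed)
coverage = 0.98   # probability a failure is detected and isolated (assumed)
n_sensors = 6

# Generator matrix: row i holds transition rates from state i to each state,
# with the diagonal set so each row sums to zero.
Q = np.array([
    [-n_sensors * lam,  n_sensors * lam * coverage, n_sensors * lam * (1 - coverage)],
    [0.0,              -(n_sensors - 1) * lam,      (n_sensors - 1) * lam],
    [0.0,               0.0,                        0.0],   # failed state is absorbing
])

p0 = np.array([1.0, 0.0, 0.0])                # start fully operational
for t in (10.0, 100.0, 1000.0):               # mission times in hours
    p_t = p0 @ expm(Q * t)                    # state probabilities at time t
    print(f"t = {t:6.0f} h  reliability = {1.0 - p_t[2]:.6f}")
```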

  9. Quantitative evaluation of the viscoelastic properties of the ankle joint complex in patients suffering from ankle sprain by the anterior drawer test.

    PubMed

    Lin, Che-Yu; Shau, Yio-Wha; Wang, Chung-Li; Chai, Huei-Ming; Kang, Jiunn-Horng

    2013-06-01

    Biological tissues such as ligaments exhibit viscoelastic behaviours. Injury to the ligament may induce changes in these viscoelastic properties, and these changes could serve as biomarkers to detect the injury. In the present study, a novel instrument was developed to non-invasively quantify the viscoelastic properties of the ankle in vivo by the anterior drawer test. The purpose of the study was to investigate the reliability of the instrument and to compare the viscoelastic properties of the ankle between patients suffering from ankle sprain and controls. Eight patients and eight controls participated in the present study. The reliability test was performed on three randomly chosen subjects. In the patient and control tests, both ankles of each subject were tested to evaluate the viscoelastic properties of the ankle. The viscosity index was defined for quantitatively evaluating the viscosity of the ankle; a greater viscosity index was associated with lower viscosity. Injured and uninjured ankles of patients and both ankles of controls were compared. The instrument exhibited excellent test-retest reliability (r > 0.9). Injured ankles exhibited significantly less viscosity than uninjured ankles, since injured ankles of patients had a significantly higher viscosity index (8,148 ± 5,266) compared with uninjured ankles of patients (948 ± 617; p = 0.008) and controls (1,326 ± 613; p < 0.001). The study revealed that the viscoelastic properties of the ankle can serve as sensitive and useful clinical biomarkers to differentiate between injured and uninjured ankles. The method may provide a clinical examination for objectively evaluating lateral ankle ligament injuries.

  10. Inter-rater reliability of motor unit number estimates and quantitative motor unit analysis in the tibialis anterior muscle.

    PubMed

    Boe, S G; Dalton, B H; Harwood, B; Doherty, T J; Rice, C L

    2009-05-01

    To establish the inter-rater reliability of decomposition-based quantitative electromyography (DQEMG) derived motor unit number estimates (MUNEs) and quantitative motor unit (MU) analysis. Using DQEMG, two examiners independently obtained a sample of needle and surface-detected motor unit potentials (MUPs) from the tibialis anterior muscle from 10 subjects. Coupled with a maximal M wave, surface-detected MUPs were used to derive a MUNE for each subject and each examiner. Additionally, size-related parameters of the individual MUs were obtained following quantitative MUP analysis. Test-retest MUNE values were similar with high reliability observed between examiners (ICC=0.87). Additionally, MUNE variability from test-retest as quantified by a 95% confidence interval was relatively low (+/-28 MUs). Lastly, quantitative data pertaining to MU size, complexity and firing rate were similar between examiners. MUNEs and quantitative MU data can be obtained with high reliability by two independent examiners using DQEMG. Establishing the inter-rater reliability of MUNEs and quantitative MU analysis using DQEMG is central to the clinical applicability of the technique. In addition to assessing response to treatments over time, multiple clinicians may be involved in the longitudinal assessment of the MU pool of individuals with disorders of the central or peripheral nervous system.
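
    The description above (surface-detected MUPs coupled with a maximal M wave) implies the standard ratio-based MUNE derivation: divide the size of the maximal M wave by the mean size of the surface-detected MUPs. The sketch below shows that ratio under stated assumptions; whether DQEMG uses negative-peak amplitude or area as the size measure is not stated here, so the values are illustrative.

```python
# A minimal sketch of a ratio-based motor unit number estimate (MUNE).
import numpy as np

def motor_unit_number_estimate(m_wave_size, surface_mup_sizes):
    """MUNE = maximal M-wave size / mean surface-detected MUP size."""
    return m_wave_size / np.mean(surface_mup_sizes)

m_wave_negative_peak = 6500.0                                   # e.g. microvolts (assumed)
surface_mups = np.array([38.0, 52.5, 41.2, 60.3, 47.8, 55.1])   # sampled S-MUP sizes
print(f"MUNE ~= {motor_unit_number_estimate(m_wave_negative_peak, surface_mups):.0f}")
```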

  11. Advancing Usability Evaluation through Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; David I. Gertman

    2005-07-01

    This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis for heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
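
    The core idea above, treating heuristics as performance shaping factors that modify a nominal error probability, can be sketched as a simple multiplicative model. The values below are illustrative assumptions, not calibrated SPAR-H multipliers, and the heuristic names are placeholders.

```python
# A minimal sketch: multiply a nominal error probability by heuristic-derived
# performance shaping factor (PSF) multipliers to obtain a usability error
# probability (UEP). All numbers are illustrative assumptions.
nominal_error_probability = 0.001     # assumed baseline for a routine task

# Heuristic ratings mapped to assumed multipliers (>1 degrades performance).
psf_multipliers = {
    "visibility_of_system_status": 2.0,   # status feedback is poor
    "match_with_real_world": 1.0,         # nominal
    "error_prevention": 5.0,              # confirmation safeguards missing
}

uep = nominal_error_probability
for multiplier in psf_multipliers.values():
    uep *= multiplier
uep = min(uep, 1.0)                   # a probability cannot exceed 1

print(f"Usability error probability (illustrative): {uep:.4f}")
```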

  12. Validity, reliability, and generalizability in qualitative research

    PubMed Central

    Leung, Lawrence

    2015-01-01

    In general practice, qualitative research contributes as significantly as quantitative research, in particular regarding psycho-social aspects of patient-care, health services provision, policy setting, and health administrations. In contrast to quantitative research, qualitative research as a whole has been constantly critiqued, if not disparaged, by the lack of consensus for assessing its quality and robustness. This article illustrates with five published studies how qualitative research can impact and reshape the discipline of primary care, spiraling out from clinic-based health screening to community-based disease monitoring, evaluation of out-of-hours triage services to provincial psychiatric care pathways model and finally, national legislation of core measures for children's healthcare insurance. Fundamental concepts of validity, reliability, and generalizability as applicable to qualitative research are then addressed with an update on the current views and controversies. PMID:26288766

  13. Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.

    PubMed

    Li, Qiang; Doi, Kunio

    2006-04-01

    Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
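
    The bias the abstract warns about is easy to demonstrate with a small simulation, in the spirit of (but not reproducing) the paper's Monte Carlo experiments: on pure-noise features, resubstitution accuracy is optimistically biased while leave-one-out stays near chance. The classifier choice and sample sizes below are illustrative assumptions.

```python
# A minimal simulation contrasting resubstitution with leave-one-out evaluation.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 20))          # features carry no class information
y = rng.integers(0, 2, size=60)        # random labels, so true accuracy is 0.5

clf = KNeighborsClassifier(n_neighbors=1)
resub = clf.fit(X, y).score(X, y)      # resubstitution: evaluate on training data
loo = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

print(f"resubstitution accuracy = {resub:.2f}  (optimistically biased)")
print(f"leave-one-out accuracy  = {loo:.2f}  (close to chance, as expected)")
```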

  14. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... NUCLEAR REGULATORY COMMISSION [NRC-2011-0109] NUREG/CR-XXXX, Development of Quantitative Software..., ``Development of Quantitative Software Reliability Models for Digital Protection Systems of Nuclear Power Plants... of Risk Analysis, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission...

  15. [Stereovideographic evaluation of the postural geometry of healthy and scoliotic patients].

    PubMed

    De la Huerta, F; Leroux, M A; Zabjek, K F; Coillard, C; Rivard, C H

    1998-01-01

    Idiopathic scoliosis, principally characterised by a deformation of the vertebral column, can also be associated with postural abnormalities. The validity and reliability of current quantitative postural evaluations have not been thoroughly documented; such evaluations are frequently limited by a two-dimensional view of the patient and do not include the whole posture of the patient. The purpose of this study was to 1) quantify the within- and between-session reliability of a stereovideographic Postural Geometry (PG) evaluation and 2) investigate the sensitivity of this technique for the postural evaluation of scoliosis patients. The PG of 14 control subjects and 9 untreated scoliosis patients was evaluated with 5 repeated trials, on two occasions. Postural geometry parameters that describe the position and orientation of the pelvis, trunk, scapular girdle and head were calculated based on the 3-dimensional co-ordinates of anatomical landmarks. The mean between-session variability across all parameters was 12.5 mm and 2.8 degrees, and the mean within-session variability was 5.4 mm and 1.4 degrees. The patient group was heterogeneous, with some noted pathological characteristics. This global stereovideographic postural geometry evaluation appears to demonstrate sufficient reliability and sensitivity for follow-up of the posture of scoliosis patients.

  16. Reliability, validity and feasibility of nail ultrasonography in psoriatic arthritis.

    PubMed

    Arbault, Anaïs; Devilliers, Hervé; Laroche, Davy; Cayot, Audrey; Vabres, Pierre; Maillefert, Jean-Francis; Ornetti, Paul

    2016-10-01

    To determine the feasibility, reliability and validity of nail ultrasonography in psoriatic arthritis as an outcome measure. Pilot prospective single-centre study of eight ultrasonography parameters in B mode and power Doppler concerning the distal interphalangeal (DIP) joint, the matrix, the bed and the nail plate. Intra-observer and inter-observer reliability was evaluated for the seven quantitative parameters (ICC and kappa). Correlations between ultrasonography and clinical variables were sought to assess external validity. Feasibility was assessed by the time taken to carry out the examination and the percentage of missing data. Twenty-seven patients with psoriatic arthritis (age 55.0±16.2 years, disease duration 13.4±9.4 years) were included. Of these, 67% presented nail involvement on ultrasonography vs 37% on physical examination (P<0.05). Reliability was good (ICC and weighted kappa >0.75) for the seven quantitative parameters, except for synovitis of the DIP joint in B mode. Synovitis of the DIP joint revealed by ultrasonography correlated with the total number of clinical synovitis and with Doppler US of the nail (matrix and bed). Doppler US of the matrix correlated with VAS pain but not with the ASDAS-CRP or with clinical enthesitis. No significant correlation was found with US nail thickness. The feasibility and reliability of ultrasonography of the nail in psoriatic arthritis appear to be satisfactory. Among the eight parameters evaluated, power Doppler of the matrix, which correlated with local inflammation (DIP joint and bed) and with VAS pain, could become an interesting outcome measure, provided that it is also sensitive to change. Copyright © 2015 Société française de rhumatologie. Published by Elsevier SAS. All rights reserved.

  17. Human figure drawings in the evaluation of severe adolescent suicidal behavior.

    PubMed

    Zalsman, G; Netanel, R; Fischel, T; Freudenstein, O; Landau, E; Orbach, I; Weizman, A; Pfeffer, C R; Apter, A

    2000-08-01

    To evaluate the reliability of using certain indicators derived from human figure drawings to distinguish between suicidal and nonsuicidal adolescents. Ninety consecutive admissions to an adolescent inpatient unit were assessed. Thirty-nine patients were admitted because of suicidal behavior and 51 for other reasons. All subjects were given the Human Figure Drawing (HFD) test. HFD was evaluated according to the method of Pfeffer and Richman, and the degree of suicidal behavior was rated by the Child Suicide Potential Scale. The internal reliability was satisfactory. HFD indicators correlated significantly with quantitative measures of suicidal behavior; of these indicators specifically, overall impression of the evaluator enabled the prediction of suicidal behavior and the distinction between suicidal and nonsuicidal inpatients (p < .001). A group of graphic indicators derived from a discriminant analysis formed a function, which was able to identify 84.6% of the suicidal and 76.6% of the nonsuicidal adolescents correctly. Many of the items had a regressive quality. The HFD is an example of a simple projective test that may have empirical reliability. It may be useful for the assessment of severe suicidal behavior in adolescents.

  18. Exploring a taxonomy for aggression against women: can it aid conceptual clarity?

    PubMed

    Cook, Sarah; Parrott, Dominic

    2009-01-01

    The assessment of aggression against women is demanding primarily because assessment strategies do not share a common language to describe reliably the wide range of forms of aggression women experience. The lack of a common language impairs efforts to describe these experiences, understand causes and consequences of aggression against women, and develop effective intervention and prevention efforts. This review accomplishes two goals. First, it applies a theoretically and empirically based taxonomy to behaviors assessed by existing measurement instruments. Second, it evaluates whether the taxonomy provides a common language for the field. Strengths of the taxonomy include its ability to describe and categorize all forms of aggression found in existing quantitative measures. The taxonomy also classifies numerous examples of aggression discussed in the literature but notably absent from quantitative measures. Although we use existing quantitative measures as a starting place to evaluate the taxonomy, its use is not limited to quantitative methods. Implications for theory, research, and practice are discussed.

  19. Evaluation of Quantitative Environmental Stress Screening (ESS) Methods. Volume 1

    DTIC Science & Technology

    1991-11-01

    …required information on screening strength from the curve-fitting parameters. The underlying theory and approach taken are discussed in Appendix A. … K.W. Fertig and V.X. Murthy, "Models for Reliability Growth During Burn-in: Theory and Applications," Proceedings 1978 Annual Reliability and…

  20. Validity and reliability of the Paprosky acetabular defect classification.

    PubMed

    Yu, Raymond; Hofstaetter, Jochen G; Sullivan, Thomas; Costi, Kerry; Howie, Donald W; Solomon, Lucian B

    2013-07-01

    The Paprosky acetabular defect classification is widely used but has not been appropriately validated. Reliability of the Paprosky system has not been evaluated in combination with standardized techniques of measurement and scoring. This study evaluated the reliability, teachability, and validity of the Paprosky acetabular defect classification. Preoperative radiographs from a random sample of 83 patients undergoing 85 acetabular revisions were classified by four observers, and their classifications were compared with quantitative intraoperative measurements. Teachability of the classification scheme was tested by dividing the four observers into two groups. The observers in Group 1 underwent three teaching sessions; those in Group 2 underwent one session and the influence of teaching on the accuracy of their classifications was ascertained. Radiographic evaluation showed statistically significant relationships with intraoperative measurements of anterior, medial, and superior acetabular defect sizes. Interobserver reliability improved substantially after teaching and did not improve without it. The weighted kappa coefficient went from 0.56 at Occasion 1 to 0.79 after three teaching sessions in Group 1 observers, and from 0.49 to 0.65 after one teaching session in Group 2 observers. The Paprosky system is valid and shows good reliability when combined with standardized definitions of radiographic landmarks and a structured analysis. Level II, diagnostic study. See the Guidelines for Authors for a complete description of levels of evidence.
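
    Interobserver agreement on an ordinal grading scheme such as the Paprosky classification is typically summarized with a weighted kappa, as reported above. The sketch below shows that computation under stated assumptions: the two observers' grades are synthetic placeholders mapped onto the ordered 1/2A/2B/2C/3A/3B scale, and linear weighting is one of several reasonable choices.

```python
# A minimal sketch of a weighted Cohen's kappa between two observers' ordinal grades.
from sklearn.metrics import cohen_kappa_score

grades = {"1": 0, "2A": 1, "2B": 2, "2C": 3, "3A": 4, "3B": 5}
observer1 = ["1", "2A", "2B", "2B", "3A", "2C", "3B", "2A", "1", "2C"]
observer2 = ["1", "2A", "2B", "2C", "3A", "2B", "3A", "2A", "2A", "2C"]

kappa_w = cohen_kappa_score(
    [grades[g] for g in observer1],
    [grades[g] for g in observer2],
    weights="linear",   # penalize disagreements by their ordinal distance
)
print(f"weighted kappa = {kappa_w:.2f}")
```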

  1. Using RNA-Seq data to select reference genes for normalizing gene expression in apple roots

    USDA-ARS?s Scientific Manuscript database

    Gene expression in apple roots in response to various stress conditions is a less-explored research subject. Reliable reference genes for normalizing quantitative gene expression data have not been carefully investigated. In this study, the suitability of a set of 15 apple genes were evaluated for t...

  2. Evaluation of fecal indicator and pathogenic bacteria originating from swine manure applied to agricultural lands using culture-based and quantitative real-time PCR methods.

    EPA Science Inventory

    Fecal bacteria, including those originating from concentrated animal feeding operations, are a leading contributor to water quality impairments in agricultural areas. Rapid and reliable methods are needed that can accurately characterize fecal pollution in agricultural settings....

  3. Evaluation of Fecal Indicator and Pathogenic Bacteria Originating from Swine Manure Applied to Agricultural Lands Using Culture-Based and Quantitative Real-Time PCR Methods

    EPA Science Inventory

    Fecal bacteria, including those originating from concentrated animal feeding operations, are a leading contributor to water quality impairments in agricultural areas. Rapid and reliable methods are needed that can accurately characterize fecal pollution in agricultural settings....

  4. Assessing Motivation To Read. Instructional Resource No. 14.

    ERIC Educational Resources Information Center

    Gambrell, Linda B.; And Others

    The Motivation to Read Profile (MRP) is a public-domain instrument designed to provide teachers with an efficient and reliable way to assess reading motivation qualitatively and quantitatively by evaluating students' self-concept as readers and the value they place on reading. The MRP consists of two basic instruments: the Reading Survey (a…

  5. Low level vapor verification of monomethyl hydrazine

    NASA Technical Reports Server (NTRS)

    Mehta, Narinder

    1990-01-01

    The vapor scrubbing system and the coulometric test procedure for the low level vapor verification of monomethyl hydrazine (MMH) are evaluated. Experimental data on precision, efficiency of the scrubbing liquid, instrument response, detection and reliable quantitation limits, stability of the vapor scrubbed solution, and interference were obtained to assess the applicability of the method for the low ppb level detection of the analyte vapor in air. The results indicated that the analyte vapor scrubbing system and the coulometric test procedure can be utilized for the quantitative detection of low ppb level vapor of MMH in air.

  6. Path selection system simulation and evaluation for a Martian roving vehicle

    NASA Technical Reports Server (NTRS)

    Boheim, S. L.; Prudon, W. C.

    1972-01-01

    The simulation and evaluation of proposed path selection systems for an autonomous Martian roving vehicle was developed. The package incorporates a number of realistic features, such as the simulation of random effects due to vehicle bounce and sensor-reading uncertainty, to increase the reliability of the results. Qualitative and quantitative evaluation criteria were established. The performance of three different path selection systems was evaluated to determine the effectiveness of the simulation package, and to form some preliminary conclusions regarding the tradeoffs involved in a path selection system design.

  7. Object-oriented fault tree evaluation program for quantitative analyses

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1988-01-01

    Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
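
    The object representation described above, events as objects carrying their own reliability data and intermediate results, can be sketched compactly. The example below uses Python standing in for the Flavors/LISP environment of the original work; the tree, probabilities, and assumption of independent basic events are illustrative, not the paper's model.

```python
# A minimal sketch of object-oriented fault tree evaluation.
from functools import reduce

class BasicEvent:
    def __init__(self, name, probability):
        self.name = name
        self.probability = probability      # reliability data stored with the event

    def evaluate(self):
        return self.probability

class Gate:
    def __init__(self, name, kind, children):
        self.name = name
        self.kind = kind                    # "AND" or "OR"
        self.children = children
        self.result = None                  # intermediate result cached on the object

    def evaluate(self):
        probs = [child.evaluate() for child in self.children]
        if self.kind == "AND":
            self.result = reduce(lambda a, b: a * b, probs)
        else:  # OR gate for independent events: 1 - prod(1 - p_i)
            self.result = 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)
        return self.result

# Top event: loss of function if (pump fails AND backup fails) OR controller fails.
tree = Gate("top", "OR", [
    Gate("pumps", "AND", [BasicEvent("pump", 1e-3), BasicEvent("backup_pump", 5e-3)]),
    BasicEvent("controller", 1e-4),
])
print(f"Top-event probability ~= {tree.evaluate():.2e}")
```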

  8. Performance Evaluation and Quantitative Accuracy of Multipinhole NanoSPECT/CT Scanner for Theranostic Lu-177 Imaging

    NASA Astrophysics Data System (ADS)

    Gupta, Arun; Kim, Kyeong Yun; Hwang, Donghwi; Lee, Min Sun; Lee, Dong Soo; Lee, Jae Sung

    2018-06-01

    SPECT plays an important role in peptide receptor targeted radionuclide therapy using theranostic radionuclides such as Lu-177 for the treatment of various cancers. However, SPECT studies must be quantitatively accurate because reliable assessment of tumor uptake and tumor-to-normal tissue ratios can only be performed using quantitatively accurate images. Hence, it is important to evaluate the performance parameters and quantitative accuracy of preclinical SPECT systems for therapeutic radioisotopes before conducting pre- and post-therapy SPECT imaging or dosimetry studies. In this study, we evaluated the system performance and quantitative accuracy of the NanoSPECT/CT scanner for Lu-177 imaging using point source and uniform phantom studies. We measured the recovery coefficient, uniformity, spatial resolution, system sensitivity and calibration factor for the mouse whole-body standard aperture. We also performed the experiments using Tc-99m to compare the results with those of Lu-177. We found a recovery coefficient of more than 70% for Lu-177 at the optimum noise level when nine iterations were used. The spatial resolution of Lu-177 with and without adding a uniform background was comparable to that of Tc-99m in the axial, radial and tangential directions. The system sensitivity measured for Lu-177 was almost three times less than that of Tc-99m.

  9. Spatially Regularized Machine Learning for Task and Resting-state fMRI

    PubMed Central

    Song, Xiaomu; Panych, Lawrence P.; Chen, Nan-kuei

    2015-01-01

    Background: Reliable mapping of brain function across sessions and/or subjects in task- and resting-state has been a critical challenge for quantitative fMRI studies, although it has been intensively addressed in the past decades. New Method: A spatially regularized support vector machine (SVM) technique was developed for reliable brain mapping in task- and resting-state. Unlike most existing SVM-based brain mapping techniques, which implement supervised classifications of specific brain functional states or disorders, the proposed method performs a semi-supervised classification for general brain function mapping in which the spatial correlation of fMRI data is integrated into the SVM learning. The method can adapt to intra- and inter-subject variations induced by fMRI nonstationarity, and identify a true boundary between active and inactive voxels, or between functionally connected and unconnected voxels, in a feature space. Results: The method was evaluated using synthetic and experimental data at the individual and group level. Multiple features were evaluated in terms of their contributions to the spatially regularized SVM learning. Reliable mapping results in both task- and resting-state were obtained from individual subjects and at the group level. Comparison with Existing Methods: A comparison study was performed with independent component analysis, general linear model, and correlation analysis methods. Experimental results indicate that the proposed method can provide a better or comparable mapping performance at the individual and group level. Conclusions: The proposed method can provide accurate and reliable mapping of brain function in task- and resting-state, and is applicable to a variety of quantitative fMRI studies. PMID:26470627
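
    A hedged sketch of the general idea (not the semi-supervised algorithm of this record): one simple way to inject spatial correlation into SVM-based voxel classification is to append neighborhood-averaged features before training. Array shapes, labels and parameters below are synthetic and purely illustrative.

```python
# Feature-level spatial regularization for an SVM voxel classifier (toy sketch).
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

rng = np.random.default_rng(0)
nx, ny, nt = 32, 32, 40                     # hypothetical 2D slice with 40 time points
data = rng.standard_normal((nx, ny, nt))    # stand-in for voxel-wise fMRI features

# Smooth every feature map so each voxel also "sees" its spatial neighborhood.
smoothed = np.stack([uniform_filter(data[..., t], size=3) for t in range(nt)], axis=-1)

# Per-voxel feature vector = original features + neighborhood averages.
features = np.concatenate([data, smoothed], axis=-1).reshape(-1, 2 * nt)
labels = (rng.random(nx * ny) > 0.5).astype(int)   # toy active/inactive labels

clf = SVC(kernel="rbf", C=1.0).fit(features, labels)
print("training accuracy on toy data:", round(clf.score(features, labels), 2))
```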

  10. Reference genes for measuring mRNA expression.

    PubMed

    Dundas, Jitesh; Ling, Maurice

    2012-12-01

    The aim of this review is to find answers to some of the questions surrounding reference genes and their reliability for quantitative experiments. Reference genes are assumed to be expressed at a constant level over a range of conditions such as temperature. These genes, such as GAPDH and beta-actin, are used extensively for gene expression studies using techniques like quantitative PCR. There have been several studies carried out on identifying reference genes. However, a lot of evidence raises issues about the general suitability of these genes. Recent studies have shown that different factors, including the environment and methods, play an important role in changing the expression levels of the reference genes. Thus, we conclude that there is no reference gene that can be deemed suitable for all experimental conditions. In addition, we believe that every experiment will require the scientific evaluation and selection of the best candidate gene for use as a reference gene to obtain reliable scientific results.

  11. Development and psychometric evaluation of a quantitative measure of "fat talk".

    PubMed

    MacDonald Clarke, Paige; Murnen, Sarah K; Smolak, Linda

    2010-01-01

    Based on her anthropological research, Nichter (2000) concluded that it is normative for many American girls to engage in body self-disparagement in the form of "fat talk." The purpose of the present two studies was to develop a quantitative measure of fat talk. A series of 17 scenarios were created in which "Naomi" is talking with a female friend(s) and there is an expression of fat talk. College women respondents rated the frequency with which they would behave in a similar way as the women in each scenario. A nine-item one-factor scale was determined through principal components analysis and its scores yielded evidence of internal consistency reliability, test-retest reliability over a five-week time period, construct validity, discriminant validity, and incremental validity in that it predicted unique variance in body shame and eating disorder symptoms above and beyond other measures of self-objectification. Copyright 2009 Elsevier Ltd. All rights reserved.
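
    As a hedged illustration of the internal consistency statistic reported here, the snippet below computes Cronbach's alpha for a nine-item scale; the respondent data are synthetic, not from the study.

```python
# Cronbach's alpha for a nine-item scale (synthetic data; illustration only).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))                    # shared latent tendency
scores = trait + 0.7 * rng.normal(size=(200, 9))     # nine correlated items
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```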

  12. Three-phase bone scintigraphy for diagnosis of Charcot neuropathic osteoarthropathy in the diabetic foot - does quantitative data improve diagnostic value?

    PubMed

    Fosbøl, M; Reving, S; Petersen, E H; Rossing, P; Lajer, M; Zerahn, B

    2017-01-01

    To investigate whether the inclusion of quantitative data on blood flow distribution, compared with visual qualitative evaluation, improves the reliability and diagnostic performance of 99mTc-hydroxymethylene diphosphonate three-phase bone scintigraphy (TPBS) in patients suspected of Charcot neuropathic osteoarthropathy (CNO) of the foot. A retrospective cohort study of TPBS performed on 148 patients with suspected acute CNO referred from a single specialized diabetes care centre. The quantitative blood flow distribution was calculated based on the method described by Deutsch et al. All scintigraphies were re-evaluated by independent, blinded observers twice, with and without quantitative data on blood flow distribution at ankle and focus level, respectively. The diagnostic validity of TPBS was determined by subsequent review of clinical data and radiological examinations. A total of 90 patients (61%) had a confirmed diagnosis of CNO. The sensitivity, specificity and accuracy of three-phase bone scintigraphy without/with quantitative data were 89%/88%, 58%/62% and 77%/78%, respectively. The intra-observer agreement improved significantly by adding quantitative data in the evaluation (kappa value 0.79/0.94). The interobserver agreement was not significantly improved. Adding quantitative data on blood flow distribution in the interpretation of TPBS improves intra-observer variation, whereas no difference in interobserver variation was observed. The sensitivity of TPBS in the diagnosis of CNO is high, but its specificity is limited. Diagnostic performance does not improve using quantitative data in the evaluation. This may be due to the reference intervals applied in the study or the absence of a proper gold standard diagnostic procedure for comparison. © 2015 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
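
    The intra-observer agreement in this record is summarized with kappa; as an illustration (synthetic readings, not the study data), the sketch below computes Cohen's kappa from two readings of the same scans.

```python
# Cohen's kappa for agreement between two readings (synthetic labels; sketch only).
import numpy as np

def cohen_kappa(r1, r2):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    labels = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                          # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in labels)   # chance agreement
    return (po - pe) / (1 - pe)

first_read  = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])   # 1 = CNO positive, 0 = negative
second_read = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1])
print(f"intra-observer kappa: {cohen_kappa(first_read, second_read):.2f}")
```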

  13. A novel multi-walled carbon nanotube-based antibody conjugate for quantitative and semi-quantitative lateral flow assays.

    PubMed

    Sun, Wenjuan; Hu, Xiaolong; Liu, Jia; Zhang, Yurong; Lu, Jianzhong; Zeng, Libo

    2017-10-01

    In this study, multi-walled carbon nanotubes (MWCNTs) were applied in lateral flow strips (LFS) for semi-quantitative and quantitative assays. Firstly, the solubility of MWCNTs was improved using various surfactants to enhance their biocompatibility for practical application. The dispersed MWCNTs were conjugated with the methamphetamine (MET) antibody in a non-covalent manner and then manufactured into the LFS for the quantitative detection of MET. The MWCNTs-based lateral flow assay (MWCNTs-LFA) exhibited an excellent linear relationship between the test line values and MET concentration over the range of 62.5 to 1500 ng/mL. The sensitivity of the LFS was evaluated by conjugating MWCNTs with the HCG antibody, and the MWCNT-conjugated method was 10 times more sensitive than the one using classical colloidal gold nanoparticles. Taken together, our data demonstrate that MWCNTs-LFA is a more sensitive and reliable assay for semi-quantitative and quantitative detection, which can be used in forensic analysis.

  14. Selection and Reporting of Statistical Methods to Assess Reliability of a Diagnostic Test: Conformity to Recommended Methods in a Peer-Reviewed Journal

    PubMed Central

    Park, Ji Eun; Han, Kyunghwa; Sung, Yu Sub; Chung, Mi Sun; Koo, Hyun Jung; Yoon, Hee Mang; Choi, Young Jun; Lee, Seung Soo; Kim, Kyung Won; Shin, Youngbin; An, Suah; Cho, Hyo-Min

    2017-01-01

    Objective: To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Materials and Methods: Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA) and the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Results: Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of the intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Conclusion: Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology are necessary. PMID:29089821
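
    For readers unfamiliar with the intraclass correlation coefficient discussed in this record, the sketch below implements one common variant, the two-way random effects, absolute agreement, single-measurement form ICC(2,1); the rating data are synthetic and this model choice is only one of those the record refers to.

```python
# ICC(2,1): two-way random effects, absolute agreement, single measurement (sketch).
import numpy as np

def icc_2_1(ratings):
    """ratings: (n_subjects, k_raters) array of continuous scores."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()     # between-subject
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()     # between-rater
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols   # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

ratings = np.array([[9, 8], [7, 7], [5, 6], [8, 8], [6, 5], [9, 9]])  # 6 subjects, 2 raters
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```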

  15. The Reliability and Validity of Discrete and Continuous Measures of Psychopathology: A Quantitative Review

    ERIC Educational Resources Information Center

    Markon, Kristian E.; Chmielewski, Michael; Miller, Christopher J.

    2011-01-01

    In 2 meta-analyses involving 58 studies and 59,575 participants, we quantitatively summarized the relative reliability and validity of continuous (i.e., dimensional) and discrete (i.e., categorical) measures of psychopathology. Overall, results suggest an expected 15% increase in reliability and 37% increase in validity through adoption of a…

  16. Selection of reliable reference genes for normalization of quantitative RT-PCR from different developmental stages and tissues in amphioxus

    PubMed Central

    Zhang, Qi-Lin; Zhu, Qian-Hua; Liao, Xin; Wang, Xiu-Qiang; Chen, Tao; Xu, Han-Ting; Wang, Juan; Yuan, Ming-Long; Chen, Jun-Yuan

    2016-01-01

    Amphioxus is the closest living proxy to the common ancestor of cephalochordates and vertebrates, and a key animal for new insights into the evolutionary origin of the vertebrate body plan, genome, tissues and immune system. Reliable analyses using quantitative real-time PCR (qRT-PCR) to answer these scientific questions are heavily dependent on reliable reference genes (RGs). In this study, we evaluated the stability of thirteen candidate RGs in qRT-PCR for different developmental stages and tissues of amphioxus using four independent algorithms (geNorm, NormFinder, BestKeeper and deltaCt) and one comparative algorithm (RefFinder). The results showed that the top two stable RGs were the following: (1) S20 and 18S in thirteen developmental stages, (2) EF1A and ACT in seven normal tissues, (3) S20 and L13 in both intestine and hepatic caecum challenged with lipopolysaccharide (LPS), and (4) S20 and EF1A in gill challenged with LPS. The expression profiles of two target genes (EYA and HHEX) in thirteen developmental stages were used to confirm the reliability of the chosen RGs. This study identified optimal RGs that can be used to accurately measure gene expression under these conditions, which will benefit evolutionary and functional genomics studies in amphioxus. PMID:27869224
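
    Of the four stability algorithms named in this record, the comparative deltaCt method is the simplest to illustrate: candidate genes are ranked by the mean standard deviation of their pairwise Ct differences across samples. The sketch below uses synthetic Ct values and is not the study's analysis; only the gene names are taken from the record.

```python
# Comparative deltaCt ranking of candidate reference genes (synthetic data; sketch only).
import numpy as np

rng = np.random.default_rng(2)
genes = ["S20", "EF1A", "ACT", "L13", "18S"]
# Rows = samples (e.g., developmental stages), columns = candidate reference genes.
ct = 20 + rng.normal(scale=[0.3, 0.4, 0.6, 0.5, 0.8], size=(12, 5))

stability = {}
for i, gene in enumerate(genes):
    # SD of the pairwise Ct differences against every other candidate.
    sds = [np.std(ct[:, i] - ct[:, j], ddof=1) for j in range(len(genes)) if j != i]
    stability[gene] = np.mean(sds)

for gene, s in sorted(stability.items(), key=lambda kv: kv[1]):
    print(f"{gene:5s} mean pairwise SD = {s:.2f}  (lower = more stable)")
```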

  17. Selection of reference genes for RT-qPCR analysis in the monarch butterfly, Danaus plexippus (L.), a migrating bio-indicator

    USDA-ARS?s Scientific Manuscript database

    Quantitative real-time PCR (qRT-PCR) is a reliable and reproducible technique for measuring and evaluating changes in gene expression. To facilitate gene expression studies and obtain more accurate qRT-PCR data, normalization relative to stable housekeeping genes is required. In this study, expres...

  18. An Examination of Rater Performance on a Local Oral English Proficiency Test: A Mixed-Methods Approach

    ERIC Educational Resources Information Center

    Yan, Xun

    2014-01-01

    This paper reports on a mixed-methods approach to evaluate rater performance on a local oral English proficiency test. Three types of reliability estimates were reported to examine rater performance from different perspectives. Quantitative results were also triangulated with qualitative rater comments to arrive at a more representative picture of…

  19. Further Evidence of Complex Motor Dysfunction in Drug Naive Children with Autism Using Automatic Motion Analysis of Gait

    ERIC Educational Resources Information Center

    Nobile, Maria; Perego, Paolo; Piccinini, Luigi; Mani, Elisa; Rossi, Agnese; Bellina, Monica; Molteni, Massimo

    2011-01-01

    In order to increase the knowledge of locomotor disturbances in children with autism, and of the mechanism underlying them, the objective of this exploratory study was to reliably and quantitatively evaluate linear gait parameters (spatio-temporal and kinematic parameters), upper body kinematic parameters, walk orientation and smoothness using an…

  20. Composing, Analyzing and Validating Software Models

    NASA Astrophysics Data System (ADS)

    Sheldon, Frederick T.

    1998-10-01

    This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Group). The principal work this summer has been to review and refine the agenda that was carried forward from last summer. Formal specifications provide good support for designing a functionally correct system; however, they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.

  1. Composing, Analyzing and Validating Software Models

    NASA Technical Reports Server (NTRS)

    Sheldon, Frederick T.

    1998-01-01

    This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Group). The principal work this summer has been to review and refine the agenda that was carried forward from last summer. Formal specifications provide good support for designing a functionally correct system; however, they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.

  2. Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Liu, Jun-fei; Jiao, Hai-xing; Shen, Yi; Liu, Shu-yuan

    To address the trustworthiness problem of industry software, an idea of constructing an industry software trustworthiness criterion oriented to business is proposed. Based on the triangle model of "trustworthy grade definition - trustworthy evidence model - trustworthy evaluating", the idea of business trustworthiness is embodied in different aspects of the trustworthy triangle model for a specific industry software system, the power production management system (PPMS). Business trustworthiness is at the center of the constructed industry trustworthy software criterion. By fusing international standards and industry rules, the constructed trustworthy criterion strengthens operability and reliability. A quantitative evaluation method makes the evaluation results intuitive and comparable.

  3. Reliability and validity of Edinburgh visual gait score as an evaluation tool for children with cerebral palsy.

    PubMed

    Del Pilar Duque Orozco, Maria; Abousamra, Oussama; Church, Chris; Lennon, Nancy; Henley, John; Rogers, Kenneth J; Sees, Julieanne P; Connor, Justin; Miller, Freeman

    2016-09-01

    Assessment of gait abnormalities in cerebral palsy (CP) is challenging, and access to instrumented gait analysis is not always feasible. Therefore, many observational gait analysis scales have been devised. This study aimed to evaluate the interobserver reliability, intraobserver reliability, and validity of the Edinburgh visual gait score (EVGS). Videos of 30 children with spastic CP were reviewed by 7 raters (10 children each in GMFCS levels I, II, and III, age 6-12 years). Three observers had a high level of experience in gait analysis (10+ years), two had a medium level (2-5 years) and two had no previous experience (orthopedic fellows). Interobserver reliability was evaluated using the percentage of complete agreement and kappa values. Criterion validity was evaluated by comparing EVGS scores with 3DGA data taken from the same video visit. Interobserver agreement was 60-90% and kappa values were 0.18-0.85 for the 17 items in the EVGS. Reliability was higher for distal segments (foot/ankle/knee 63-90%; trunk/pelvis/hip 60-76%), with greater experience (high 66-91%, medium 62-90%, no experience 41-87%), with more EVGS practice (1st 10 videos 52-88%, last 10 videos 64-97%) and when used with higher functioning children (GMFCS I 65-96%, II 58-90%, III 35-65%). Intraobserver agreement was 64-92%. Agreement between EVGS and 3DGA was 52-73%. We believe that having the EVGS as part of the standardized gait evaluation is helpful in optimizing visual scoring. The EVGS can be a supportive tool that adds quantitative data, rather than only qualitative assessment, to a video-only gait evaluation. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. New approaches for the analysis of confluent cell layers with quantitative phase digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Pohl, L.; Kaiser, M.; Ketelhut, S.; Pereira, S.; Goycoolea, F.; Kemper, Björn

    2016-03-01

    Digital holographic microscopy (DHM) enables high-resolution non-destructive inspection of technical surfaces and minimally invasive label-free live cell imaging. However, the analysis of confluent cell layers represents a challenge, as quantitative DHM phase images in this case do not provide sufficient information for image segmentation, determination of the cellular dry mass or calculation of the cell thickness. We present novel strategies for the analysis of confluent cell layers with quantitative DHM phase contrast utilizing a histogram-based evaluation procedure. The applicability of our approach is illustrated by quantification of drug-induced cell morphology changes, and it is shown that the method is capable of reliably quantifying global morphology changes of confluent cell layers.

  5. NASA Applications and Lessons Learned in Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Fuller, Raymond P.

    2011-01-01

    Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision making process. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses several reliability engineering applications used by NASA in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of these case studies discussed are reliability-based life limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbopump development, the impact of ET foam reliability on the Space Shuttle system risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given in this paper to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.

  6. "The perceptual bases of speaker identity" revisited

    NASA Astrophysics Data System (ADS)

    Voiers, William D.

    2003-10-01

    A series of experiments begun 40 years ago [W. D. Voiers, J. Acoust. Soc. Am. 36, 1065-1073 (1964)] was concerned with identifying the perceived voice traits (PVTs) on which human recognition of voices depends. It culminated with the development of a voice taxonomy based on 20 PVTs and a set of highly reliable rating scales for classifying voices with respect to those PVTs. The development of a perceptual voice taxonomy was motivated by the need for a practical method of evaluating speaker recognizability in voice communication systems. The Diagnostic Speaker Recognition Test (DSRT) evaluates the effects of systems on speaker recognizability as reflected in changes in the inter-listener reliability of voice ratings on the 20 PVTs. The DSRT thus provides a qualitative, as well as quantitative, evaluation of the effects of a system on speaker recognizability. A fringe benefit of this project is PVT rating data for a sample of 680 voices. [Work partially supported by USAFRL.]

  7. Midwifery education and technology enhanced learning: Evaluating online story telling in preregistration midwifery education.

    PubMed

    Scamell, Mandie; Hanley, Thomas

    2018-03-01

    A major issue regarding the implementation of blended learning for preregistration health programmes is the analysis of students' perceptions and attitudes towards their learning. It is the extent of the embedding of Technology Enhanced Learning (TEL) into the higher education curriculum that makes this analysis so vital. This paper reports on the quantitative results of a UK-based study that was set up to respond to the apparent disconnect between technology-enhanced education provision and reliable student evaluation of this mode of learning. Employing a mixed methods research design, the research described here was carried out to develop a reliable and valid evaluation tool to measure acceptability of and satisfaction with a blended learning approach, specifically designed for a preregistration midwifery module offered at level 4. Feasibility testing of 46 completed blended learning evaluation questionnaires - Student Midwife Evaluation of Online Learning Effectiveness (SMEOLE) - was carried out using descriptive statistics, reliability and internal consistency tests. Standard deviations and mean scores all followed the predicted pattern. Results from the reliability and internal consistency testing confirm the feasibility of SMEOLE as an effective tool for measuring student satisfaction with a blended learning approach to preregistration learning. The analysis presented in this paper suggests that we have been successful in our aim to produce an evaluation tool capable of assessing the quality of technology-enhanced, university-level learning in midwifery. This work can provide future benchmarking against which midwifery, and other health, blended learning curriculum planning could be structured and evaluated. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Space station software reliability analysis based on failures observed during testing at the multisystem integration facility

    NASA Technical Reports Server (NTRS)

    Tamayo, Tak Chai

    1987-01-01

    The quality of software is not only vital to the successful operation of the space station, it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the space station. Defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software has the characteristics of no wearout and costly redundancies, thus making traditional statistical analysis unsuitable for evaluating the reliability of software. A statistical model was developed to provide a representation of the number as well as the types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria to terminate testing based on reliability objectives and methods to estimate the expected number of fixes required are also presented.
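
    The abstract does not give the form of its statistical model, so the sketch below is only a generic stand-in: it fits a Goel-Okumoto style reliability-growth curve, m(t) = a(1 - e^(-bt)), to synthetic cumulative failure counts and estimates how many failures remain, which is the kind of quantity used to decide when to stop testing.

```python
# Generic software reliability growth fit (Goel-Okumoto NHPP; synthetic data, sketch only).
import numpy as np
from scipy.optimize import curve_fit

def mean_failures(t, a, b):
    """Expected cumulative failures observed by time t."""
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 13)                                            # test weeks
cum_failures = np.array([5, 9, 13, 16, 18, 20, 21, 22, 23, 23, 24, 24])

(a, b), _ = curve_fit(mean_failures, weeks, cum_failures, p0=(30.0, 0.1))
remaining = a - cum_failures[-1]
print(f"estimated total failures a = {a:.1f}, detection rate b = {b:.2f}")
print(f"expected failures remaining if testing continued: {remaining:.1f}")
```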

  9. Writing Across the Curriculum: Reliability Testing of a Standardized Rubric.

    PubMed

    Minnich, Margo; Kirkpatrick, Amanda J; Goodman, Joely T; Whittaker, Ali; Stanton Chapple, Helen; Schoening, Anne M; Khanna, Maya M

    2018-06-01

    Rubrics positively affect student academic performance; however, accuracy and consistency of the rubric and its use are imperative. The researchers in this study developed a standardized rubric for use across an undergraduate nursing curriculum, then evaluated the interrater reliability and general usability of the tool. Faculty raters graded papers using the standardized rubric, submitted their independent scoring for interrater reliability analyses, then participated in a focus group discussion regarding their experience of using the rubric. Quantitative analysis of the data showed a high interrater reliability (α = .998). Content analysis of the transcription revealed several positive themes: Consistency, Emphasis on Writing Ability, and Ability to Use the Rubric as a Teaching Tool. Areas for improvement included the use of value words and difficulty with point allocation. Investigators recommend effective faculty orientation for rubric use and future work in developing a rubric to assess reflective writing. [J Nurs Educ. 2018;57(6):366-370.]. Copyright 2018, SLACK Incorporated.

  10. Quality and rigor of the concept mapping methodology: a pooled study analysis.

    PubMed

    Rosas, Scott R; Kane, Mary

    2012-05-01

    The use of concept mapping in research and evaluation has expanded dramatically over the past 20 years. Researchers in academic, organizational, and community-based settings have applied concept mapping successfully without the benefit of systematic analyses across studies to identify the features of a methodologically sound study. Quantitative characteristics and estimates of quality and rigor that may guide future studies are lacking. To address this gap, we conducted a pooled analysis of 69 concept mapping studies to describe characteristics across study phases, generate specific indicators of validity and reliability, and examine the relationship between select study characteristics and quality indicators. Individual study characteristics and estimates were pooled and quantitatively summarized, describing the distribution, variation and parameters for each. In addition, variation in concept mapping data collection in relation to these characteristics and estimates was examined. Overall, results suggest concept mapping yields strong internal representational validity and very strong sorting and rating reliability estimates. Validity and reliability were consistently high despite variation in participation and task completion percentages across data collection modes. The implications of these findings as a practical reference for assessing the quality and rigor of future concept mapping studies are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. COTS-Based Fault Tolerance in Deep Space: Qualitative and Quantitative Analyses of a Bus Network Architecture

    NASA Technical Reports Server (NTRS)

    Tai, Ann T.; Chau, Savio N.; Alkalai, Leon

    2000-01-01

    Using COTS products, standards and intellectual properties (IPs) for all the system and component interfaces is a crucial step toward significant reduction of both system cost and development cost, as the COTS interfaces enable other COTS products and IPs to be readily accommodated by the target system architecture. With respect to long-term survivable systems for deep-space missions, the major challenge for us is, under stringent power and mass constraints, to achieve ultra-high reliability of a system comprising COTS products and standards that are not developed for mission-critical applications. The spirit of our solution is to exploit the pertinent standard features of a COTS product to circumvent its shortcomings, though these standard features may not have been originally designed for highly reliable systems. In this paper, we discuss our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. We first derive and qualitatively analyze a "stack-tree topology" that not only complies with IEEE 1394 but also enables the implementation of a fault-tolerant bus architecture without node redundancy. We then present a quantitative evaluation that demonstrates significant reliability improvement from the COTS-based fault tolerance.
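
    The record reports a quantitative reliability comparison; the arithmetic below is a generic series/parallel illustration of that kind of comparison (hypothetical failure rates, not the paper's bus model): a single chain of links versus two independent paths.

```python
# Generic series/parallel reliability comparison (hypothetical numbers; sketch only).
import math

def r_exp(failure_rate, hours):
    """Reliability of one element with a constant failure rate (exponential model)."""
    return math.exp(-failure_rate * hours)

def r_series(rels):
    out = 1.0
    for r in rels:
        out *= r
    return out

def r_parallel(rels):
    out = 1.0
    for r in rels:
        out *= 1.0 - r
    return 1.0 - out

mission_hours = 10 * 365 * 24                 # a ten-year deep-space mission
link = r_exp(1e-7, mission_hours)             # hypothetical per-link reliability

single_path = r_series([link] * 4)            # four links in series, no alternative path
dual_path = r_parallel([r_series([link] * 4)] * 2)   # two independent four-link paths
print(f"single path: {single_path:.5f}")
print(f"dual path  : {dual_path:.5f}")
```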

  12. The image evaluation of iterative motion correction reconstruction algorithm PROPELLER T2-weighted imaging compared with MultiVane T2-weighted imaging

    NASA Astrophysics Data System (ADS)

    Lee, Suk-Jun; Yu, Seung-Man

    2017-08-01

    The purpose of this study was to evaluate the usefulness and clinical applicability of MultiVaneXD, which applies an iterative motion-correction reconstruction algorithm to T2-weighted imaging, compared with MultiVane images taken with a 3T MRI. A total of 20 patients with suspected pathologies of the liver and pancreatic-biliary system based on clinical and laboratory findings underwent upper abdominal MRI, acquired using the MultiVane and MultiVaneXD techniques. Two reviewers analyzed the MultiVane and MultiVaneXD T2-weighted images qualitatively and quantitatively. Each reviewer evaluated vessel conspicuity by observing motion artifacts and the sharpness of the portal vein, hepatic vein, and upper abdominal organs. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated by one reviewer for quantitative analysis. The intraclass correlation coefficient was evaluated to measure inter-observer reliability. There were significant differences between MultiVane and MultiVaneXD in motion artifact evaluation. Furthermore, MultiVane was given a better score than MultiVaneXD in abdominal organ sharpness and vessel conspicuity, but the difference was insignificant. The reliability coefficient values were over 0.8 in every evaluation. MultiVaneXD (2.12) showed a higher value than did MultiVane (1.98), but the difference was insignificant (p = 0.135). MultiVaneXD is a motion correction method that is more advanced than MultiVane, and it produced an increased SNR, resulting in a greater ability to detect focal abdominal lesions.
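
    As a hedged illustration of the quantitative measures named here (the record does not specify its ROI definitions), the snippet below computes SNR and CNR from region-of-interest statistics using synthetic pixel values.

```python
# SNR and CNR from ROI statistics (synthetic pixel values; sketch only).
import numpy as np

rng = np.random.default_rng(3)
liver_roi  = rng.normal(300, 20, size=500)    # signal ROI
lesion_roi = rng.normal(220, 20, size=200)    # second tissue ROI
noise_roi  = rng.normal(0, 15, size=500)      # background ROI

noise_sd = noise_roi.std(ddof=1)
snr = liver_roi.mean() / noise_sd
cnr = abs(liver_roi.mean() - lesion_roi.mean()) / noise_sd
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```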

  13. Biomarkers and Surrogate Endpoints in Uveitis: The Impact of Quantitative Imaging.

    PubMed

    Denniston, Alastair K; Keane, Pearse A; Srivastava, Sunil K

    2017-05-01

    Uveitis is a major cause of sight loss across the world. The reliable assessment of intraocular inflammation in uveitis ('disease activity') is essential in order to score disease severity and response to treatment. In this review, we describe how 'quantitative imaging', the approach of using automated analysis and measurement algorithms across both standard and emerging imaging modalities, can develop objective instrument-based measures of disease activity. This is a narrative review based on searches of the current world literature using terms related to quantitative imaging techniques in uveitis, supplemented by clinical trial registry data, and expert knowledge of surrogate endpoints and outcome measures in ophthalmology. Current measures of disease activity are largely based on subjective clinical estimation, and are relatively insensitive, with poor discrimination and reliability. The development of quantitative imaging in uveitis is most established in the use of optical coherence tomographic (OCT) measurement of central macular thickness (CMT) to measure severity of macular edema (ME). The transformative effect of CMT in clinical assessment of patients with ME provides a paradigm for the development and impact of other forms of quantitative imaging. Quantitative imaging approaches are now being developed and validated for other key inflammatory parameters such as anterior chamber cells, vitreous haze, retinovascular leakage, and chorioretinal infiltrates. As new forms of quantitative imaging in uveitis are proposed, the uveitis community will need to evaluate these tools against the current subjective clinical estimates and reach a new consensus for how disease activity in uveitis should be measured. The development, validation, and adoption of sensitive and discriminatory measures of disease activity is an unmet need that has the potential to transform both drug development and routine clinical care for the patient with uveitis.

  14. Quantitative microspectral evaluation of the ratio of arginine-rich to lysine-rich histones in neurons and neuroglial cells.

    PubMed

    Pevzner, L Z; Raygorodskaya, T G; Agroskin, L S

    1978-09-01

    Staining of nervous tissue sections with ammoniacal silver according to Black et al. has been confirmed to be a reliable histochemical colour reaction for quantitative evaluation of arginine-rich and lysine-rich histones in cell structures on the basis of determinations of the position of the spectral curve maximum. Neurons of several brain nuclei which differed in the predominant neurotransmitter did not differ in the ratio of arginine-rich to lysine-rich histones, while some differences in this ratio were found in the glial satellite cells adjacent to the corresponding neurons of these nuclei. Moderate circadian fluctuations were observed in the arginine-rich to lysine-rich histone ratio, these fluctuations being rather similar in the neurons studied and in the cells of perineuronal neuroglia.

  15. Quantitation of permethylated N-glycans through multiple-reaction monitoring (MRM) LC-MS/MS.

    PubMed

    Zhou, Shiyue; Hu, Yunli; DeSantos-Garcia, Janie L; Mechref, Yehia

    2015-04-01

    The important biological roles of glycans and their implications in disease development and progression have created a demand for the development of sensitive quantitative glycomics methods. Quantitation of glycans existing at low abundance is still analytically challenging. In this study, an N-linked glycan quantitation method using multiple-reaction monitoring (MRM) on a triple quadrupole instrument was developed. The optimum normalized collision energy (CE) for N-glycans that are both sialylated and fucosylated was determined to be 30%, whereas it was found to be 35% for N-glycans that are either fucosylated or sialylated. The optimum CE for mannose and complex type N-glycans was determined to be 35%. Additionally, the use of three transitions was shown to facilitate reliable quantitation. A total of 88 N-glycan compositions in human blood serum were quantified using this MRM approach. Reliable detection and quantitation of these glycans was achieved when the equivalent of 0.005 μL of blood serum was analyzed. Accordingly, N-glycans can be reliably quantified in pooled human blood serum down to the hundredth-of-a-μL level, spanning a dynamic concentration range of three orders of magnitude. MRM was also effectively utilized to quantitatively compare the expression of N-glycans derived from brain-targeting breast carcinoma cells (MDA-MB-231BR) and metastatic breast cancer cells (MDA-MB-231). Thus, the described MRM method for permethylated N-glycans enables rapid and reliable identification and quantitation of glycans derived from glycoproteins purified or present in complex biological samples.
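
    A hedged sketch of the quantitation step such an MRM workflow typically relies on (concentrations and peak areas below are invented, not the study's values): summed transition areas from spiked standards are fitted in log space to cover a wide dynamic range, and an unknown is back-calculated from the curve.

```python
# Calibration-curve quantitation from MRM peak areas (synthetic values; sketch only).
import numpy as np

conc  = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])               # standard concentrations
areas = np.array([1.1e3, 3.2e3, 1.0e4, 3.1e4, 9.8e4, 3.0e5, 1.02e6])   # summed transition areas

# Fit in log space to span roughly three orders of magnitude.
slope, intercept = np.polyfit(np.log10(conc), np.log10(areas), 1)

unknown_area = 4.5e4
unknown_conc = 10 ** ((np.log10(unknown_area) - intercept) / slope)
print(f"slope = {slope:.2f} (a value near 1 indicates a linear response)")
print(f"back-calculated concentration: {unknown_conc:.2f} units")
```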

  16. Quantitation of Permethylated N-Glycans through Multiple-Reaction Monitoring (MRM) LC-MS/MS

    NASA Astrophysics Data System (ADS)

    Zhou, Shiyue; Hu, Yunli; DeSantos-Garcia, Janie L.; Mechref, Yehia

    2015-04-01

    The important biological roles of glycans and their implications in disease development and progression have created a demand for the development of sensitive quantitative glycomics methods. Quantitation of glycans existing at low abundance is still analytically challenging. In this study, an N-linked glycan quantitation method using multiple-reaction monitoring (MRM) on a triple quadrupole instrument was developed. The optimum normalized collision energy (CE) for N-glycans that are both sialylated and fucosylated was determined to be 30%, whereas it was found to be 35% for N-glycans that are either fucosylated or sialylated. The optimum CE for mannose and complex type N-glycans was determined to be 35%. Additionally, the use of three transitions was shown to facilitate reliable quantitation. A total of 88 N-glycan compositions in human blood serum were quantified using this MRM approach. Reliable detection and quantitation of these glycans was achieved when the equivalent of 0.005 μL of blood serum was analyzed. Accordingly, N-glycans can be reliably quantified in pooled human blood serum down to the hundredth-of-a-μL level, spanning a dynamic concentration range of three orders of magnitude. MRM was also effectively utilized to quantitatively compare the expression of N-glycans derived from brain-targeting breast carcinoma cells (MDA-MB-231BR) and metastatic breast cancer cells (MDA-MB-231). Thus, the described MRM method for permethylated N-glycans enables rapid and reliable identification and quantitation of glycans derived from glycoproteins purified or present in complex biological samples.

  17. Assessment of mesh simplification algorithm quality

    NASA Astrophysics Data System (ADS)

    Roy, Michael; Nicolier, Frederic; Foufou, S.; Truchetet, Frederic; Koschan, Andreas; Abidi, Mongi A.

    2002-03-01

    Traditionally, medical geneticists have employed visual inspection (anthroposcopy) to clinically evaluate dysmorphology. In the last 20 years, there has been an increasing trend towards quantitative assessment to render diagnosis of anomalies more objective and reliable. These methods have focused on direct anthropometry, using a combination of classical physical anthropology tools and new instruments tailor-made to describe craniofacial morphometry. These methods are painstaking and require that the patient remain still for extended periods of time. Most recently, semiautomated techniques (e.g., structured light scanning) have been developed to capture the geometry of the face in a matter of seconds. In this paper, we establish that direct anthropometry and structured light scanning yield reliable measurements, with remarkably high levels of inter-rater and intra-rater reliability, as well as validity (contrasting the two methods).

  18. Reliable gene expression analysis by reverse transcription-quantitative PCR: reporting and minimizing the uncertainty in data accuracy.

    PubMed

    Remans, Tony; Keunen, Els; Bex, Geert Jan; Smeets, Karen; Vangronsveld, Jaco; Cuypers, Ann

    2014-10-01

    Reverse transcription-quantitative PCR (RT-qPCR) has been widely adopted to measure differences in mRNA levels; however, biological and technical variation strongly affects the accuracy of the reported differences. RT-qPCR specialists have warned that, unless researchers minimize this variability, they may report inaccurate differences and draw incorrect biological conclusions. The Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines describe procedures for conducting and reporting RT-qPCR experiments. The MIQE guidelines enable others to judge the reliability of reported results; however, a recent literature survey found low adherence to these guidelines. Additionally, even experiments that use appropriate procedures remain subject to individual variation that statistical methods cannot correct. For example, since ideal reference genes do not exist, the widely used method of normalizing RT-qPCR data to reference genes generates background noise that affects the accuracy of measured changes in mRNA levels. However, current RT-qPCR data reporting styles ignore this source of variation. In this commentary, we direct researchers to appropriate procedures, outline a method to present the remaining uncertainty in data accuracy, and propose an intuitive way to select reference genes to minimize uncertainty. Reporting the uncertainty in data accuracy also serves for quality assessment, enabling researchers and peer reviewers to confidently evaluate the reliability of gene expression data. © 2014 American Society of Plant Biologists. All rights reserved.
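
    As a simplified illustration of the normalization step this commentary discusses (not its exact procedure), the sketch below computes a 2^-ΔΔCt fold change against the mean of two reference genes and attaches a rough uncertainty band from the spread between those reference genes. All Ct values are hypothetical.

```python
# Simplified 2^-ΔΔCt normalization with a rough reference-gene uncertainty band (sketch).
import numpy as np

# Hypothetical Ct values: one target gene and two reference genes per condition.
ct_target = {"control": 24.0, "treated": 22.5}
ct_refs   = {"control": np.array([18.0, 19.2]), "treated": np.array([18.4, 19.9])}

def delta_ct(condition):
    refs = ct_refs[condition]
    return ct_target[condition] - refs.mean(), refs.std(ddof=1)

d_ctrl, sd_ctrl = delta_ct("control")
d_trt, sd_trt = delta_ct("treated")

ddct = d_trt - d_ctrl
fold = 2.0 ** (-ddct)
sd_ddct = np.sqrt(sd_ctrl**2 + sd_trt**2)      # crude propagation of reference-gene spread
low, high = 2.0 ** (-(ddct + sd_ddct)), 2.0 ** (-(ddct - sd_ddct))
print(f"fold change = {fold:.2f} (approximate range {low:.2f}-{high:.2f})")
```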

  19. [Quantitative research on operation behavior of acupuncture manipulation].

    PubMed

    Li, Jing; Grierson, Lawrence; Wu, Mary X; Breuer, Ronny; Carnahan, Heather

    2014-03-01

    To explore a method for the quantitative evaluation of the operation behavior of acupuncture manipulation and to further analyze the behavior features of professional acupuncture manipulation. According to acupuncture basic manipulations, the Scales for Operation Behavior of Acupuncture Basic Manipulation was developed and the Delphi method was adopted to test its validity. Two independent estimators used this scale to assess the operation behavior of acupuncture manipulation among 12 acupuncturists and 12 acupuncture novices and to calculate inter-rater reliability; the differences in the total score of operation behavior between the two groups, as well as the single-step scores, including sterilization, needle insertion, needle manipulation and needle withdrawal, were also compared. The validity of this scale was satisfactory. The inter-rater reliability was 0.768. The total score of operation behavior in the acupuncturist group was significantly higher than that in the acupuncture-novice group (13.80 +/- 1.05 vs 11.03 +/- 2.14, P < 0.01). The scores of needle insertion and needle manipulation in the acupuncturist group were significantly higher than those in the acupuncture-novice group (4.28 +/- 0.91 vs 2.54 +/- 1.51, P < 0.01; 2.56 +/- 0.65 vs 1.88 +/- 0.88, P < 0.05); however, the scores of sterilization and needle withdrawal in the acupuncturist group were not different from those in the acupuncture-novice group. This scale is suitable for the quantitative evaluation of the operation behavior of acupuncture manipulation. The behavior features of professional acupuncture manipulation are mainly presented in needle insertion and needle manipulation, which involve greater difficulty and require high coordination and accuracy.

  20. A multi-center study benchmarks software tools for label-free proteome quantification

    PubMed Central

    Gillet, Ludovic C; Bernhardt, Oliver M.; MacLean, Brendan; Röst, Hannes L.; Tate, Stephen A.; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I.; Aebersold, Ruedi; Tenzer, Stefan

    2016-01-01

    The consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from SWATH-MS (sequential window acquisition of all theoretical fragment ion spectra), a method that uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test datasets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation windows setups. For consistent evaluation we developed LFQbench, an R-package to calculate metrics of precision and accuracy in label-free quantitative MS, and report the identification performance, robustness and specificity of each software tool. Our reference datasets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics. PMID:27701404

  1. A multicenter study benchmarks software tools for label-free proteome quantification.

    PubMed

    Navarro, Pedro; Kuharev, Jörg; Gillet, Ludovic C; Bernhardt, Oliver M; MacLean, Brendan; Röst, Hannes L; Tate, Stephen A; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I; Aebersold, Ruedi; Tenzer, Stefan

    2016-11-01

    Consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH 2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from sequential window acquisition of all theoretical fragment-ion spectra (SWATH)-MS, which uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test data sets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation-window setups. For consistent evaluation, we developed LFQbench, an R package, to calculate metrics of precision and accuracy in label-free quantitative MS and report the identification performance, robustness and specificity of each software tool. Our reference data sets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics.

  2. Weighted Fuzzy Risk Priority Number Evaluation of Turbine and Compressor Blades Considering Failure Mode Correlations

    NASA Astrophysics Data System (ADS)

    Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-06-01

    Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools to evaluate the reliability of systems. Although the single failure mode issue can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain the fuzzy weighted geometric mean (FWGM) RPN for a single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority number (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definition of probability weight and FRPN, resulting in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
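
    A simplified sketch of the core quantity in this kind of method: a weighted geometric mean RPN computed from triangular fuzzy ratings of severity, occurrence and detection, applied vertex-wise and defuzzified by the centroid. This approximation is for illustration only and omits the paper's MCS combination and Copula correlation modeling; all ratings and weights are invented.

```python
# Fuzzy weighted geometric mean RPN for one failure mode (vertex-wise approximation; sketch).
import numpy as np

severity   = np.array([6.0, 7.0, 8.0])    # triangular fuzzy ratings (low, mode, high)
occurrence = np.array([3.0, 4.0, 5.0])
detection  = np.array([4.0, 5.0, 7.0])
weights    = np.array([0.4, 0.3, 0.3])    # relative importance of S, O, D (sums to 1)

# Weighted geometric mean applied to each vertex of the triangular numbers.
fuzzy_rpn = (severity ** weights[0]) * (occurrence ** weights[1]) * (detection ** weights[2])
crisp_rpn = fuzzy_rpn.mean()              # centroid of a triangular fuzzy number
print(f"fuzzy RPN (l, m, u) = {np.round(fuzzy_rpn, 2)}, crisp RPN = {crisp_rpn:.2f}")
```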

  3. Subject-level reliability analysis of fast fMRI with application to epilepsy.

    PubMed

    Hao, Yongfu; Khoo, Hui Ming; von Ellenrieder, Nicolas; Gotman, Jean

    2017-07-01

    Recent studies have applied the new magnetic resonance encephalography (MREG) sequence to the study of interictal epileptic discharges (IEDs) in the electroencephalogram (EEG) of epileptic patients. However, there are no criteria for quantitatively evaluating different processing methods in order to properly use the new sequence. We evaluated different processing steps of this new sequence under the common generalized linear model (GLM) framework by assessing the reliability of the results. A bootstrap sampling technique was first used to generate multiple replicated data sets; a GLM with different processing steps was then applied to obtain activation maps, and the reliability of these maps was assessed. We applied our analysis to an event-related GLM related to IEDs. A higher reliability was achieved by using a GLM with a head-motion confound regressor with 24 components rather than the usual 6, with an autoregressive model of order 5, and with a canonical hemodynamic response function (HRF) rather than variable-latency or patient-specific HRFs. Comparison of activation with the IED field also favored the canonical HRF, consistent with the reliability analysis. The reliability analysis helps to optimize the processing methods for this fast fMRI sequence, in a context in which we do not know the ground truth of activation areas. Magn Reson Med 78:370-382, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
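
    The reliability assessment described here can be illustrated, in heavily simplified form, by measuring how consistently replicated analyses reproduce the same supra-threshold map; the toy sketch below compares bootstrap-style replicate maps with the Dice overlap. It is not the MREG/GLM pipeline of the study, and all data are synthetic.

```python
# Reliability of thresholded activation maps via pairwise Dice overlap (toy data; sketch only).
import numpy as np

n_vox, n_rep = 2000, 50
true_effect = np.zeros(n_vox)
true_effect[:200] = 1.0                        # 10% genuinely "active" voxels

def replicate_map(seed):
    r = np.random.default_rng(seed)
    tstat = true_effect * 4 + r.standard_normal(n_vox)   # stand-in for GLM t-values
    return tstat > 3.0                                    # thresholded activation map

maps = [replicate_map(s) for s in range(n_rep)]

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

pairs = [dice(maps[i], maps[j]) for i in range(n_rep) for j in range(i + 1, n_rep)]
print(f"mean pairwise Dice overlap: {np.mean(pairs):.2f}")
```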

  4. [Development of an evaluation instrument for service quality in nursing homes].

    PubMed

    Lee, Jia; Ji, Eun Sun

    2011-08-01

    The purposes of this study were to identify the factors influencing service quality in nursing homes, and to develop an evaluation instrument for service quality. A three-phase process was employed for the study. 1) The important factors to evaluate the service quality in nursing homes were identified through a literature review, panel discussion and focus group interview, 2) the evaluation instrument was developed, and 3) validity and reliability of the study instrument were tested by factor analysis, Pearson correlation coefficient, Cronbach's α and Cohen's Kappa. Factor analysis showed that the factors influencing service quality in nursing homes were healthcare, diet/assistance, therapy, environment and staff. To improve objectivity of the instrument, quantitative as well as qualitative evaluation approaches were adopted. The study instrument was developed with 30 items and showed acceptable construct validity. The criterion-related validity was a Pearson correlation coefficient of .85 in 151 care facilities. The internal consistency was Cronbach's α=.95. The instrument has acceptable validity and a high degree of reliability. Staff in nursing homes can continuously improve and manage their services using the results of the evaluation instrument.

  5. 3D-quantitative structure-activity relationship study for the design of novel enterovirus A71 3C protease inhibitors.

    PubMed

    Nie, Quandeng; Xu, Xiaoyi; Zhang, Qi; Ma, Yuying; Yin, Zheng; Shang, Luqing

    2018-06-07

    A three-dimensional quantitative structure-activity relationship model of enterovirus A71 3C protease inhibitors was constructed in this study. The protein-ligand interaction fingerprint was analyzed to generate a pharmacophore model. A predictive and reliable three-dimensional quantitative structure-activity relationship model was built based on the Flexible Alignment of AutoGPA. Moreover, three novel compounds (I-III) were designed and evaluated for their biochemical activity against the 3C protease and for anti-enterovirus A71 activity in vitro. III exhibited excellent inhibitory activity (IC50 = 0.031 ± 0.005 μM, EC50 = 0.036 ± 0.007 μM). Thus, this study provides a useful quantitative structure-activity relationship model to develop potent inhibitors of the enterovirus A71 3C protease. This article is protected by copyright. All rights reserved.

  6. A systematic review of health economic evaluation in adjuvant breast radiotherapy: Quality counted by numbers.

    PubMed

    Monten, Chris; Veldeman, Liv; Verhaeghe, Nick; Lievens, Yolande

    2017-11-01

    Evolving practice in adjuvant breast radiotherapy inevitably impacts healthcare budgets. This is reflected in a rise of health economic evaluations (HEE) in this domain. The available HEE literature was analysed qualitatively and quantitatively, using available instruments. HEEs published between 1/1/2000 and 31/10/2016 were retrieved through a systematic search in Medline, Cochrane and Embase. A quality assessment using CHEERS (Consolidated Health Economic Evaluation Reporting Standards) was translated into a quantitative score and compared with Tufts Medical Centre CEA registry and Quality of Health Economic Studies (QHES) results. Twenty cost-effectiveness analyses (CEA) and thirteen cost comparisons (CC) were analysed. In the qualitative evaluation, valuation or justification of data sources, population heterogeneity and discussion of generalizability, in addition to declaration of funding, were often absent or incomplete. After quantification, the average CHEERS scores were 74% (CI 66.9-81.1%) and 75.6% (CI 70.7-80.5%) for CEAs and CCs, respectively. CEA scores did not differ significantly from Tufts and QHES scores. Quantitative CHEERS evaluation is feasible and yields comparable results to validated instruments. HEE in adjuvant breast radiotherapy is of acceptable quality; however, further efforts are needed to improve comprehensive reporting of all data, which is indispensable for assessing the relevance, reliability and generalizability of results. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. In vivo estimation of target registration errors during augmented reality laparoscopic surgery.

    PubMed

    Thompson, Stephen; Schneider, Crispin; Bosi, Michele; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2018-06-01

    Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.

  8. Task-oriented evaluation of electronic medical records systems: development and validation of a questionnaire for physicians

    PubMed Central

    2004-01-01

    Background Evaluation is a challenging but necessary part of the development cycle of clinical information systems like the electronic medical records (EMR) system. It is believed that such evaluations should include multiple perspectives, be comparative and employ both qualitative and quantitative methods. Self-administered questionnaires are frequently used as a quantitative evaluation method in medical informatics, but very few validated questionnaires address clinical use of EMR systems. Methods We have developed a task-oriented questionnaire for evaluating EMR systems from the clinician's perspective. The key feature of the questionnaire is a list of 24 general clinical tasks. It is applicable to physicians of most specialties and covers essential parts of their information-oriented work. The task list appears in two separate sections, about EMR use and task performance using the EMR, respectively. By combining these sections, the evaluator may estimate the potential impact of the EMR system on health care delivery. The results may also be compared across time, site or vendor. This paper describes the development, performance and validation of the questionnaire. Its performance is shown in two demonstration studies (n = 219 and 80). Its content is validated in an interview study (n = 10), and its reliability is investigated in a test-retest study (n = 37) and a scaling study (n = 31). Results In the interviews, the physicians found the general clinical tasks in the questionnaire relevant and comprehensible. The tasks were interpreted concordant to their definitions. However, the physicians found questions about tasks not explicitly or only partially supported by the EMR systems difficult to answer. The two demonstration studies provided unambiguous results and low percentages of missing responses. In addition, criterion validity was demonstrated for a majority of task-oriented questions. Their test-retest reliability was generally high, and the non-standard scale was found symmetric and ordinal. Conclusion This questionnaire is relevant for clinical work and EMR systems, provides reliable and interpretable results, and may be used as part of any evaluation effort involving the clinician's perspective of an EMR system. PMID:15018620

  9. Task-oriented evaluation of electronic medical records systems: development and validation of a questionnaire for physicians.

    PubMed

    Laerum, Hallvard; Faxvaag, Arild

    2004-02-09

    Evaluation is a challenging but necessary part of the development cycle of clinical information systems like the electronic medical records (EMR) system. It is believed that such evaluations should include multiple perspectives, be comparative and employ both qualitative and quantitative methods. Self-administered questionnaires are frequently used as a quantitative evaluation method in medical informatics, but very few validated questionnaires address clinical use of EMR systems. We have developed a task-oriented questionnaire for evaluating EMR systems from the clinician's perspective. The key feature of the questionnaire is a list of 24 general clinical tasks. It is applicable to physicians of most specialties and covers essential parts of their information-oriented work. The task list appears in two separate sections, about EMR use and task performance using the EMR, respectively. By combining these sections, the evaluator may estimate the potential impact of the EMR system on health care delivery. The results may also be compared across time, site or vendor. This paper describes the development, performance and validation of the questionnaire. Its performance is shown in two demonstration studies (n = 219 and 80). Its content is validated in an interview study (n = 10), and its reliability is investigated in a test-retest study (n = 37) and a scaling study (n = 31). In the interviews, the physicians found the general clinical tasks in the questionnaire relevant and comprehensible. The tasks were interpreted concordant to their definitions. However, the physicians found questions about tasks not explicitly or only partially supported by the EMR systems difficult to answer. The two demonstration studies provided unambiguous results and low percentages of missing responses. In addition, criterion validity was demonstrated for a majority of task-oriented questions. Their test-retest reliability was generally high, and the non-standard scale was found symmetric and ordinal. This questionnaire is relevant for clinical work and EMR systems, provides reliable and interpretable results, and may be used as part of any evaluation effort involving the clinician's perspective of an EMR system.

  10. Utility of Gene Expression and Ex vivo Steroid Production in a 96 h Assay for Predicting Impacts of Endocrine Active Chemicals on Fish Reproduction.

    EPA Science Inventory

    Development of efficient test methods that can generate reliable data to inform risk assessment is an on-going challenge in the field of ecotoxicology. In the present study we evaluated whether a 96 h in vivo assay focused on a small number of quantitative real-time polymerase ch...

  11. Graph Theoretical Analysis of Functional Brain Networks: Test-Retest Evaluation on Short- and Long-Term Resting-State Functional MRI Data

    PubMed Central

    Wang, Jin-Hui; Zuo, Xi-Nian; Gohel, Suril; Milham, Michael P.; Biswal, Bharat B.; He, Yong

    2011-01-01

    Graph-based computational network analysis has proven a powerful tool to quantitatively characterize functional architectures of the brain. However, the test-retest (TRT) reliability of graph metrics of functional networks has not been systematically examined. Here, we investigated TRT reliability of topological metrics of functional brain networks derived from resting-state functional magnetic resonance imaging data. Specifically, we evaluated both short-term (<1 hour apart) and long-term (>5 months apart) TRT reliability for 12 global and 6 local nodal network metrics. We found that reliability of global network metrics was overall low, threshold-sensitive and dependent on several factors of scanning time interval (TI, long-term>short-term), network membership (NM, networks excluding negative correlations>networks including negative correlations) and network type (NT, binarized networks>weighted networks). The dependence was modulated by another factor of node definition (ND) strategy. The local nodal reliability exhibited large variability across nodal metrics and a spatially heterogeneous distribution. Nodal degree was the most reliable metric and varied the least across the factors above. Hub regions in association and limbic/paralimbic cortices showed moderate TRT reliability. Importantly, nodal reliability was robust to the above-mentioned four factors. Simulation analysis revealed that global network metrics were extremely sensitive (though to varying degrees) to noise in functional connectivity and that weighted networks generated numerically more reliable results compared with binarized networks. Nodal network metrics showed high resistance to noise in functional connectivity, and no NT-related differences were found in this resistance. These findings have important implications for how to choose reliable analytical schemes and network metrics of interest. PMID:21818285
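
    For readers unfamiliar with how test-retest reliability of a network metric is typically quantified, the following is a minimal numpy sketch of a two-way mixed-effects, single-measurement intraclass correlation coefficient, ICC(3,1), applied to invented nodal-degree values. The abstract does not state which reliability coefficient the authors used, so this is only a generic example.

```python
import numpy as np

def icc_3_1(X):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    X has shape (n_subjects, n_sessions), e.g. one graph metric per scan."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()   # between-subject SS
    ss_sess = n * ((X.mean(axis=0) - grand) ** 2).sum()   # between-session SS
    ss_tot = ((X - grand) ** 2).sum()
    ss_err = ss_tot - ss_subj - ss_sess                   # residual SS
    ms_subj = ss_subj / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

# Invented nodal-degree values for 5 subjects scanned twice.
degree = [[12.1, 12.9], [18.4, 17.6], [9.7, 10.2], [15.0, 14.1], [21.3, 20.5]]
print(f"ICC(3,1) = {icc_3_1(degree):.2f}")
```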

  12. The use of a tracking test battery in the quantitative evaluation of neurological function

    NASA Technical Reports Server (NTRS)

    Repa, B. S.

    1973-01-01

    A number of tracking tasks that have proven useful to control engineers and psychologists measuring skilled performance have been evaluated for clinical use. Normal subjects as well as patients with previous diagnoses of Parkinson's disease, multiple sclerosis, and cerebral palsy were used in the evaluation. The tests that were studied included step tracking, random tracking, and critical tracking. The results of the present experiments encourage the continued use of tracking tasks as assessment procedures in a clinical environment. They have proven to be reliable, valid, and sensitive measures of neurological function.

  13. Assessment of and standardization for quantitative nondestructive test

    NASA Technical Reports Server (NTRS)

    Neuschaefer, R. W.; Beal, J. B.

    1972-01-01

    Present capabilities and limitations of nondestructive testing (NDT) as applied to aerospace structures during design, development, production, and operational phases are assessed. The assessment will help determine what useful structural quantitative and qualitative data may be provided from raw materials to vehicle refurbishment. It considers metal alloy systems and bonded composites presently applied in active NASA programs or strong contenders for future use. Quantitative and qualitative data have been summarized from recent literature and in-house information, and are presented along with a description of those structures or standards where the information was obtained. Examples, in tabular form, of NDT technique capabilities and limitations have been provided. NDT techniques discussed and assessed were radiography, ultrasonics, penetrants, thermal, acoustic, and electromagnetic. Quantitative data are sparse; therefore, obtaining statistically reliable flaw detection data must be strongly emphasized. The new requirements for reusable space vehicles have resulted in highly efficient design concepts operating in severe environments. This increases the need for quantitative NDT evaluation of selected structural components and the end-item structure, and during refurbishment operations.

  14. 40 CFR 799.6756 - TSCA partition coefficient (n-octanol/water), generator column method.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... method, or any other reliable quantitative procedure must be used for those compounds that do not absorb... any other reliable quantitative method, aqueous solutions from the generator column enter a collecting... Solubilities and Octanol-Water Partition Coefficients of Hydrophobic Substances,” Journal of Research of the...

  15. 40 CFR 799.6756 - TSCA partition coefficient (n-octanol/water), generator column method.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... method, or any other reliable quantitative procedure must be used for those compounds that do not absorb... any other reliable quantitative method, aqueous solutions from the generator column enter a collecting... Solubilities and Octanol-Water Partition Coefficients of Hydrophobic Substances,” Journal of Research of the...

  16. Two-dimensional digital photography for child body posture evaluation: standardized technique, reliable parameters and normative data for age 7-10 years.

    PubMed

    Stolinski, L; Kozinoga, M; Czaprowski, D; Tyrakowski, M; Cerny, P; Suzuki, N; Kotwicki, T

    2017-01-01

    Digital photogrammetry provides measurements of body angles or distances which allow for quantitative posture assessment with or without the use of external markers. It is becoming an increasingly popular tool for the assessment of the musculoskeletal system. The aim of this paper is to present a structured method for the analysis of posture and its changes using a standardized digital photography technique. The purpose of the study was twofold. The first part comprised 91 primary-school children (44 girls and 47 boys) aged 7-10 years (8.2 ± 1.0); its aim was to develop the photographic method, choose the quantitative parameters, and determine the intraobserver reliability (repeatability) and interobserver reliability (reproducibility) of measurements in the sagittal plane using digital photography, as well as to compare Rippstein plurimeter and digital photography measurements. The second part involved 7782 children (3804 girls, 3978 boys) aged 7-10 years (8.4 ± 0.5), who underwent digital photography postural screening. The methods consisted of measuring and calculating selected parameters, establishing the normal ranges of photographic parameters, presenting percentile charts, and noting common pitfalls and possible sources of error in digital photography. A standardized procedure for the photographic evaluation of child body posture was presented. The photographic measurements revealed very good intra- and inter-rater reliability for the five sagittal parameters and good agreement with Rippstein plurimeter measurements. The parameters displayed insignificant variability over time. Normative data were calculated based on photographic assessment, while the percentile charts were provided to serve as reference values. The technical errors observed during photogrammetry are carefully discussed in this article. Technical developments allow for the regular use of digital photogrammetry in body posture assessment. Specific child positioning (described above) enables us to avoid incidentally modified posture. Image registration is simple, quick, harmless, and cost-effective. The semi-automatic image analysis, together with the normal values and percentile charts, makes the technique reliable for documenting a child's posture and monitoring the effects of corrective therapy.
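
    Photographic posture parameters of this kind are typically angles computed from digitized landmark coordinates. The sketch below shows one generic example, the inclination of a line joining two body landmarks relative to the vertical image axis, with hypothetical pixel coordinates; the specific parameters and landmarks used in the paper are not reproduced here.

```python
import math

def inclination_deg(upper_xy, lower_xy):
    """Angle (degrees) between the line joining two body landmarks on a
    standardized sagittal photograph and the vertical axis of the image."""
    dx = upper_xy[0] - lower_xy[0]
    dy = upper_xy[1] - lower_xy[1]
    return math.degrees(math.atan2(abs(dx), abs(dy)))

# Hypothetical pixel coordinates of two landmarks (e.g. C7 and S1 markers)
# digitized on a calibrated, vertically aligned photograph.
c7, s1 = (512, 310), (540, 805)
print(f"sagittal inclination: {inclination_deg(c7, s1):.1f} degrees")
```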

  17. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    PubMed

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), quantitative reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, a traditional method and a Bayesian mixed-model approach, are compared. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
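
    The "traditional" calibration that the Bayesian mixed-model approach is compared against amounts to fitting a standard curve and inverting it. The sketch below illustrates that baseline on an invented QT-NASBA-style standard curve; the densities and signal values are made up, and the Bayesian hierarchical model itself is beyond a short example.

```python
import numpy as np

# Invented standard curve: known gametocyte densities (per uL) and an assay
# readout (e.g. a time-to-positivity or cycle-threshold-like signal).
std_density = np.array([10, 100, 1000, 10000, 100000], dtype=float)
std_signal = np.array([38.1, 31.9, 25.8, 20.2, 14.3])

# Traditional calibration: ordinary least squares of signal on log10(density).
slope, intercept = np.polyfit(np.log10(std_density), std_signal, 1)

def estimate_density(signal):
    """Invert the fitted standard curve to estimate pathogen density."""
    return 10 ** ((signal - intercept) / slope)

for s in (29.5, 18.7):  # signals of two hypothetical unknown samples
    print(f"signal {s} -> ~{estimate_density(s):.0f} gametocytes/uL")
```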

  18. Competitive RT-PCR Strategy for Quantitative Evaluation of the Expression of Tilapia (Oreochromis niloticus) Growth Hormone Receptor Type I

    PubMed Central

    2009-01-01

    Quantitation of gene expression requires that an accurate measurement of a specific transcript is made. In this paper, a quantitative reverse transcription-polymerase chain reaction (RT-PCR) by competition for tilapia growth hormone receptor type I is designed and validated. This experimental procedure was used to determine the abundance of growth hormone receptor type I transcript in different tilapia tissues. The results obtained with this developed competitive RT-PCR were similar to real-time PCR results reported recently. This protocol provides a reliable and less expensive alternative to real-time PCR for quantifying specific genes. PMID:19495916

  19. Development and psychometric evaluation of the urgency questionnaire for evaluating severity and health-related quality of life impact of urinary urgency in overactive bladder.

    PubMed

    Coyne, Karin S; Sexton, Chris C; Thompson, Christine; Bavendam, Tamara; Brubaker, Linda

    2015-03-01

    Urinary urgency is the cardinal symptom of overactive bladder (OAB). However, there is no single instrument that assesses the context, severity, intensity, and daily life impact of urinary urgency. The purpose of this manuscript is to describe the methods and results of the qualitative and quantitative research conducted to develop a new tool for this purpose, the Urgency Questionnaire (UQ). Qualitative data from interviews with patients with urinary urgency were used to develop and refine the items and response options of the UQ. Three studies were used to evaluate psychometric properties: a clinical trial of tolterodine (Detrol; n = 974); a psychometric validation study (n = 163); and a test-retest validation study (n = 47). Item and exploratory factor analysis (EFA) were performed to assess the subscale structure, and the psychometric performance of the resulting scales was evaluated. Fifteen Likert-scale items and four VAS questions were retained. A four-factor solution was shown to best fit the data, with the subscales: Impact on Daily Activities, Time to Control Urgency, Nocturia, and Fear of Incontinence. All subscales and VAS items demonstrated good reliability (Cronbach's α 0.79-0.94), convergent and discriminant validity, and responsiveness to change. The UQ differentiated between OAB patients and controls. The results provide quantitative evidence that urinary urgency, as assessed by the UQ, is a pathological sensation distinctive from the normal urge to void and suggest that the UQ might be a reliable, valid, and responsive instrument for evaluating the severity and HRQL impact of urinary urgency in OAB.
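
    Cronbach's alpha, the internal-consistency statistic reported for the UQ subscales, is straightforward to compute from an item-response matrix. The sketch below uses invented 5-point Likert responses purely for illustration; it is not the study's data or analysis code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1 - item_var / total_var)

# Invented responses: 6 respondents x 4 items of one hypothetical subscale.
responses = [[4, 5, 4, 5],
             [2, 2, 3, 2],
             [5, 4, 5, 5],
             [3, 3, 2, 3],
             [1, 2, 1, 2],
             [4, 4, 4, 3]]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```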

  20. Quantitation of Permethylated N-Glycans through Multiple-Reaction Monitoring (MRM) LC-MS/MS

    PubMed Central

    Zhou, Shiyue; Hu, Yunli; DeSantos-Garcia, Janie L.; Mechref, Yehia

    2015-01-01

    The important biological roles of glycans and their implications in disease development and progression have created a demand for the development of sensitive quantitative glycomics methods. Quantitation of glycans existing at low abundance is still analytically challenging. In this study, an N-linked glycan quantitation method using multiple reaction monitoring (MRM) on a triple quadrupole instrument was developed. The optimum normalized collision energy (CE) for N-glycan structures that are both sialylated and fucosylated was determined to be 30%, while it was found to be 35% for structures that are either fucosylated or sialylated. The optimum CE for high-mannose and complex-type N-glycan structures was determined to be 35%. Additionally, the use of three transitions was shown to facilitate reliable quantitation. A total of 88 N-glycan structures in human blood serum were quantified using this MRM approach. Reliable detection and quantitation of these structures was achieved when the equivalent of 0.005 μL of blood serum was analyzed. Accordingly, N-glycans can be reliably quantified in pooled human blood serum down to the hundredth-of-a-microliter level, spanning a dynamic concentration range of three orders of magnitude. MRM was also effectively utilized to quantitatively compare the expression of N-glycans derived from brain-targeting breast carcinoma cells (MDA-MB-231BR) and metastatic breast cancer cells (MDA-MB-231). Thus, the described MRM method of permethylated N-glycan structures enables a rapid and reliable identification and quantitation of glycans derived from glycoproteins purified or present in complex biological samples. PMID:25698222

  1. Reliability on intra-laboratory and inter-laboratory data of hair mineral analysis comparing with blood analysis.

    PubMed

    Namkoong, Sun; Hong, Seung Phil; Kim, Myung Hwa; Park, Byung Cheol

    2013-02-01

    Nowadays, although its clinical value remains controversial, institutions utilize hair mineral analysis. Arguments about the reliability of hair mineral analysis persist, and there have been evaluations of commercial laboratories performing hair mineral analysis. The objective of this study was to assess the reliability of intra-laboratory and inter-laboratory data at three commercial laboratories conducting hair mineral analysis, compared to serum mineral analysis. Two divided hair samples taken from near the scalp were submitted for analysis at the same time, to all laboratories, from one healthy volunteer. Each laboratory sent a report consisting of quantitative results and their interpretation of health implications. Differences among intra-laboratory and inter-laboratory data were analyzed using SPSS version 12.0 (SPSS Inc., USA). All the laboratories used identical methods for quantitative analysis, and they generated consistent numerical results according to Friedman analysis of variance. However, the normal reference ranges of each laboratory varied. As such, each laboratory interpreted the patient's health differently. For intra-laboratory data, Wilcoxon analysis suggested the laboratories generated relatively coherent data, but laboratory B did not for one element, so its reliability was doubtful. In comparison with the blood test, laboratory C generated identical results, but laboratories A and B did not. Hair mineral analysis has its limitations, considering the reliability of inter- and intra-laboratory analyses compared with blood analysis. As such, clinicians should be cautious when applying hair mineral analysis as an ancillary tool. Each laboratory included in this study requires continuous refinement to establish standardized normal reference ranges.
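
    The nonparametric comparisons named in the abstract can be reproduced generically with scipy; the element concentrations below are invented and serve only to show how a Friedman test (inter-laboratory) and a Wilcoxon signed-rank test (intra-laboratory, split samples) would be applied to this kind of data.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Invented concentrations (ug/g) of 8 elements reported by three laboratories
# on split hair samples from the same volunteer.
lab_a = np.array([162, 1.2, 0.9, 45, 210, 0.05, 12, 3.4])
lab_b = np.array([158, 1.4, 1.1, 43, 205, 0.06, 13, 3.1])
lab_c = np.array([170, 1.1, 0.8, 47, 215, 0.05, 11, 3.6])

# Inter-laboratory agreement across the three labs (Friedman ANOVA by ranks).
stat, p = friedmanchisquare(lab_a, lab_b, lab_c)
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.3f}")

# Intra-laboratory agreement between two split samples run by the same lab.
rep_1 = np.array([162, 1.2, 0.9, 45, 210, 0.05, 12, 3.4])
rep_2 = np.array([159, 1.3, 1.0, 46, 208, 0.06, 13, 3.5])
stat, p = wilcoxon(rep_1, rep_2)
print(f"Wilcoxon: W = {stat:.2f}, p = {p:.3f}")
```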

  2. Development and evaluation of a study design typology for human research.

    PubMed

    Carini, Simona; Pollock, Brad H; Lehmann, Harold P; Bakken, Suzanne; Barbour, Edward M; Gabriel, Davera; Hagler, Herbert K; Harper, Caryn R; Mollah, Shamim A; Nahm, Meredith; Nguyen, Hien H; Scheuermann, Richard H; Sim, Ida

    2009-11-14

    A systematic classification of study designs would be useful for researchers, systematic reviewers, readers, and research administrators, among others. As part of the Human Studies Database Project, we developed the Study Design Typology to standardize the classification of study designs in human research. We then performed a multiple observer masked evaluation of active research protocols in four institutions according to a standardized protocol. Thirty-five protocols were classified by three reviewers each into one of nine high-level study designs for interventional and observational research (e.g., N-of-1, Parallel Group, Case Crossover). Rater classification agreement was moderately high for the 35 protocols (Fleiss' kappa = 0.442) and higher still for the 23 quantitative studies (Fleiss' kappa = 0.463). We conclude that our typology shows initial promise for reliably distinguishing study design types for quantitative human research.
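
    Fleiss' kappa, the agreement statistic reported above for three raters per protocol, can be computed from a subjects-by-categories count matrix. The sketch below is a generic implementation with invented ratings and hypothetical design categories, not the study's data.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa. counts[i, j] = number of raters assigning subject i to
    category j; every subject must be rated by the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_subjects, _ = counts.shape
    n_raters = counts[0].sum()
    p_cat = counts.sum(axis=0) / (n_subjects * n_raters)   # category proportions
    p_subj = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_subj.mean(), (p_cat ** 2).sum()
    return (p_bar - p_e) / (1 - p_e)

# Invented data: 6 protocols, 3 raters, 4 hypothetical design categories
# (e.g. parallel group, crossover, cohort, case-control).
ratings = [[3, 0, 0, 0],
           [2, 1, 0, 0],
           [0, 3, 0, 0],
           [0, 1, 2, 0],
           [0, 0, 3, 0],
           [1, 0, 0, 2]]
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.2f}")
```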

  3. Influence of sample preparation and reliability of automated numerical refocusing in stain-free analysis of dissected tissues with quantitative phase digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Kemper, Björn; Lenz, Philipp; Bettenworth, Dominik; Krausewitz, Philipp; Domagk, Dirk; Ketelhut, Steffi

    2015-05-01

    Digital holographic microscopy (DHM) has been demonstrated to be a versatile tool for high resolution non-destructive quantitative phase imaging of surfaces and multi-modal minimally-invasive monitoring of living cell cultures in-vitro. DHM provides quantitative monitoring of physiological processes through functional imaging and structural analysis which, for example, gives new insight into signalling of cellular water permeability and cell morphology changes due to toxins and infections. In the analysis of dissected tissues, quantitative DHM phase contrast also opens up prospective application fields through stain-free imaging and the quantification of tissue density changes. We show that DHM allows imaging of different tissue layers with high contrast in unstained tissue sections. As the investigation of fixed samples represents a very important application field in pathology, we also analyzed the influence of the sample preparation. The retrieved data demonstrate that the quality of quantitative DHM phase images of dissected tissues depends strongly on the fixing method and common staining agents. As in DHM the reconstruction is performed numerically, multi-focus imaging is achieved from a single digital hologram. Thus, we evaluated the automated refocussing feature of DHM for application on different types of dissected tissues and revealed that on moderately stained samples highly reproducible holographic autofocussing can be achieved. Finally, it is demonstrated that alterations of the spatial refractive index distribution in murine and human tissue samples represent a reliable absolute parameter that is related to different degrees of inflammation in experimental colitis and Crohn's disease. This paves the way towards the usage of DHM in digital pathology for automated histological examinations and further studies to elucidate the translational potential of quantitative phase microscopy for the clinical management of patients, e.g., with inflammatory bowel disease.

  4. Test-retest and interobserver reliability of quantitative sensory testing according to the protocol of the German Research Network on Neuropathic Pain (DFNS): a multi-centre study.

    PubMed

    Geber, Christian; Klein, Thomas; Azad, Shahnaz; Birklein, Frank; Gierthmühlen, Janne; Huge, Volker; Lauchart, Meike; Nitzsche, Dorothee; Stengel, Maike; Valet, Michael; Baron, Ralf; Maier, Christoph; Tölle, Thomas; Treede, Rolf-Detlef

    2011-03-01

    Quantitative sensory testing (QST) is an instrument to assess positive and negative sensory signs, helping to identify mechanisms underlying pathologic pain conditions. In this study, we evaluated the test-retest reliability (TR-R) and the interobserver reliability (IO-R) of QST in patients with sensory disturbances of different etiologies. In 4 centres, 60 patients (37 male and 23 female, 56.4 ± 1.9 years) with lesions or diseases of the somatosensory system were included. QST comprised 13 parameters including detection and pain thresholds for thermal and mechanical stimuli. QST was performed in the clinically most affected test area and a less or unaffected control area in a morning and an afternoon session on 2 consecutive days by examiner pairs (4 QSTs/patient). For both TR-R and IO-R, there were high correlations (r=0.80-0.93) at the affected test area, except for wind-up ratio (TR-R: r=0.67; IO-R: r=0.56) and paradoxical heat sensations (TR-R: r=0.35; IO-R: r=0.44). Mean IO-R (r=0.83, 31% unexplained variance) was slightly lower than TR-R (r=0.86, 26% unexplained variance, P<.05); the difference in variance amounted to 5%. There were no differences between study centres. In a subgroup with an unaffected control area (n=43), reliabilities were significantly better in the test area (TR-R: r=0.86; IO-R: r=0.83) than in the control area (TR-R: r=0.79; IO-R: r=0.71, each P<.01), suggesting that disease-related systematic variance enhances reliability of QST. We conclude that standardized QST performed by trained examiners is a valuable diagnostic instrument with good test-retest and interobserver reliability within 2 days. With standardized training, observer bias is much lower than random variance. Quantitative sensory testing performed by trained examiners is a valuable diagnostic instrument with good interobserver and test-retest reliability for use in patients with sensory disturbances of different etiologies to help identify mechanisms of neuropathic and non-neuropathic pain. Copyright © 2010 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  5. Impact of Oriented Clay Particles on X-Ray Spectroscopy Analysis

    NASA Astrophysics Data System (ADS)

    Lim, A. J. M. S.; Syazwani, R. N.; Wijeyesekera, D. C.

    2016-07-01

    Understanding the engineering properties arising from the mineralogy and microfabric of clayey soils is very complex, which makes soil characterization difficult. Micromechanics of soils recognizes that the microstructure and mineralogy of clay have a significant influence on its engineering behaviour. To achieve a more reliable quantitative evaluation of clay mineralogy, a proper sample preparation technique for quantitative clay mineral analysis is necessary. This paper presents the quantitative evaluation of elemental analysis and chemical characterization of oriented and randomly oriented clay particles using X-ray spectroscopy. Three different types of clay, namely marine clay, bentonite and kaolin clay, were studied. The oriented samples were prepared by dispersing the clay in water and leaving it to settle on porous ceramic tiles while applying a relatively weak suction through a vacuum pump. Images from a scanning electron microscope (SEM) were also used to compare the orientation patterns produced by the two sample preparation techniques. From the quantitative X-ray spectroscopy analysis, the oriented sampling method identified mineral deposits more accurately, because it produced higher peak intensities in the spectrum and more mineral content could be identified compared to randomly oriented samples.

  6. Quantification of EEG reactivity in comatose patients

    PubMed Central

    Hermans, Mathilde C.; Westover, M. Brandon; van Putten, Michel J.A.M.; Hirsch, Lawrence J.; Gaspard, Nicolas

    2016-01-01

    Objective EEG reactivity is an important predictor of outcome in comatose patients. However, visual analysis of reactivity is prone to subjectivity and may benefit from quantitative approaches. Methods In EEG segments recorded during reactivity testing in 59 comatose patients, 13 quantitative EEG parameters were used to compare the spectral characteristics of 1-minute segments before and after the onset of stimulation (spectral temporal symmetry). Reactivity was quantified with probability values estimated using combinations of these parameters. The accuracy of probability values as a reactivity classifier was evaluated against the consensus assessment of three expert clinical electroencephalographers using visual analysis. Results The binary classifier assessing spectral temporal symmetry in four frequency bands (delta, theta, alpha and beta) showed best accuracy (Median AUC: 0.95) and was accompanied by substantial agreement with the individual opinion of experts (Gwet’s AC1: 65–70%), at least as good as inter-expert agreement (AC1: 55%). Probability values also reflected the degree of reactivity, as measured by the inter-experts’ agreement regarding reactivity for each individual case. Conclusion Automated quantitative EEG approaches based on probabilistic description of spectral temporal symmetry reliably quantify EEG reactivity. Significance Quantitative EEG may be useful for evaluating reactivity in comatose patients, offering increased objectivity. PMID:26183757
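
    A minimal sketch of the general idea of spectral temporal symmetry: compare band power in segments before and after stimulation, e.g. with an absolute log power ratio per band. The sampling rate, band limits and synthetic signals below are assumptions for illustration only; the probabilistic combination of parameters described in the paper is not reproduced.

```python
import numpy as np
from scipy.signal import welch

FS = 200  # Hz, assumed sampling rate
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(segment, fs=FS):
    """Approximate power in each band from Welch's PSD of a 1-minute segment."""
    f, psd = welch(segment, fs=fs, nperseg=4 * fs)
    df = f[1] - f[0]
    return {name: psd[(f >= lo) & (f < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

def spectral_asymmetry(pre, post, fs=FS):
    """Absolute log power ratio per band; larger values mean less symmetric
    spectra before vs. after stimulation, i.e. more evidence of reactivity."""
    p_pre, p_post = band_powers(pre, fs), band_powers(post, fs)
    return {b: abs(np.log(p_post[b] / p_pre[b])) for b in BANDS}

# Synthetic example: a reactive record gains alpha power after stimulation.
rng = np.random.default_rng(0)
t = np.arange(60 * FS) / FS
pre = rng.normal(size=t.size)
post = rng.normal(size=t.size) + 1.5 * np.sin(2 * np.pi * 10 * t)
print(spectral_asymmetry(pre, post))
```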

  7. A pilot rating scale for evaluating failure transients in electronic flight control systems

    NASA Technical Reports Server (NTRS)

    Hindson, William S.; Schroeder, Jeffery A.; Eshow, Michelle M.

    1990-01-01

    A pilot rating scale was developed to describe the effects of transients in helicopter flight-control systems on safety-of-flight and on pilot recovery action. The scale was applied to the evaluation of hardovers that could potentially occur in the digital flight-control system being designed for a variable-stability UH-60A research helicopter. Tests were conducted in a large moving-base simulator and in flight. The results of the investigation were combined with existing airworthiness criteria to determine quantitative reliability design goals for the control system.

  8. Repeated-measure validation of craniofacial metrics from three-dimensional surface scans: application to medical genetics

    NASA Astrophysics Data System (ADS)

    Lauer, Eric A.; Corner, Brian D.; Li, Peng; Beecher, Robert M.; Deutsch, Curtis

    2002-03-01

    Traditionally, medical geneticists have employed visual inspection (anthroposcopy) to clinically evaluate dysmorphology. In the last 20 years, there has been an increasing trend towards quantitative assessment to render diagnosis of anomalies more objective and reliable. These methods have focused on direct anthropometry, using a combination of classical physical anthropology tools and new instruments tailor-made to describe craniofacial morphometry. These methods are painstaking and require that the patient remain still for extended periods of time. Most recently, semiautomated techniques (e.g., structured light scanning) have been developed to capture the geometry of the face in a matter of seconds. In this paper, we establish that direct anthropometry and structured light scanning yield reliable measurements, with remarkably high levels of inter-rater and intra-rater reliability, as well as validity (contrasting the two methods).

  9. Factor Structure, Reliability and Criterion Validity of the Autism-Spectrum Quotient (AQ): A Study in Dutch Population and Patient Groups

    PubMed Central

    Bartels, Meike; Cath, Danielle C.; Boomsma, Dorret I.

    2008-01-01

    The factor structure of the Dutch translation of the Autism-Spectrum Quotient (AQ; a continuous, quantitative measure of autistic traits) was evaluated with confirmatory factor analyses in a large general population and student sample. The criterion validity of the AQ was examined in three matched patient groups (autism spectrum conditions (ASC), social anxiety disorder, and obsessive–compulsive disorder). A two factor model, consisting of a “Social interaction” factor and “Attention to detail” factor could be identified. The internal consistency and test–retest reliability of the AQ were satisfactory. High total AQ and factor scores were specific to ASC patients. Men scored higher than women and science students higher than non-science students. The Dutch translation of the AQ is a reliable instrument to assess autism spectrum conditions. PMID:18302013

  10. Quantitative evaluation of fatty degeneration of the supraspinatus and infraspinatus muscles using T2 mapping.

    PubMed

    Matsuki, Keisuke; Watanabe, Atsuya; Ochiai, Shunsuke; Kenmoku, Tomonori; Ochiai, Nobuyasu; Obata, Takayuki; Toyone, Tomoaki; Wada, Yuichi; Okubo, Toshiyuki

    2014-05-01

    Although fatty degeneration of the rotator cuff muscles has been reported to affect the outcomes of rotator cuff repairs, only a few studies have attempted to quantitatively evaluate this degeneration. T2 mapping is a quantitative magnetic resonance imaging technique that potentially evaluates the concentration of fat in muscles. The purpose of this study was to investigate fatty degeneration of the rotator cuff muscles by using T2 mapping, as well as to evaluate the reliability of T2 measurement. We obtained magnetic resonance images including T2 mapping from 184 shoulders (180 patients; 110 male patients [112 shoulders] and 70 female patients [72 shoulders]; mean age, 62 years [range, 16-84 years]). Eighty-three shoulders had no rotator cuff tear (group A), whereas 101 shoulders had tears, of which 62 were incomplete to medium (group B) and 39 were large to massive (group C). T2 values of the supraspinatus and infraspinatus muscles were measured and compared among groups. Intraobserver and interobserver reliability were also examined. The mean T2 values of the supraspinatus in groups A, B, and C were 36.3 ± 4.7 milliseconds, 44.2 ± 11.3 milliseconds, and 57.0 ± 18.8 milliseconds, respectively. The mean T2 values of the infraspinatus in groups A, B, and C were 36.1 ± 5.1 milliseconds, 40.0 ± 11.1 milliseconds, and 51.9 ± 18.2 milliseconds, respectively. The T2 value significantly increased with the extent of the tear in both muscles. Both intraobserver and interobserver reliability coefficients were greater than 0.99. T2 mapping can be a reliable tool to quantify fatty degeneration of the rotator cuff muscles. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
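
    T2 mapping of this kind typically fits a mono-exponential decay, S(TE) = S0·exp(-TE/T2), to signals acquired at several echo times. The sketch below shows such a fit via log-linear regression on invented ROI signals; it is a generic illustration, not the authors' reconstruction pipeline.

```python
import numpy as np

def fit_t2(te_ms, signal):
    """Mono-exponential T2 fit, S(TE) = S0 * exp(-TE / T2),
    via linear regression of log(signal) on TE. Returns T2 in ms."""
    te_ms = np.asarray(te_ms, dtype=float)
    log_s = np.log(np.asarray(signal, dtype=float))
    slope, _ = np.polyfit(te_ms, log_s, 1)   # slope = -1 / T2
    return -1.0 / slope

# Invented multi-echo signals from one hypothetical supraspinatus ROI
# (echo times in ms); the decay corresponds to a T2 of roughly 37-38 ms.
te = [10, 20, 30, 40, 50, 60]
roi_signal = [890, 680, 523, 400, 309, 236]
print(f"T2 = {fit_t2(te, roi_signal):.1f} ms")
```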

  11. Smile line assessment comparing quantitative measurement and visual estimation.

    PubMed

    Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie

    2011-02-01

    Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  12. Quantitative Evaluation of Performance during Robot-assisted Treatment.

    PubMed

    Peri, E; Biffi, E; Maghini, C; Servodio Iammarrone, F; Gagliardi, C; Germiniasi, C; Pedrocchi, A; Turconi, A C; Reni, G

    2016-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The great potential of robots in extracting quantitative and meaningful data is not always exploited in clinical practice. The aim of the present work is to describe a simple parameter to assess the performance of subjects during upper limb robotic training exploiting data automatically recorded by the robot, with no additional effort for patients and clinicians. Fourteen children affected by cerebral palsy (CP) underwent training with the Armeo®Spring. Each session was evaluated with P, a simple parameter that depends on the overall performance recorded, and median and interquartile values were computed to perform a group analysis. Median (interquartile) values of P significantly increased from 0.27 (0.21) at T0 to 0.55 (0.27) at T1. This improvement was functionally validated by a significant increase in the Melbourne Assessment of Unilateral Upper Limb Function. The parameter described here was able to show variations in performance over time and enabled a quantitative evaluation of motion abilities in a way that is consistent with a well-known clinical scale.

  13. Quantitative analysis of the rubric as an assessment tool: an empirical study of student peer-group rating

    NASA Astrophysics Data System (ADS)

    Hafner, John C.; Hafner, Patti M.

    2003-12-01

    Although the rubric has emerged as one of the most popular assessment tools in progressive educational programs, there is an unfortunate dearth of information in the literature quantifying the actual effectiveness of the rubric as an assessment tool in the hands of the students. This study focuses on the validity and reliability of the rubric as an assessment tool for student peer-group evaluation in an effort to further explore the use and effectiveness of the rubric. A total of 1577 peer-group ratings using a rubric for an oral presentation was used in this 3-year study involving 107 college biology students. A quantitative analysis of the rubric used in this study shows that it is used consistently by both students and the instructor across the study years. Moreover, the rubric appears to be 'gender neutral' and the students' academic strength has no significant bearing on the way that they employ the rubric. A significant, one-to-one relationship (slope = 1.0) between the instructor's assessment and the students' rating is seen across all years using the rubric. A generalizability study yields estimates of inter-rater reliability of moderate values across all years and allows for the estimation of variance components. Taken together, these data indicate that the general form and evaluative criteria of the rubric are clear and that the rubric is a useful assessment tool for peer-group (and self-) assessment by students. To our knowledge, these data provide the first statistical documentation of the validity and reliability of the rubric for student peer-group assessment.

  14. Qualitative and Semiquantitative Assessment of Exposure to Engineered Nanomaterials within the French EpiNano Program: Inter- and Intramethod Reliability Study.

    PubMed

    Guseva Canu, Irina; Jezewski-Serra, Delphine; Delabre, Laurène; Ducamp, Stéphane; Iwatsubo, Yuriko; Audignon-Durand, Sabine; Ducros, Cécile; Radauceanu, Anca; Durand, Catherine; Witschger, Olivier; Flahaut, Emmanuel

    2017-01-01

    The relatively recent development of industries working with nanomaterials has created challenges for exposure assessment. In this article, we propose a relatively simple approach to assessing nanomaterial exposures for the purposes of epidemiological studies of workers in these industries. This method consists of an onsite industrial hygiene visit of facilities, carried out individually, and a description of workstations where nano-objects and their agglomerates and aggregates (NOAA) are present, using a standardized tool, the Onsite technical logbook. To assess its reliability, we implemented this approach for assessing exposure to NOAA at seven workstations where carbon nanotubes are synthesized and functionalized. The prediction of exposure to NOAA using this method exhibited substantial agreement with that of the reference method, the latter being based on an onsite group visit, an expert's report and exposure measurements (Cohen kappa = 0.70, sensitivity = 0.88, specificity = 0.92). Intramethod comparison of results for exposure prediction showed moderate agreement between the three evaluators (two program team evaluators and one external evaluator) (weighted Fleiss kappa = 0.60, P = 0.003). Interevaluator reliability of the semiquantitative exposure characterization results was excellent between the two evaluators from the program team (Spearman rho = 0.93, P = 0.03) and fair when these two evaluators' results were compared with the external evaluator's results. The project was undertaken within the framework of the French epidemiological surveillance program EpiNano. This study allowed a first reliability assessment of the EpiNano method. However, to further validate this method, a comparison with robust quantitative exposure measurement data is necessary. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  15. NASA trend analysis procedures

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This publication is primarily intended for use by NASA personnel engaged in managing or implementing trend analysis programs. 'Trend analysis' refers to the observation of current activity in the context of the past in order to infer the expected level of future activity. NASA trend analysis was divided into 5 categories: problem, performance, supportability, programmatic, and reliability. Problem trend analysis uncovers multiple occurrences of historical hardware or software problems or failures in order to focus future corrective action. Performance trend analysis observes changing levels of real-time or historical flight vehicle performance parameters such as temperatures, pressures, and flow rates as compared to specification or 'safe' limits. Supportability trend analysis assesses the adequacy of the spaceflight logistics system; example indicators are repair-turn-around time and parts stockage levels. Programmatic trend analysis uses quantitative indicators to evaluate the 'health' of NASA programs of all types. Finally, reliability trend analysis attempts to evaluate the growth of system reliability based on a decreasing rate of occurrence of hardware problems over time. Procedures for conducting all five types of trend analysis are provided in this publication, prepared through the joint efforts of the NASA Trend Analysis Working Group.

  16. Tackling reliability and construct validity: the systematic development of a qualitative protocol for skill and incident analysis.

    PubMed

    Savage, Trevor Nicholas; McIntosh, Andrew Stuart

    2017-03-01

    It is important to understand factors contributing to and directly causing sports injuries to improve the effectiveness and safety of sports skills. The characteristics of injury events must be evaluated and described meaningfully and reliably. However, many complex skills cannot be effectively investigated quantitatively because of ethical, technological and validity considerations. Increasingly, qualitative methods are being used to investigate human movement for research purposes, but there are concerns about reliability and measurement bias of such methods. Using the tackle in Rugby union as an example, we outline a systematic approach for developing a skill analysis protocol with a focus on improving objectivity, validity and reliability. Characteristics for analysis were selected using qualitative analysis and biomechanical theoretical models and epidemiological and coaching literature. An expert panel comprising subject matter experts provided feedback and the inter-rater reliability of the protocol was assessed using ten trained raters. The inter-rater reliability results were reviewed by the expert panel and the protocol was revised and assessed in a second inter-rater reliability study. Mean agreement in the second study improved and was comparable (52-90% agreement and ICC between 0.6 and 0.9) with other studies that have reported inter-rater reliability of qualitative analysis of human movement.

  17. Psychometric properties of an instrument to measure nursing students' quality of life.

    PubMed

    Chu, Yanxiang; Xu, Min; Li, Xiuyun

    2015-07-01

    It is important for clinical nursing teachers and managers to recognize the importance of nursing students' quality of life (QOL) since they are the source of future nurses. As yet, there is no quality of life evaluation scale (QOLES) specific to them. This study designed a quantitative instrument for evaluating QOL of nursing students. The study design was a descriptive survey with mixed methods including literature review, panel discussion, Delphi method, and statistical analysis. The data were collected from 880 nursing students from four teaching hospitals in Wuhan, China. The reliability and validity of the scale were tested through completion of the QOLES in a cluster sampling method. The total scale included 18 items in three domains: physical, psychological, and social functional. The cumulative contributing rate of the three common factors was 65.23%. Cronbach's alpha coefficient of the scale was 0.82. This scale had good reliability and validity to evaluate nursing students' QOL. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. The relationship between quantitative measures of disc height and disc signal intensity with Pfirrmann score of disc degeneration.

    PubMed

    Salamat, Sara; Hutchings, John; Kwong, Clemens; Magnussen, John; Hancock, Mark J

    2016-01-01

    To assess the relationship between quantitative measures of disc height and signal intensity with the Pfirrmann disc degeneration scoring system and to test the inter-rater reliability of the quantitative measures. Participants were 76 people who had recently recovered from their last episode of acute low back pain and underwent MRI scan on a single 3T machine. At all 380 lumbar discs, quantitative measures of disc height and signal intensity were made by 2 independent raters and compared to Pfirrmann scores from a single radiologist. For quantitative measures of disc height and signal intensity a "raw" score and 2 adjusted ratios were calculated and the relationship with Pfirrmann scores was assessed. The inter-tester reliability of quantitative measures was also investigated. There was a strong linear relationship between quantitative disc signal intensity and Pfirrmann scores for grades 1-4, but not for grades 4 and 5. For disc height only, Pfirrmann grade 5 had significantly reduced disc height compared to all other grades. Results were similar regardless of whether raw or adjusted scores were used. Inter-rater reliability for the quantitative measures was excellent (ICC > 0.97). Quantitative measures of disc signal intensity were strongly related to Pfirrmann scores from grade 1 to 4; however disc height only differentiated between grade 4 and 5 Pfirrmann scores. Using adjusted ratios for quantitative measures of disc height or signal intensity did not significantly alter the relationship with Pfirrmann scores.

  19. A diameter-sensitive flow entropy method for reliability consideration in water distribution system design

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin

    2014-07-01

    Flow entropy is a measure of uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy that does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed by using other reliability methods, including Monte Carlo Simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of the two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability. To ensure reliability, a comparative analysis between the flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could be potentially integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
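
    As a rough illustration of the measure underlying this work, the sketch below computes the Shannon entropy of pipe-flow fractions, the building block on which flow-entropy reliability surrogates are based. The full network formulation, and the diameter-sensitive weighting proposed in the paper, are not reproduced here; the flows are invented.

```python
import numpy as np

def flow_entropy(pipe_flows):
    """Shannon entropy of pipe-flow fractions: S = -sum(p_i * ln(p_i)),
    with p_i = q_i / sum(q). More uniform flows give higher entropy."""
    q = np.asarray(pipe_flows, dtype=float)
    p = q / q.sum()
    return -(p * np.log(p)).sum()

# Invented outflows (L/s) of the pipes leaving one node, for two designs.
uniform_design = [12.0, 11.5, 12.5, 12.0]
skewed_design = [40.0, 4.0, 2.0, 2.0]
print(f"uniform: {flow_entropy(uniform_design):.3f}")  # close to ln(4) ~ 1.386
print(f"skewed : {flow_entropy(skewed_design):.3f}")   # lower -> less redundancy
```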

  20. 40 CFR 799.6756 - TSCA partition coefficient (n-octanol/water), generator column method.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... method, or any other reliable quantitative procedure must be used for those compounds that do not absorb... any other reliable quantitative method, aqueous solutions from the generator column enter a collecting... Research of the National Bureau of Standards, 86:361-366 (1981). (7) Fujita, T. et al. “A New Substituent...

  1. 40 CFR 799.6756 - TSCA partition coefficient (n-octanol/water), generator column method.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... method, or any other reliable quantitative procedure must be used for those compounds that do not absorb... any other reliable quantitative method, aqueous solutions from the generator column enter a collecting... Research of the National Bureau of Standards, 86:361-366 (1981). (7) Fujita, T. et al. “A New Substituent...

  2. 40 CFR 799.6756 - TSCA partition coefficient (n-octanol/water), generator column method.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... method, or any other reliable quantitative procedure must be used for those compounds that do not absorb... any other reliable quantitative method, aqueous solutions from the generator column enter a collecting... Research of the National Bureau of Standards, 86:361-366 (1981). (7) Fujita, T. et al. “A New Substituent...

  3. Quantitative evaluation of dual-flip-angle T1 mapping on DCE-MRI kinetic parameter estimation in head and neck

    PubMed Central

    Chow, Steven Kwok Keung; Yeung, David Ka Wai; Ahuja, Anil T; King, Ann D

    2012-01-01

    Purpose To quantitatively evaluate the kinetic parameter estimation for head and neck (HN) dynamic contrast-enhanced (DCE) MRI with dual-flip-angle (DFA) T1 mapping. Materials and methods Clinical DCE-MRI datasets of 23 patients with HN tumors were included in this study. T1 maps were generated based on the multiple-flip-angle (MFA) method and different DFA combinations. Tofts model parameter maps of kep, Ktrans and vp based on MFA and DFAs were calculated and compared. Fitted parameters from MFA and DFAs were quantitatively evaluated in primary tumor, salivary gland and muscle. Results T1 mapping deviations introduced by DFAs produced substantial deviations in kinetic parameter estimates in head and neck tissues. In particular, the DFA of [2º, 7º] significantly overestimated, while [7º, 12º] and [7º, 15º] significantly underestimated, Ktrans and vp (P<0.01). [2º, 15º] achieved the smallest but still statistically significant overestimation for Ktrans and vp in primary tumors, 32.1% and 16.2% respectively. kep fitting results by DFAs were relatively close to the MFA reference compared to Ktrans and vp. Conclusions T1 deviations induced by DFA could result in significant errors in kinetic parameter estimation, particularly Ktrans and vp, through Tofts model fitting. The MFA method should be more reliable and robust for accurate quantitative pharmacokinetic analysis in head and neck. PMID:23289084
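
    Variable-flip-angle T1 estimates of this kind are commonly computed from the spoiled gradient-echo (SPGR) signal equation via its standard linearization, S/sin(a) = E1·S/tan(a) + M0·(1 - E1) with E1 = exp(-TR/T1). The sketch below demonstrates that calculation on noiseless synthetic signals with invented TR, T1 and flip angles; it is a generic illustration, not the authors' processing pipeline.

```python
import numpy as np

def t1_from_flip_angles(signals, flip_deg, tr_ms):
    """Estimate T1 (ms) from spoiled gradient-echo signals at two or more
    flip angles using the linearized SPGR equation:
        S/sin(a) = E1 * S/tan(a) + M0*(1 - E1),  E1 = exp(-TR/T1)."""
    s = np.asarray(signals, dtype=float)
    a = np.deg2rad(np.asarray(flip_deg, dtype=float))
    y = s / np.sin(a)
    x = s / np.tan(a)
    e1, _ = np.polyfit(x, y, 1)   # slope of the fitted line is E1
    return -tr_ms / np.log(e1)

# Invented example: TR = 5 ms, true T1 = 1000 ms, M0 = 1000 (arbitrary units).
tr, t1_true, m0 = 5.0, 1000.0, 1000.0
e1 = np.exp(-tr / t1_true)
angles = [2.0, 7.0, 12.0, 15.0]   # a multiple-flip-angle acquisition
sig = [m0 * np.sin(np.deg2rad(a)) * (1 - e1) / (1 - e1 * np.cos(np.deg2rad(a)))
       for a in angles]
print(f"estimated T1: {t1_from_flip_angles(sig, angles, tr):.0f} ms")
# A dual-flip-angle pair uses the same formula with only two angles, which
# leaves the fit more sensitive to noise and T1 bias, as the study reports.
```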

  4. Evaluating 'good governance': The development of a quantitative tool in the Greater Serengeti Ecosystem.

    PubMed

    Kisingo, Alex; Rollins, Rick; Murray, Grant; Dearden, Phil; Clarke, Marlea

    2016-10-01

    Protected areas (PAs) can provide important benefits to conservation and to communities. A key factor in the effective delivery of these benefits is the role of governance. There has been a growth in research developing frameworks to evaluate 'good' PA governance, usually drawing on a set of principles that are associated with groups of indicators. In contrast to dominant qualitative approaches, this paper describes the development of a quantitative method for measuring effectiveness of protected area governance, as perceived by stakeholders in the Greater Serengeti Ecosystem in Tanzania. The research developed a quantitative method for developing effectiveness measures of PA governance, using a set of 65 statements related to governance principles developed from a literature review. The instrument was administered to 389 individuals from communities located near PAs in the Greater Serengeti Ecosystem. The results of a factor analysis suggest that statements load onto 10 factors that demonstrate high psychometric validity as measured by factor loadings, explained variance, and Cronbach's alpha reliability. The ten common factors that were extracted were: 1) legitimacy, 2) transparency and accountability, 3) responsiveness, 4) fairness, 5) participation, 6) ecosystem based management (EBM) and connectivity, 7) resilience, 8) achievements, 9) consensus orientation, and 10) power. The paper concludes that quantitative surveys can be used to evaluate governance of protected areas from a community-level perspective. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Establishment of quality assurance for respiratory-gated radiotherapy using a respiration-simulating phantom and gamma index: Evaluation of accuracy taking into account tumor motion and respiratory cycle

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Seung; Im, In-Chul; Kang, Su-Man; Goo, Eun-Hoe; Baek, Seong-Min

    2013-11-01

    The purpose of this study is to present a new method of quality assurance (QA) in order to ensure effective evaluation of the accuracy of respiratory-gated radiotherapy (RGR). This would help in quantitatively analyzing the patient's respiratory cycle and respiration-induced tumor motion and in performing a subsequent comparative analysis of dose distributions, using the gamma-index method, as reproduced in our in-house developed respiration-simulating phantom. Therefore, we designed a respiration-simulating phantom capable of reproducing the patient's respiratory cycle and respiration-induced tumor motion and evaluated the accuracy of RGR by estimating its pass rates. We applied gamma-index passing criteria with accepted tolerances of 3% dose difference and 3 mm distance-to-agreement between the dose distribution calculated by the treatment planning system (TPS) and the actual dose distribution of RGR. The pass rate clearly increased as the gating width was narrowed. When respiration-induced tumor motion was 12 mm or less, pass rates of 85% and above were achieved for the 30-70% respiratory phase, and pass rates of 90% and above were achieved for the 40-60% respiratory phase. However, for respiratory cycles with a very small fluctuation range, the pass rates alone proved unreliable for evaluating the accuracy of RGR. Therefore, accurate and reliable outcomes of radiotherapy will be obtainable only by establishing a novel QA system using the respiration-simulating phantom, the gamma-index analysis, and a quantitative analysis of diaphragmatic motion, enabling an indirect measurement of tumor motion.
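
    For readers unfamiliar with the gamma-index comparison used above, the following minimal sketch evaluates a global 1-D gamma index with 3%/3 mm criteria between a planned and a delivered dose profile. The profiles are synthetic Gaussians chosen for illustration; the phantom study would apply the same idea to measured 2-D or 3-D distributions.

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, spacing_mm, dd=0.03, dta_mm=3.0):
    """Global 1-D gamma index: for each reference point, minimise the combined
    dose-difference / distance-to-agreement metric over all evaluated points."""
    ref_dose = np.asarray(ref_dose, float)
    eval_dose = np.asarray(eval_dose, float)
    x = np.arange(len(ref_dose)) * spacing_mm
    d_max = ref_dose.max()                      # global normalisation
    gammas = np.empty_like(ref_dose)
    for i, (xi, di) in enumerate(zip(x, ref_dose)):
        dose_term = (eval_dose - di) / (dd * d_max)
        dist_term = (x - xi) / dta_mm
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

# Illustrative profiles: a Gaussian "plan" profile vs. a 1 mm shifted delivery.
x = np.linspace(-30, 30, 121)                   # 0.5 mm grid
plan = 100 * np.exp(-x**2 / (2 * 10**2))
measured = 100 * np.exp(-(x - 1.0)**2 / (2 * 10**2))
g = gamma_1d(plan, measured, spacing_mm=0.5)
print("pass rate: %.1f%%" % (100 * np.mean(g <= 1)))   # fraction with gamma <= 1
```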

  6. Designing automation for human use: empirical studies and quantitative models.

    PubMed

    Parasuraman, R

    2000-07-01

    An emerging knowledge base of human performance research can provide guidelines for designing automation that can be used effectively by human operators of complex systems. Which functions should be automated and to what extent in a given system? A model for types and levels of automation that provides a framework and an objective basis for making such choices is described. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design when using the model. Four human performance areas are considered--mental workload, situation awareness, complacency and skill degradation. Secondary evaluative criteria include such factors as automation reliability, the risks of decision/action consequences and the ease of systems integration. In addition to this qualitative approach, quantitative models can inform design. Several computational and formal models of human interaction with automation that have been proposed by various researchers are reviewed. An important future research need is the integration of qualitative and quantitative approaches. Application of these models provides an objective basis for designing automation for effective human use.

  7. Inter-rater agreement in evaluation of disability: systematic review of reproducibility studies

    PubMed Central

    Barth, Jürgen; de Boer, Wout E L; Busse, Jason W; Hoving, Jan L; Kedzia, Sarah; Couban, Rachel; Fischer, Katrin; von Allmen, David Y; Spanjer, Jerry

    2017-01-01

    Objectives To explore agreement among healthcare professionals assessing eligibility for work disability benefits. Design Systematic review and narrative synthesis of reproducibility studies. Data sources Medline, Embase, and PsycINFO searched up to 16 March 2016, without language restrictions, and review of bibliographies of included studies. Eligibility criteria Observational studies investigating reproducibility among healthcare professionals performing disability evaluations using a global rating of working capacity and reporting inter-rater reliability by a statistical measure or descriptively. Studies could be conducted in insurance settings, where decisions on ability to work include normative judgments based on legal considerations, or in research settings, where decisions on ability to work disregard normative considerations.Teams of paired reviewers identified eligible studies, appraised their methodological quality and generalisability, and abstracted results with pretested forms. As heterogeneity of research designs and findings impeded a quantitative analysis, a descriptive synthesis stratified by setting (insurance or research) was performed. Results From 4562 references, 101 full text articles were reviewed. Of these, 16 studies conducted in an insurance setting and seven in a research setting, performed in 12 countries, met the inclusion criteria. Studies in the insurance setting were conducted with medical experts assessing claimants who were actual disability claimants or played by actors, hypothetical cases, or short written scenarios. Conditions were mental (n=6, 38%), musculoskeletal (n=4, 25%), or mixed (n=6, 38%). Applicability of findings from studies conducted in an insurance setting to real life evaluations ranged from generalisable (n=7, 44%) and probably generalisable (n=3, 19%) to probably not generalisable (n=6, 37%). Median inter-rater reliability among experts was 0.45 (range intraclass correlation coefficient 0.86 to κ−0.10). Inter-rater reliability was poor in six studies (37%) and excellent in only two (13%). This contrasts with studies conducted in the research setting, where the median inter-rater reliability was 0.76 (range 0.91-0.53), and 71% (5/7) studies achieved excellent inter-rater reliability. Reliability between assessing professionals was higher when the evaluation was guided by a standardised instrument (23 studies, P=0.006). No such association was detected for subjective or chronic health conditions or the studies’ generalisability to real world evaluation of disability (P=0.46, 0.45, and 0.65, respectively). Conclusions Despite their common use and far reaching consequences for workers claiming disabling injury or illness, research on the reliability of medical evaluations of disability for work is limited and indicates high variation in judgments among assessing professionals. Standardising the evaluation process could improve reliability. Development and testing of instruments and structured approaches to improve reliability in evaluation of disability are urgently needed. PMID:28122727

  8. A quantitative approach to evaluating caring in nursing simulation.

    PubMed

    Eggenberger, Terry L; Keller, Kathryn B; Chase, Susan K; Payne, Linda

    2012-01-01

    This study was designed to test a quantitative method of measuring caring in the simulated environment. Since competency in caring is central to nursing practice, ways of including caring concepts in designing scenarios and in evaluation of performance need to be developed. Coates' Caring Efficacy scales were adapted for simulation and named the Caring Efficacy Scale-Simulation Student Version (CES-SSV) and Caring Efficacy Scale-Simulation Faculty Version (CES-SFV). A correlational study was designed to compare student self-ratings with faculty ratings on caring efficacy during an adult acute simulation experience with traditional and accelerated baccalaureate students in a nursing program grounded in caring theory. Student self-ratings were significantly correlated with objective ratings (r = 0.345, 0.356). Both the CES-SSV and the CES-SFV were found to have excellent internal consistency and significantly correlated interrater reliability. They were useful in measuring caring in the simulated learning environment.

  9. Quality of Death Rates by Race and Hispanic Origin: A Summary of Current Research, 1999. Vital and Health Statistics. Series 2: Data Evaluation and Methods Research. No. 128.

    ERIC Educational Resources Information Center

    National Center for Health Statistics (DHHS/PHS), Hyattsville, MD.

    This report summarizes current knowledge and research on the quality and reliability of death rates by race and Hispanic origin in official mortality statistics of the United States produced by the National Center for Health Statistics (NCHS). It provides a quantitative assessment of bias in death rates by race and Hispanic origin and identifies…

  10. The psychophysiological assessment method for pilot's professional reliability.

    PubMed

    Zhang, L M; Yu, L S; Wang, K N; Jing, B S; Fang, C

    1997-05-01

    Previous research has shown that a pilot's professional reliability depends on two related factors: the pilot's functional state and the demands of task workload. The Psychophysiological Reserve Capacity (PRC) is defined as a pilot's ability to accomplish additive tasks without reducing the performance of the primary task (flight task). We hypothesized that the PRC was a mirror of the pilot's functional state. The purpose of this study was to probe the psychophysiological method for evaluating a pilot's professional reliability on a simulator. The PRC Comprehensive Evaluating System (PRCCES), which was used in the experiment, included four subsystems: a) quantitative evaluation system for pilot's performance on simulator; b) secondary task display and quantitative estimating system; c) multiphysiological data monitoring and statistical system; and d) comprehensive evaluation system for pilot PRC. Two studies were performed. In study one, 63 healthy and 13 hospitalized pilots participated. Each pilot performed a double 180 degrees circuit flight program with and without a secondary task (three-digit operation). The operator performance, score of secondary task and cost of physiological effort were measured and compared by PRCCES in the two conditions. Then, each pilot's flight skill in training was subjectively scored by instructor pilot ratings. In study two, 7 healthy pilots volunteered to take part in the experiment on the effects of sleep deprivation on pilots' PRC. Each participant had PRC tested pre- and post-8 h sleep deprivation. The results show that the PRC values of healthy pilots were positively correlated with abilities of flexibility, operating and correcting deviation, attention distribution, and accuracy of instrument flight in the air (r = 0.27-0.40, p < 0.05), and negatively correlated with emotional anxiety in flight (r = -0.40, p < 0.05). The values of PRC in healthy pilots (0.61 +/- 0.17) were significantly higher than those of hospitalized pilots (0.43 +/- 0.15) (p < 0.05). The PRC value after 8 h sleep loss (0.50 +/- 0.17) was significantly lower than that before sleep loss (0.70 +/- 0.15) (p < 0.05). We conclude that a pilot's PRC, which was closely related to flight ability and functional state, could partly represent the pilot's professional reliability. Further research on the use of a pilot's PRC as a predictor of mental workload in aircraft design is warranted.

  11. Detection and semi-quantification of Strongylus vulgaris DNA in equine faeces by real-time quantitative PCR.

    PubMed

    Nielsen, Martin K; Peterson, David S; Monrad, Jesper; Thamsborg, Stig M; Olsen, Susanne N; Kaplan, Ray M

    2008-03-01

    Strongylus vulgaris is an important strongyle nematode with high pathogenic potential infecting horses world-wide. Several decades of intensive anthelmintic use have virtually eliminated clinical disease caused by S. vulgaris, but have also caused high levels of anthelmintic resistance in equine small strongyle (cyathostomin) nematodes. Recommendations aimed at limiting the development of anthelmintic resistance by reducing treatment intensity raise a simultaneous demand for reliable and accurate diagnostic tools for detecting important parasitic pathogens. Presently, the only means available to differentiate among strongyle species in a faecal sample is by identifying individual L3 larvae following a two-week coproculture procedure. The aim of the present study was to overcome this diagnostic obstacle by developing a fluorescence-based quantitative PCR assay capable of identifying S. vulgaris eggs in faecal samples from horses. Species-specific primers and a TaqMan probe were designed by alignment of published ribosomal DNA sequences of the second internal transcribed spacer of cyathostomin and Strongylus spp. nematodes. The assay was tested for specificity and optimized using genomic DNA extracted from identified male worms of Strongylus and cyathostomin species. In addition, eggs were collected from adult female worms and used to evaluate the quantitative potential of the assay. Statistically significant linear relationships were found between egg numbers and cycle of threshold (Ct) values. PCR results were unaffected by the presence of cyathostomin DNA in the sample and there was no indication of PCR inhibition by faecal sources. A field evaluation on faecal samples obtained from four Danish horse farms revealed a good agreement with the traditional larval culture (kappa-value=0.78), but with a significantly higher performance of the PCR assay. An association between Ct values and S. vulgaris larval counts was statistically significant. The present assay can reliably and semi-quantitatively detect minute quantities of S. vulgaris eggs in faecal samples.
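
    The linear relationship between Ct values and template quantity is the basis of semi-quantification in assays like this one. Below is a minimal standard-curve sketch with invented Ct values for a 10-fold dilution series; the slope, R², efficiency, and back-calculation are the generic qPCR calculations, not the calibration reported in the study.

```python
import numpy as np

# Hypothetical standard curve: Ct values measured for a 10-fold dilution series.
log10_eggs = np.array([0, 1, 2, 3, 4])          # log10 of template quantity
ct = np.array([33.1, 29.8, 26.4, 23.1, 19.7])   # invented Ct values

# Linear fit: Ct = slope * log10(quantity) + intercept
slope, intercept = np.polyfit(log10_eggs, ct, 1)
r2 = np.corrcoef(log10_eggs, ct)[0, 1] ** 2
efficiency = 10 ** (-1 / slope) - 1             # amplification efficiency

def quantify(ct_unknown):
    """Back-calculate template quantity (egg equivalents) from an observed Ct."""
    return 10 ** ((ct_unknown - intercept) / slope)

print(f"slope={slope:.2f}, R^2={r2:.3f}, efficiency={efficiency:.2%}")
print(f"Ct 25.0 -> {quantify(25.0):.1f} egg equivalents")
```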

  12. Translation, reliability, and clinical utility of the Melbourne Assessment 2.

    PubMed

    Gerber, Corinna N; Plebani, Anael; Labruyère, Rob

    2017-10-12

    The aims were to (i) provide a German translation of the Melbourne Assessment 2 (MA2), a quantitative test to measure unilateral upper limb function in children with neurological disabilities, and (ii) evaluate its reliability and aspects of clinical utility. After its translation into German and approval of the back translation by the original authors, the MA2 was performed and videotaped twice with 30 children with neuromotor disorders. For each participant, two raters scored the video of the first test for inter-rater reliability. To determine test-retest reliability, one rater additionally scored the video of the second test while the other rater repeated the scoring of the first video to evaluate intra-rater reliability. Time needed for rater training, test administration, and scoring was recorded. The four subscale scores showed excellent intra-, inter-rater, and test-retest reliability with intraclass correlation coefficients of 0.90-1.00 (95%-confidence intervals 0.78-1.00). Score items revealed substantial to almost perfect intra-rater reliability (weighted kappa kw = 0.66-1.00) for the more affected side. Score item inter-rater and test-retest reliability of the same extremity were, with one exception, moderate to almost perfect (kw = 0.42-0.97; kw = 0.40-0.89). Furthermore, the MA2 was feasible and acceptable for patients and clinicians. The MA2 showed excellent subscale and moderate to almost perfect score item reliability. Implications for Rehabilitation: There is a lack of high-quality studies about psychometric properties of upper limb measurement tools in the neuropediatric population. The Melbourne Assessment 2 is a promising tool for reliable measurement of unilateral upper limb movement quality in the neuropediatric population. The Melbourne Assessment 2 is acceptable and practicable to therapists and patients for routine use in clinical care.
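
    The weighted kappa statistics quoted above penalise larger disagreements on an ordinal score more heavily than small ones. The sketch below computes a quadratically (or linearly) weighted kappa for two raters; the ratings are invented and the category count is a placeholder, not MA2 item data.

```python
import numpy as np

def weighted_kappa(r1, r2, n_categories, weighting="quadratic"):
    """Weighted kappa for two raters scoring the same items on an ordinal scale."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    obs = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()                              # observed proportions
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # chance agreement
    i, j = np.indices((n_categories, n_categories))
    if weighting == "quadratic":
        w = ((i - j) / (n_categories - 1)) ** 2
    else:                                         # linear weights
        w = np.abs(i - j) / (n_categories - 1)
    return 1 - (w * obs).sum() / (w * expected).sum()

# Invented example: two raters scoring 20 items on a 0-4 ordinal scale.
rater_a = [0, 1, 2, 2, 3, 4, 4, 1, 2, 3, 0, 2, 3, 4, 1, 2, 2, 3, 4, 0]
rater_b = [0, 1, 2, 3, 3, 4, 3, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 3, 4, 0]
print(round(weighted_kappa(rater_a, rater_b, n_categories=5), 2))
```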

  13. Quantification of EEG reactivity in comatose patients.

    PubMed

    Hermans, Mathilde C; Westover, M Brandon; van Putten, Michel J A M; Hirsch, Lawrence J; Gaspard, Nicolas

    2016-01-01

    EEG reactivity is an important predictor of outcome in comatose patients. However, visual analysis of reactivity is prone to subjectivity and may benefit from quantitative approaches. In EEG segments recorded during reactivity testing in 59 comatose patients, 13 quantitative EEG parameters were used to compare the spectral characteristics of 1-minute segments before and after the onset of stimulation (spectral temporal symmetry). Reactivity was quantified with probability values estimated using combinations of these parameters. The accuracy of probability values as a reactivity classifier was evaluated against the consensus assessment of three expert clinical electroencephalographers using visual analysis. The binary classifier assessing spectral temporal symmetry in four frequency bands (delta, theta, alpha and beta) showed best accuracy (Median AUC: 0.95) and was accompanied by substantial agreement with the individual opinion of experts (Gwet's AC1: 65-70%), at least as good as inter-expert agreement (AC1: 55%). Probability values also reflected the degree of reactivity, as measured by the inter-experts' agreement regarding reactivity for each individual case. Automated quantitative EEG approaches based on probabilistic description of spectral temporal symmetry reliably quantify EEG reactivity. Quantitative EEG may be useful for evaluating reactivity in comatose patients, offering increased objectivity. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
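
    As a simplified illustration of the "spectral temporal symmetry" idea, the sketch below compares band power in pre- and post-stimulation segments using Welch's method and reports an absolute log power ratio per band (values near zero indicate symmetry, i.e. no reactivity). It is a single symmetry measure on synthetic signals, not the study's multi-parameter probabilistic classifier.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(segment, fs):
    """Mean spectral power in each frequency band for a 1-D EEG segment."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs * 2)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def reactivity_scores(pre, post, fs):
    """Absolute log-ratio of post- vs pre-stimulation power per band."""
    p_pre, p_post = band_powers(pre, fs), band_powers(post, fs)
    return {b: abs(np.log(p_post[b] / p_pre[b])) for b in BANDS}

# Synthetic 1-minute segments at 250 Hz: background noise, with extra alpha
# activity appearing after a simulated stimulus onset.
fs, t = 250, np.arange(0, 60, 1 / 250)
rng = np.random.default_rng(1)
pre = rng.normal(0, 1, t.size)
post = rng.normal(0, 1, t.size) + 0.8 * np.sin(2 * np.pi * 10 * t)
print({b: round(v, 2) for b, v in reactivity_scores(pre, post, fs).items()})
```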

  14. Quantitative assessment of fatty infiltration and muscle volume of the rotator cuff muscles using 3-dimensional 2-point Dixon magnetic resonance imaging.

    PubMed

    Matsumura, Noboru; Oguro, Sota; Okuda, Shigeo; Jinzaki, Masahiro; Matsumoto, Morio; Nakamura, Masaya; Nagura, Takeo

    2017-10-01

    In patients with rotator cuff tears, muscle degeneration is known to be a predictor of irreparable tears and poor outcomes after surgical repair. Fatty infiltration and volume of the whole muscles constituting the rotator cuff were quantitatively assessed using 3-dimensional 2-point Dixon magnetic resonance imaging. Ten shoulders with a partial-thickness tear, 10 shoulders with an isolated supraspinatus tear, and 10 shoulders with a massive tear involving supraspinatus and infraspinatus were compared with 10 control shoulders after matching age and sex. With segmentation of muscle boundaries, the fat fraction value and the volume of the whole rotator cuff muscles were computed. After reliabilities were determined, differences in fat fraction, muscle volume, and fat-free muscle volume were evaluated. Intra-rater and inter-rater reliabilities were regarded as excellent for fat fraction and muscle volume. Tendon rupture adversely increased the fat fraction value of the respective rotator cuff muscle (P < .002). In the massive tear group, muscle volume was significantly decreased in the infraspinatus (P = .035) and increased in the teres minor (P = .039). With subtraction of fat volume, a significant decrease of fat-free volume of the supraspinatus muscle became apparent with a massive tear (P = .003). Three-dimensional measurement could evaluate fatty infiltration and muscular volume with excellent reliabilities. The present study showed that chronic rupture of the tendon adversely increases the fat fraction of the respective muscle and indicates that the residual capacity of the rotator cuff muscles might be overestimated in patients with severe fatty infiltration. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
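
    The quantities reported above follow directly from the Dixon water and fat images once the muscle is segmented: the voxel-wise fat fraction is fat / (fat + water), and the fat-free volume discounts the segmented volume by the mean fat fraction. The sketch below shows these calculations on synthetic arrays; the mask, signal levels, and voxel size are illustrative assumptions.

```python
import numpy as np

def fat_fraction_stats(fat_img, water_img, mask, voxel_volume_ml):
    """Fat fraction and volumes inside a segmented muscle mask."""
    fat = fat_img[mask].astype(float)
    water = water_img[mask].astype(float)
    ff = fat / (fat + water + 1e-9)             # per-voxel fat fraction
    muscle_volume = mask.sum() * voxel_volume_ml
    return {"fat_fraction_%": 100 * ff.mean(),
            "muscle_volume_ml": muscle_volume,
            "fat_free_volume_ml": muscle_volume * (1 - ff.mean())}

# Synthetic example: a block-shaped "muscle" mask with roughly 20% fat signal.
rng = np.random.default_rng(2)
shape = (40, 40, 20)
mask = np.zeros(shape, dtype=bool)
mask[10:30, 10:30, 5:15] = True
water = rng.normal(100, 5, shape)
fat = rng.normal(25, 5, shape)
print(fat_fraction_stats(fat, water, mask, voxel_volume_ml=0.004))
```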

  15. Quantitative estimation of the high-intensity zone in the lumbar spine: comparison between the symptomatic and asymptomatic population.

    PubMed

    Liu, Chao; Cai, Hong-Xin; Zhang, Jian-Feng; Ma, Jian-Jun; Lu, Yin-Jiang; Fan, Shun-Wu

    2014-03-01

    The high-intensity zone (HIZ) on magnetic resonance imaging (MRI) has been studied for more than 20 years, but its diagnostic value in low back pain (LBP) is limited by its high incidence in asymptomatic subjects. Little effort has been made to improve the objective assessment of HIZ. To develop quantitative measurements for HIZ and estimate intra- and interobserver reliability and to clarify differences in the signal intensity of HIZ in patients with or without LBP. A measurement reliability and prospective comparative study. A consecutive series of patients with LBP between June 2010 and May 2011 (group A) and a successive series of asymptomatic controls during the same period (group B). Incidence of HIZ; quantitative measures, including area of disc, area and signal intensity of HIZ, and magnetic resonance imaging index; and intraclass correlation coefficients (ICCs) for intra- and interobserver reliability. On the basis of HIZ criteria, a series of quantitative dimension and signal intensity measures was developed for assessing HIZ. Two experienced spine surgeons traced the region of interest twice within 4 weeks for assessment of the intra- and interobserver reliability. The quantitative variables were compared between groups A and B. There were 72 patients with LBP and 79 asymptomatic controls enrolled in this study. The prevalence of HIZ in group A and group B was 45.8% and 20.2%, respectively. The intraobserver agreement was excellent for the quantitative measures (ICC=0.838-0.977) as well as interobserver reliability (ICC=0.809-0.935). The mean signal of HIZ in group A was significantly brighter than in group B (57.55±14.04% vs. 45.61±7.22%, p=.000). There was no statistically significant difference in the area of the disc or the HIZ between the two groups. The magnetic resonance imaging index was found to be higher in group A when compared with group B (3.94±1.71 vs. 3.06±1.50), but with a p value of .050. A series of quantitative measurements for HIZ was established and demonstrated excellent intra- and interobserver reliability. The signal intensity of HIZ was different in patients with or without LBP, and a significantly brighter signal was observed in symptomatic subjects. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Test-retest reliability of computer-based video analysis of general movements in healthy term-born infants.

    PubMed

    Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars

    2015-10-01

    A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion and the variability of the spatial center of motion of the infant, respectively. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively; and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. There were significantly lower CSD values in the recordings with continual FMs compared to the recordings with intermittent FMs (p<0.05). This study showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
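
    Test-retest agreement of this kind is usually summarised with intraclass correlation coefficients computed from ANOVA mean squares. The sketch below computes ICC(1,1) and ICC(3,1) for a subjects-by-sessions matrix using the standard Shrout-Fleiss formulas; the "quantity of motion" data are simulated placeholders, not the study's recordings.

```python
import numpy as np

def icc(data):
    """ICC(1,1) and ICC(3,1) for an (n_subjects x k_sessions) matrix."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    sess_means = data.mean(axis=0)
    ss_total = ((data - grand) ** 2).sum()
    ss_subj = k * ((subj_means - grand) ** 2).sum()
    ss_sess = n * ((sess_means - grand) ** 2).sum()
    ms_subj = ss_subj / (n - 1)
    ms_within = (ss_total - ss_subj) / (n * (k - 1))                 # one-way
    ms_error = (ss_total - ss_subj - ss_sess) / ((n - 1) * (k - 1))  # two-way
    icc11 = (ms_subj - ms_within) / (ms_subj + (k - 1) * ms_within)
    icc31 = (ms_subj - ms_error) / (ms_subj + (k - 1) * ms_error)
    return icc11, icc31

# Simulated test-retest data: 75 infants, 2 recordings of a motion variable.
rng = np.random.default_rng(3)
true_q = rng.normal(50, 10, 75)
sessions = np.column_stack([true_q + rng.normal(0, 4, 75),
                            true_q + rng.normal(0, 4, 75)])
icc11, icc31 = icc(sessions)
print(f"ICC(1,1)={icc11:.2f}  ICC(3,1)={icc31:.2f}")
```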

  17. Recent advances in computational structural reliability analysis methods

    NASA Astrophysics Data System (ADS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonate vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  18. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonate vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
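
    To make the contrast with safety-factor design concrete, the following sketch estimates the failure probability of a single limit state g = R - S (strength minus load effect) by Monte Carlo sampling. The distributions and their parameters are illustrative assumptions, not values from the work summarised above, and a real analysis would typically use FORM/SORM or variance-reduction methods for very small probabilities.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# Illustrative limit state g = R - S: failure occurs when g < 0.
strength = rng.normal(loc=500.0, scale=40.0, size=n)              # e.g. MPa
stress = rng.lognormal(mean=np.log(300.0), sigma=0.15, size=n)    # load effect

g = strength - stress
pf = np.mean(g < 0)                       # Monte Carlo failure probability
beta = np.mean(g) / np.std(g)             # crude reliability-index analogue

print(f"deterministic safety factor: {500.0 / 300.0:.2f}")
print(f"estimated failure probability: {pf:.2e}")
print(f"approximate reliability index (mean/std of g): {beta:.2f}")
```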

  19. Evaluating Landscape Options for Corridor Restoration between Giant Panda Reserves

    PubMed Central

    Wang, Fang; McShea, William J.; Wang, Dajun; Li, Sheng; Zhao, Qing; Wang, Hao; Lu, Zhi

    2014-01-01

    The establishment of corridors can offset the negative effects of habitat fragmentation by connecting isolated habitat patches. However, the practical value of corridor planning is minimal if corridor identification is not based on reliable quantitative information about species-environment relationships. An example of this need for quantitative information is planning for giant panda conservation. Although the species has been the focus of intense conservation efforts for decades, most corridor projects remain hypothetical due to the lack of reliable quantitative researches at an appropriate spatial scale. In this paper, we evaluated a framework for giant panda forest corridor planning. We linked our field survey data with satellite imagery, and conducted species occupancy modelling to examine the habitat use of giant panda within the potential corridor area. We then conducted least-cost and circuit models to identify potential paths of dispersal across the landscape, and compared the predicted cost under current conditions and alternative conservation management options considered during corridor planning. We found that due to giant panda's association with areas of low elevation and flat terrain, human infrastructures in the same area have resulted in corridor fragmentation. We then identified areas with high potential to function as movement corridors, and our analysis of alternative conservation scenarios showed that both forest/bamboo restoration and automobile tunnel construction would significantly improve the effectiveness of corridor, while residence relocation would not significantly improve corridor effectiveness in comparison with the current condition. The framework has general value in any conservation activities that anticipate improving habitat connectivity in human modified landscapes. Specifically, our study suggested that, in this landscape, automobile tunnels are the best means to remove current barriers to giant panda movements caused by anthropogenic interferences. PMID:25133757

  20. Evaluating landscape options for corridor restoration between giant panda reserves.

    PubMed

    Wang, Fang; McShea, William J; Wang, Dajun; Li, Sheng; Zhao, Qing; Wang, Hao; Lu, Zhi

    2014-01-01

    The establishment of corridors can offset the negative effects of habitat fragmentation by connecting isolated habitat patches. However, the practical value of corridor planning is minimal if corridor identification is not based on reliable quantitative information about species-environment relationships. An example of this need for quantitative information is planning for giant panda conservation. Although the species has been the focus of intense conservation efforts for decades, most corridor projects remain hypothetical due to the lack of reliable quantitative researches at an appropriate spatial scale. In this paper, we evaluated a framework for giant panda forest corridor planning. We linked our field survey data with satellite imagery, and conducted species occupancy modelling to examine the habitat use of giant panda within the potential corridor area. We then conducted least-cost and circuit models to identify potential paths of dispersal across the landscape, and compared the predicted cost under current conditions and alternative conservation management options considered during corridor planning. We found that due to giant panda's association with areas of low elevation and flat terrain, human infrastructures in the same area have resulted in corridor fragmentation. We then identified areas with high potential to function as movement corridors, and our analysis of alternative conservation scenarios showed that both forest/bamboo restoration and automobile tunnel construction would significantly improve the effectiveness of corridor, while residence relocation would not significantly improve corridor effectiveness in comparison with the current condition. The framework has general value in any conservation activities that anticipate improving habitat connectivity in human modified landscapes. Specifically, our study suggested that, in this landscape, automobile tunnels are the best means to remove current barriers to giant panda movements caused by anthropogenic interferences.
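
    Least-cost corridor identification of the kind described above reduces, at its core, to a shortest-path search over a resistance raster. The toy sketch below runs Dijkstra's algorithm on a small hypothetical cost grid with a high-resistance "road" band and one low-cost crossing; the grid, costs, and endpoints are invented and are not the occupancy-derived surfaces used in the study.

```python
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    """Dijkstra over a 4-connected resistance raster; returns total cost and path."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return dist[goal], path[::-1]

# Toy resistance raster: low-cost habitat (1), a high-cost road band (20),
# and one restored crossing (3) representing, say, a tunnel over the road.
cost = np.ones((6, 8))
cost[:, 4] = 20.0
cost[2, 4] = 3.0
total, path = least_cost_path(cost, start=(5, 0), goal=(0, 7))
print(total, path)
```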

  1. Quantification of rice brown leaf spot through Taqman real-time PCR specific to the unigene encoding Cochliobolus miyabeanus SCYTALONE DEHYDRATASE1 involved in fungal melanin biosynthesis.

    PubMed

    Su'udi, Mukhamad; Park, Jong-Mi; Kang, Woo-Ri; Park, Sang-Ryeol; Hwang, Duk-Ju; Ahn, Il-Pyung

    2012-12-01

    Rice brown leaf spot is a major disease in the rice paddy field. The causal agent Cochliobolus miyabeanus is an ascomycete fungus and a representative necrotrophic pathogen in the investigation of rice-microbe interactions. The aims of this research were to identify a quantitative evaluation method to determine the amount of C. miyabeanus proliferation in planta and determine the method's sensitivity. Real-time polymerase chain reaction (PCR) was employed in combination with the primer pair and Taqman probe specific to CmSCD1, a C. miyabeanus unigene encoding SCYTALONE DEHYDRATASE, which is involved in fungal melanin biosynthesis. Comparative analysis of the nucleotide sequences of CmSCD1 from Korean strains with those from the Japanese and Taiwanese strains revealed some sequence differences. Based on the crossing point (CP) values from Taqman real-time PCR containing a series of increasing concentrations of cloned amplicon or fungal genomic DNA, linear regressions with a high level of reliability (R(2)>0.997) were constructed. This system was able to estimate fungal genomic DNA at the picogram level. The reliability of this equation was further confirmed using DNA samples from both resistant and susceptible cultivars infected with C. miyabeanus. In summary, our quantitative system is a powerful alternative in brown leaf spot forecasting and in the consistent evaluation of disease progression.

  2. Fish acute toxicity syndromes and their use in the QSAR approach to hazard assessment.

    PubMed Central

    McKim, J M; Bradbury, S P; Niemi, G J

    1987-01-01

    Implementation of the Toxic Substances Control Act of 1977 creates the need to reliably establish testing priorities because laboratory resources are limited and the number of industrial chemicals requiring evaluation is overwhelming. The use of quantitative structure activity relationship (QSAR) models as rapid and predictive screening tools to select more potentially hazardous chemicals for in-depth laboratory evaluation has been proposed. Further implementation and refinement of quantitative structure-toxicity relationships in aquatic toxicology and hazard assessment requires the development of a "mode-of-action" database. With such a database, a qualitative structure-activity relationship can be formulated to assign the proper mode of action, and respective QSAR, to a given chemical structure. In this review, the development of fish acute toxicity syndromes (FATS), which are toxic-response sets based on various behavioral and physiological-biochemical measurements, and their projected use in the mode-of-action database are outlined. Using behavioral parameters monitored in the fathead minnow during acute toxicity testing, FATS associated with acetylcholinesterase (AChE) inhibitors and narcotics could be reliably predicted. However, compounds classified as oxidative phosphorylation uncouplers or stimulants could not be resolved. Refinement of this approach by using respiratory-cardiovascular responses in the rainbow trout, enabled FATS associated with AChE inhibitors, convulsants, narcotics, respiratory blockers, respiratory membrane irritants, and uncouplers to be correctly predicted. PMID:3297660

  3. Understanding online health information: Evaluation, tools, and strategies.

    PubMed

    Beaunoyer, Elisabeth; Arsenault, Marianne; Lomanowska, Anna M; Guitton, Matthieu J

    2017-02-01

    Considering the status of the Internet as a prominent source of health information, assessing online health material has become a central issue in patient education. We describe the strategies available to evaluate the characteristics of online health information, including readability, emotional content, understandability, usability. Popular tools used in assessment of readability, emotional content and comprehensibility of online health information were reviewed. Tools designed to evaluate both printed and online material were considered. Readability tools are widely used in online health material evaluation and are highly covariant. Assessment of emotional content of online health-related communications via sentiment analysis tools is becoming more popular. Understandability and usability tools have been developed specifically for health-related material, but each tool has important limitations and has been tested on a limited number of health issues. Despite the availability of numerous assessment tools, their overall reliability differs between readability (high) and understandability (low). Approaches combining multiple assessment tools and involving both quantitative and qualitative observations would optimize assessment strategies. Effective assessment of online health information should rely on mixed strategies combining quantitative and qualitative evaluations. Assessment tools should be selected according to their functional properties and compatibility with target material. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Development and reliability of a preliminary Foot Osteoarthritis Magnetic Resonance Imaging Score

    PubMed Central

    Halstead, Jill; Martín-Hervás, Carmen; Hensor, Elizabeth MA; McGonagle, Dennis; Keenan, Anne-Maree

    2017-01-01

    Objective Foot osteoarthritis (OA) is very common but under-investigated musculoskeletal condition and there is little consensus as to common MRI imaging features. The aim of this study was to develop a preliminary foot OA MRI score (FOAMRIS) and evaluate its reliability. Methods This preliminary semi-quantitative score included the hindfoot, midfoot and metatarsophalangeal joints. Joints were scored for joint space narrowing (JSN, 0-3), osteophytes (0-3), joint effusion-synovitis and bone cysts (present/absent). Erosions and bone marrow lesions (BMLs) were scored (0-3) and BMLs were evaluated adjacent to entheses and at sub-tendon sites (present/absent). Additionally, tenosynovitis was scored (0-3) and midfoot ligament pathology was scored (present/absent). Reliability was evaluated in 15 people with foot pain and MRI-detected OA using 3.0T MRI multi-sequence protocols and assessed using intraclass correlation coefficients (ICC) as an overall score and per anatomical site (see supplementary data). Results Intra-reader agreement (ICC) was generally good to excellent across the foot in joint features (JSN 0.94, osteophytes 0.94, effusion-synovitis 0.62 and cysts 0.93), bone features (BML 0.89, erosion 0.78, BML-entheses 0.79, BML sub-tendon 0.75) and soft-tissue features (tenosynovitis 0.90, ligaments 0.87). Inter-reader agreement was lower for joint features (JSN 0.60, osteophytes 0.41, effusion-synovitis 0.03) and cysts 0.65, bone features (BML 0.80, erosion 0.00, BML-entheses 0.49, BML sub-tendon -0.24) and soft-tissue features (tenosynovitis 0.48, ligaments 0.50). Conclusion This preliminary FOAMRIS demonstrated good intra-reader reliability and fair inter-reader reliability when assessing the total feature scores. Further development is required in cohorts with a range of pathologies and to assess the psychometric measurement properties. PMID:28572462

  5. Analysis and Evaluation of Processes and Equipment in Tasks 2 and 4 of the Low-cost Solar Array Project

    NASA Technical Reports Server (NTRS)

    Wolf, M.

    1979-01-01

    To facilitate the task of objectively comparing competing process options, a methodology was needed for the quantitative evaluation of their relative cost effectiveness. Such a methodology was developed and is described, together with three examples for its application. The criterion for the evaluation is the cost of the energy produced by the system. The method permits the evaluation of competing design options for subsystems, based on the differences in cost and efficiency of the subsystems, assuming comparable reliability and service life, or of competing manufacturing process options for such subsystems, which include solar cells or modules. This process option analysis is based on differences in cost, yield, and conversion efficiency contribution of the process steps considered.

  6. Reliability and safety, and the risk of construction damage in mining areas

    NASA Astrophysics Data System (ADS)

    Skrzypczak, Izabela; Kogut, Janusz P.; Kokoszka, Wanda; Oleniacz, Grzegorz

    2018-04-01

    This article concerns the reliability and safety of building structures in mining areas, with a particular emphasis on the quantitative risk analysis of buildings. The issues of threat assessment and risk estimation, in the design of facilities in mining exploitation areas, are presented here, indicating the difficulties and ambiguities associated with their quantification and quantitative analysis. This article presents the concept of quantitative risk assessment of the impact of mining exploitation, in accordance with ISO 13824 [1]. The risk analysis is illustrated through an example of a construction located within an area affected by mining exploitation.

  7. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods

    PubMed Central

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-01-01

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626

  8. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods.

    PubMed

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-04-07

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation.

  9. Application of real-time PCR for total airborne bacterial assessment: Comparison with epifluorescence microscopy and culture-dependent methods

    NASA Astrophysics Data System (ADS)

    Rinsoz, Thomas; Duquenne, Philippe; Greff-Mirguet, Guylaine; Oppliger, Anne

    Traditional culture-dependent methods to quantify and identify airborne microorganisms are limited by factors such as short-duration sampling times and inability to count non-culturable or non-viable bacteria. Consequently, the quantitative assessment of bioaerosols is often underestimated. Use of the real-time quantitative polymerase chain reaction (Q-PCR) to quantify bacteria in environmental samples presents an alternative method, which should overcome this problem. The aim of this study was to evaluate the performance of a real-time Q-PCR assay as a simple and reliable way to quantify the airborne bacterial load within poultry houses and sewage treatment plants, in comparison with epifluorescence microscopy and culture-dependent methods. The estimates of bacterial load that we obtained from real-time PCR and epifluorescence methods, are comparable, however, our analysis of sewage treatment plants indicate these methods give values 270-290 fold greater than those obtained by the "impaction on nutrient agar" method. The culture-dependent method of air impaction on nutrient agar was also inadequate in poultry houses, as was the impinger-culture method, which gave a bacterial load estimate 32-fold lower than obtained by Q-PCR. Real-time quantitative PCR thus proves to be a reliable, discerning, and simple method that could be used to estimate airborne bacterial load in a broad variety of other environments expected to carry high numbers of airborne bacteria.

  10. Brief Assessment of Motor Function: Content Validity and Reliability of the Upper Extremity Gross Motor Scale

    PubMed Central

    Cintas, Holly Lea; Parks, Rebecca; Don, Sarah; Gerber, Lynn

    2011-01-01

    Content validity and reliability of the Brief Assessment of Motor Function (BAMF) Upper Extremity Gross Motor Scale (UEGMS) were evaluated in this prospective, descriptive study. The UEGMS is one of five ordinal scales designed for quick documentation of gross, fine and oral motor skill levels. Designed to be independent of age and diagnosis, it is intended for use for infants through young adults. An expert panel of 17 physical therapists and 13 occupational therapists refined the content by responding to a standard questionnaire comprised of questions which asked whether each item should be included, is clearly worded, should be reordered higher or lower, is functionally relevant, and is easily discriminated. Ratings of content validity exceeded the criterion except for two items which may represent different perspectives of physical and occupational therapists. The UEGMS was modified using the quantitative and qualitative feedback from the questionnaires. For reliability, five raters scored videotaped motor performances of ten children. Coefficients for inter-rater (0.94) and intra-rater (0.95) reliability were high. The results provide evidence of content validity and reliability of the UEGMS for assessment of upper extremity gross motor skill. PMID:21599568

  11. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1982-01-01

    Models, measures, and techniques for evaluating the effectiveness of aircraft computing systems were developed. By "effectiveness" in this context we mean the extent to which the user, i.e., a commercial air carrier, may expect to benefit from the computational tasks accomplished by a computing system in the environment of an advanced commercial aircraft. Thus, the concept of effectiveness involves aspects of system performance, reliability, and worth (value, benefit) which are appropriately integrated in the process of evaluating system effectiveness. Specifically, the primary objectives are: the development of system models that provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer.

  12. Inter-rater agreement in evaluation of disability: systematic review of reproducibility studies.

    PubMed

    Barth, Jürgen; de Boer, Wout E L; Busse, Jason W; Hoving, Jan L; Kedzia, Sarah; Couban, Rachel; Fischer, Katrin; von Allmen, David Y; Spanjer, Jerry; Kunz, Regina

    2017-01-25

     To explore agreement among healthcare professionals assessing eligibility for work disability benefits.  Systematic review and narrative synthesis of reproducibility studies.  Medline, Embase, and PsycINFO searched up to 16 March 2016, without language restrictions, and review of bibliographies of included studies.  Observational studies investigating reproducibility among healthcare professionals performing disability evaluations using a global rating of working capacity and reporting inter-rater reliability by a statistical measure or descriptively. Studies could be conducted in insurance settings, where decisions on ability to work include normative judgments based on legal considerations, or in research settings, where decisions on ability to work disregard normative considerations. : Teams of paired reviewers identified eligible studies, appraised their methodological quality and generalisability, and abstracted results with pretested forms. As heterogeneity of research designs and findings impeded a quantitative analysis, a descriptive synthesis stratified by setting (insurance or research) was performed.  From 4562 references, 101 full text articles were reviewed. Of these, 16 studies conducted in an insurance setting and seven in a research setting, performed in 12 countries, met the inclusion criteria. Studies in the insurance setting were conducted with medical experts assessing claimants who were actual disability claimants or played by actors, hypothetical cases, or short written scenarios. Conditions were mental (n=6, 38%), musculoskeletal (n=4, 25%), or mixed (n=6, 38%). Applicability of findings from studies conducted in an insurance setting to real life evaluations ranged from generalisable (n=7, 44%) and probably generalisable (n=3, 19%) to probably not generalisable (n=6, 37%). Median inter-rater reliability among experts was 0.45 (range intraclass correlation coefficient 0.86 to κ-0.10). Inter-rater reliability was poor in six studies (37%) and excellent in only two (13%). This contrasts with studies conducted in the research setting, where the median inter-rater reliability was 0.76 (range 0.91-0.53), and 71% (5/7) studies achieved excellent inter-rater reliability. Reliability between assessing professionals was higher when the evaluation was guided by a standardised instrument (23 studies, P=0.006). No such association was detected for subjective or chronic health conditions or the studies' generalisability to real world evaluation of disability (P=0.46, 0.45, and 0.65, respectively).  Despite their common use and far reaching consequences for workers claiming disabling injury or illness, research on the reliability of medical evaluations of disability for work is limited and indicates high variation in judgments among assessing professionals. Standardising the evaluation process could improve reliability. Development and testing of instruments and structured approaches to improve reliability in evaluation of disability are urgently needed. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  13. A Bayesian approach to reliability and confidence

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1989-01-01

    The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
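
    For the simplest constant-reliability case, the Bayesian update described above reduces to a conjugate Beta-Binomial calculation; the time-varying failure-rate models discussed in the summary require the more general machinery. The sketch below applies a uniform Beta(1, 1) prior (the worst-case prior mentioned) to invented test results.

```python
from scipy import stats

# Uniform Beta(1, 1) prior on component reliability, updated with invented
# test data: 48 successes in 50 demand trials.
prior_a, prior_b = 1.0, 1.0
successes, trials = 48, 50

post = stats.beta(prior_a + successes, prior_b + (trials - successes))

print(f"posterior mean reliability: {post.mean():.3f}")
print(f"95% credible interval: ({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")
print(f"P(reliability > 0.9): {1 - post.cdf(0.9):.3f}")
```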

  14. Quantitative evaluation of pairs and RS steganalysis

    NASA Astrophysics Data System (ADS)

    Ker, Andrew D.

    2004-06-01

    We give initial results from a new project which performs statistically accurate evaluation of the reliability of image steganalysis algorithms. The focus here is on the Pairs and RS methods, for detection of simple LSB steganography in grayscale bitmaps, due to Fridrich et al. Using libraries totalling around 30,000 images we have measured the performance of these methods and suggest changes which lead to significant improvements. Particular results from the project presented here include notes on the distribution of the RS statistic, the relative merits of different "masks" used in the RS algorithm, the effect on reliability when previously compressed cover images are used, and the effect of repeating steganalysis on the transposed image. We also discuss improvements to the Pairs algorithm, restricting it to spatially close pairs of pixels, which leads to a substantial performance improvement, even to the extent of surpassing the RS statistic which was previously thought superior for grayscale images. We also describe some of the questions for a general methodology of evaluation of steganalysis, and potential pitfalls caused by the differences between uncompressed, compressed, and resampled cover images.

  15. Identification of appropriate reference genes for normalizing transcript expression by quantitative real-time PCR in Litsea cubeba.

    PubMed

    Lin, Liyuan; Han, Xiaojiao; Chen, Yicun; Wu, Qingke; Wang, Yangdong

    2013-12-01

     Quantitative real-time PCR has emerged as a highly sensitive and widely used method for profiling gene expression, in which accurate detection depends on reliable normalization. Since no single control is appropriate for all experimental treatments, it is generally advocated that suitable internal controls be selected prior to their use for normalization. This study evaluated the expression stability of twelve potential reference genes in different tissues/organs and six fruit developmental stages of Litsea cubeba in order to identify superior internal reference genes for data normalization. Two software packages, geNorm and NormFinder, were used to assess the stability of these candidate genes. The cycle threshold difference and coefficient of variance were also calculated to evaluate the expression stability of the candidate genes. F-BOX, EF1α, UBC, and TUA were selected as the most stable reference genes across 11 sample pools. F-BOX, EF1α, and EIF4α exhibited the highest expression stability in different tissues/organs and different fruit developmental stages. In addition, a combination of two stable reference genes would be sufficient for gene expression normalization in different fruit developmental stages. Furthermore, the relative expression profiles of DXS and DXR were evaluated using EF1α, UBC, and SAMDC as reference genes. The results further validated the reliability of the stable reference genes and also highlighted the importance of selecting suitable internal controls for L. cubeba. These reference genes will be of great importance for transcript normalization in future gene expression studies of L. cubeba.
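
     The study relied on the geNorm and NormFinder software; as a rough illustration of what a geNorm-style stability ranking involves, the sketch below computes the pairwise-variation measure M from a matrix of relative expression quantities (for example 2 ** -deltaCt values). The gene names match those in the abstract, but the expression data are simulated, not the study's measurements.

```python
import numpy as np

def genorm_m(quantities, gene_names):
    """geNorm-style stability measure M for each candidate reference gene.

    `quantities` is a (samples x genes) array of relative expression
    quantities (e.g., 2 ** -deltaCt). Lower M means more stable expression.
    """
    log_q = np.log2(np.asarray(quantities, dtype=float))
    n_genes = log_q.shape[1]
    m_values = {}
    for j in range(n_genes):
        pairwise_sd = [np.std(log_q[:, j] - log_q[:, k], ddof=1)
                       for k in range(n_genes) if k != j]
        m_values[gene_names[j]] = float(np.mean(pairwise_sd))
    return m_values

# Simulated relative quantities for four candidates across six samples.
genes = ["F-BOX", "EF1a", "UBC", "TUA"]
rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=[0.10, 0.12, 0.20, 0.30], size=(6, 4))
for gene, m in sorted(genorm_m(data, genes).items(), key=lambda item: item[1]):
    print(f"{gene}: M = {m:.3f}")
```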

  16. Quantification of Endospore-Forming Firmicutes by Quantitative PCR with the Functional Gene spo0A

    PubMed Central

    Bueche, Matthieu; Wunderlin, Tina; Roussel-Delif, Ludovic; Junier, Thomas; Sauvain, Loic; Jeanneret, Nicole

    2013-01-01

     Bacterial endospores are highly specialized cellular forms that allow endospore-forming Firmicutes (EFF) to tolerate harsh environmental conditions. EFF are considered ubiquitous in natural environments, in particular those subjected to stress conditions. In addition to natural habitats, EFF are often the cause of contamination problems in anthropogenic environments, such as industrial production plants or hospitals. It is therefore desirable to assess their prevalence in environmental and industrial fields. To this end, a high-sensitivity detection method is still needed. The aim of this study was to develop and evaluate an approach based on quantitative PCR (qPCR). For this, the suitability of functional genes specific for and common to all EFF was evaluated. Seven genes were considered, but only spo0A was retained to identify conserved regions for qPCR primer design. An approach based on multivariate analysis was developed for primer design. Two primer sets were obtained and evaluated with 16 pure cultures, including representatives of the genera Bacillus, Paenibacillus, Brevibacillus, Geobacillus, Alicyclobacillus, Sulfobacillus, Clostridium, and Desulfotomaculum, as well as with environmental samples. The primer sets developed gave a reliable quantification when tested on laboratory strains, with the exception of Sulfobacillus and Desulfotomaculum. A test using sediment samples with a diverse EFF community also gave a reliable quantification compared to 16S rRNA gene pyrosequencing. A detection limit of about 10⁴ cells (or spores) per gram of initial material was calculated, indicating that this method has promising potential for the detection of EFF over a wide range of applications. PMID:23811505
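
     Quantification of spo0A copies by qPCR typically runs through a standard curve of Ct versus log10 copy number. The following minimal sketch, with hypothetical dilution-series values rather than the study's data, fits such a curve, reports amplification efficiency, and back-calculates copies for an unknown sample.

```python
import numpy as np

def fit_standard_curve(log10_copies, ct_values):
    """Fit Ct = slope * log10(copies) + intercept and report amplification efficiency."""
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0   # 1.0 corresponds to perfect doubling
    return slope, intercept, efficiency

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve to estimate template copy number in a sample."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 10-fold dilution series of a spo0A plasmid standard.
log10_copies = np.array([7, 6, 5, 4, 3, 2], dtype=float)
ct = np.array([13.1, 16.5, 19.8, 23.2, 26.6, 30.1])
slope, intercept, eff = fit_standard_curve(log10_copies, ct)
print(f"slope = {slope:.2f}, intercept = {intercept:.1f}, efficiency = {eff:.1%}")

# An unknown sample Ct is converted to copies; scaling by the dilution factor and
# grams of material extracted would then give copies (or spores) per gram.
print(f"estimated copies: {copies_from_ct(24.5, slope, intercept):.0f}")
```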

  17. Psychometric Inferences from a Meta-Analysis of Reliability and Internal Consistency Coefficients

    ERIC Educational Resources Information Center

    Botella, Juan; Suero, Manuel; Gambara, Hilda

    2010-01-01

    A meta-analysis of the reliability of the scores from a specific test, also called reliability generalization, allows the quantitative synthesis of its properties from a set of studies. It is usually assumed that part of the variation in the reliability coefficients is due to some unknown and implicit mechanism that restricts and biases the…

  18. Reliability of Pseudotyped Influenza Viral Particles in Neutralizing Antibody Detection

    PubMed Central

    Yang, Jinghui; Li, Weidong; Long, Yunfeng; Song, Shaohui; Liu, Jing; Zhang, Xinwen; Wang, Xiaoguang; Jiang, Shude; Liao, Guoyang

    2014-01-01

     Background Current influenza control strategies require an active surveillance system. Pseudotyped viral particles (pp), together with the evaluation of pre-existing immunity in a population, might satisfy this requirement. However, the reliability of using pp in neutralizing antibody (nAb) detection is undefined. Methodology/Principal Findings Pseudotyped particles of A(H1N1)pdm09 (A/California/7/2009) and HPAI H5N1 (A/Anhui/1/2005), as well as their reassortants, were generated. The reliability of using these pp in nAb detection was assessed concurrently with the corresponding viruses by a hemagglutination inhibition test, as well as ELISA-, cytopathic effect-, and fluorescence-based microneutralization assays. In qualitative detection of nAbs, the pp and their corresponding viruses were in complete agreement, with an R2 value equal to or near 1 in two different populations. In quantitative detection of nAbs, although the geometric mean titers (95% confidence interval) differed between the pp and viruses, no significant difference was observed. Furthermore, humoral immunity against the reassortants was evaluated; our results indicated strong consistency between the nAbs against reassortant pp and those against naïve pp harboring the same hemagglutinin. Conclusion/Significance The pp displayed high reliability in influenza virus nAb detection. The use of reassortant pp is a safe and convenient strategy for characterizing emerging influenza viruses and surveying the disease burden. PMID:25436460

  19. Inter- and intra-observer agreement of BI-RADS-based subjective visual estimation of amount of fibroglandular breast tissue with magnetic resonance imaging: comparison to automated quantitative assessment.

    PubMed

    Wengert, G J; Helbich, T H; Woitek, R; Kapetas, P; Clauser, P; Baltzer, P A; Vogl, W-D; Weber, M; Meyer-Baese, A; Pinker, Katja

    2016-11-01

    To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. • Subjective FGT estimation with MRI shows moderate intra-/inter-observer agreement in inexperienced readers. • Inter-observer agreement can be improved by practice and experience. • Automated observer-independent quantitative measurements can provide reliable and standardized assessment of FGT with MRI.
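
     Agreement statistics of the kind reported here are commonly summarised with Cohen's kappa. The short sketch below implements kappa for two readers' categorical ratings; the reader labels and categories are hypothetical and are not drawn from the study.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning categorical labels to the same cases."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(count_a) | set(count_b)
    expected = sum(count_a[c] * count_b[c] for c in categories) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical four-category FGT ratings (a-d) from two readers on 12 women.
reader1 = list("aabbccddbbca")
reader2 = list("aabbcdddbcca")
print("kappa =", round(cohens_kappa(reader1, reader2), 3))
```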

  20. Direct quantitative evaluation of disease symptoms on living plant leaves growing under natural light.

    PubMed

    Matsunaga, Tomoko M; Ogawa, Daisuke; Taguchi-Shiobara, Fumio; Ishimoto, Masao; Matsunaga, Sachihiro; Habu, Yoshiki

    2017-06-01

     Leaf color is an important indicator when evaluating plant growth and responses to biotic/abiotic stress. Acquisition of images by digital cameras allows analysis and long-term storage of the acquired images. However, under field conditions, where light intensity can fluctuate and other factors (shade, reflection, and background, etc.) vary, stable and reproducible measurement and quantification of leaf color are hard to achieve. Digital scanners provide fixed conditions for obtaining image data, allowing stable and reliable comparison among samples, but require detached plant materials to capture images, and the destructive processes involved often induce deformation of plant materials (curled leaves and faded colors, etc.). In this study, by using a lightweight digital scanner connected to a mobile computer, we obtained digital image data from intact plant leaves grown in natural-light greenhouses without detaching the targets. We took images of soybean leaves infected by Xanthomonas campestris pv. glycines, and distinctively quantified two disease symptoms (brown lesions and yellow halos) using freely available image processing software. The image data were amenable to quantitative and statistical analyses, allowing precise and objective evaluation of disease resistance.
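
     The study quantified the two symptoms with freely available image-processing software; as a rough sketch of the underlying idea, the code below classifies leaf pixels as brown lesion or yellow halo using simple channel thresholds and reports their percentages. The thresholds and the tiny synthetic image are placeholders that would need calibration against real scans.

```python
import numpy as np

def symptom_fractions(rgb, leaf_mask):
    """Percentage of leaf pixels classified as brown lesion or yellow halo.

    `rgb` is an (H, W, 3) uint8 array, `leaf_mask` marks leaf pixels.
    The channel thresholds are illustrative placeholders and would need
    calibration against reference scans.
    """
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    brown = (r > 80) & (r < 180) & (g < 0.75 * r) & (b < 0.6 * r)   # dark reddish-brown
    yellow = (r > 170) & (g > 150) & (b < 120) & ~brown             # yellow halo
    leaf_pixels = leaf_mask.sum()
    return {"lesion_%": 100.0 * (brown & leaf_mask).sum() / leaf_pixels,
            "halo_%": 100.0 * (yellow & leaf_mask).sum() / leaf_pixels}

# Tiny synthetic example: a 2x2 "leaf" with one lesion pixel and one halo pixel.
img = np.array([[[60, 140, 60], [120, 70, 40]],
                [[200, 190, 80], [70, 150, 70]]], dtype=np.uint8)
mask = np.ones((2, 2), dtype=bool)
print(symptom_fractions(img, mask))   # expect 25% lesion, 25% halo
```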

  1. Direct agglutination test for serologic diagnosis of Neospora caninum infection.

    PubMed

    Romand, S; Thulliez, P; Dubey, J P

    1998-01-01

    A direct agglutination test was evaluated for the detection and quantitation of IgG antibodies to Neospora caninum in both experimental and natural infections in various animal species. As compared with results obtained by the indirect fluorescent antibody test, the direct agglutination test appeared reliable for the serologic diagnosis of neosporosis in a variety of animal species. The direct agglutination test should provide easily available and inexpensive tools for serologic testing for antibodies to N. caninum in many host species.

  2. Embedded Resistors and Capacitors in Organic and Inorganic Substrates

    NASA Technical Reports Server (NTRS)

    Gerke, Robert David; Ator, Danielle

    2006-01-01

     Embedded resistors and capacitors were purchased in two technologies: organic printed wiring board (PWB) and inorganic low-temperature co-fired ceramic (LTCC). Small groups of each substrate were exposed to four environmental tests and several characterization tests to evaluate their performance and reliability. Even though all passive components maintained electrical performance throughout environmental testing, differences between the two technologies were observed. Environmental testing was taken beyond the manufacturers' reported testing, but generally not taken to failure. When possible, the data were quantitatively compared to the manufacturers' data.

  3. Translation into Brazilian Portuguese and validation of the "Quantitative Global Scarring Grading System for Post-acne Scarring" *

    PubMed Central

    Cachafeiro, Thais Hofmann; Escobar, Gabriela Fortes; Maldonado, Gabriela; Cestari, Tania Ferreira

    2014-01-01

    The "Quantitative Global Scarring Grading System for Postacne Scarring" was developed in English for acne scar grading, based on the number and severity of each type of scar. The aims of this study were to translate this scale into Brazilian Portuguese and verify its reliability and validity. The study followed five steps: Translation, Expert Panel, Back Translation, Approval of authors and Validation. The translated scale showed high internal consistency and high test-retest reliability, confirming its reproducibility. Therefore, it has been validated for our population and can be recommended as a reliable instrument to assess acne scarring. PMID:25184939

  4. The design and evaluation of psychometric properties for a questionnaire on elderly abuse by family caregivers among older adults on hemodialysis.

    PubMed

    Mahmoudian, Amaneh; Torabi Chafjiri, Razieh; Alipour, Atefeh; Shamsalinia, Abbas; Ghaffari, Fatemeh

    2018-01-01

     Older adults with chronic disease are more vulnerable to abuse. Early and accurate detection of the elderly abuse phenomenon can help identify health-promoting solutions for the elderly, their family, and society. The purpose of this study was to design and evaluate the psychometric properties of a questionnaire on elderly abuse by family caregivers among older adults on hemodialysis. Qualitative and quantitative research methodologies were used to develop the questionnaire. The item pool was compiled from literature reviews and the Delphi method. The literature reviews comprised 22 studies. The psychometric properties of the questionnaire were verified using face, content, and construct validity, and reliability was tested using Cronbach's alpha. A 57-item questionnaire was developed after the psychometric evaluation. The Kaiser-Meyer-Olkin index and Bartlett's test of sphericity indicated that the data were suitable for factor analysis. Seven components from the exploratory analysis, including psychological misbehavior, authority deprivation, physical misbehavior, financial misbehavior, being abandoned, caring neglect, and emotional misbehavior, explained 74.769% of the total variance. Cronbach's alpha was 0.98, and the intraclass correlation coefficient for responding to the items twice was r=0.91 (p<0.001), which shows a high level of stability. This study developed a questionnaire to assess elderly abuse by family caregivers among older adults on hemodialysis. The questionnaire is valid and reliable and is recommended as a brief scale for use in both research and practical settings. Nurses or other health care providers can use it in health centers, dialysis centers, or in the patient's home.

  5. The measurement of patient attitudes regarding prenatal and preconception genetic carrier screening and translational behavioral medicine: an integrative review.

    PubMed

    Shiroff, Jennifer J; Gregoski, Mathew J

    2017-06-01

     Measurement of recessive carrier screening attitudes related to conception and pregnancy is necessary to determine current acceptance, and whether behavioral intervention strategies are needed in clinical practice. To evaluate quantitative survey instruments that measure patient attitudes regarding genetic carrier testing prior to conception and pregnancy, databases were searched for studies examining such attitudes published from 2003 to 2013, yielding 344 articles; eight studies with eight instruments met criteria for inclusion. Data abstraction on theoretical framework, subjects, instrument description, scoring, method of measurement, reliability, validity, feasibility, level of evidence, and outcomes was completed. Reliability information was provided in five studies with an internal consistency of Cronbach's α >0.70. Information pertaining to validity was presented in three studies and included construct validity via factor analysis. Despite limited psychometric information, these questionnaires are self-administered and can be completed quickly, making them a feasible method of evaluation.

  6. Adaptation and Validation of the Brief Sexual Opinion Survey (SOS) in a Colombian Sample and Factorial Equivalence with the Spanish Version

    PubMed Central

    Sierra, Juan Carlos; Soler, Franklin

    2016-01-01

     Attitudes toward sexuality are a key variable for sexual health, and it is important for psychology and education to have adapted and validated questionnaires to evaluate these attitudes. Therefore, the objective of this research was to adapt and validate the Colombia Sexual Opinion Survey and to assess its equivalence with the same survey from Spain. To this end, a total of eight experts were consulted and 1,167 subjects from Colombia and Spain answered the Sexual Opinion Survey, the Sexual Assertiveness Scale, the Massachusetts General Hospital-Sexual Functioning Questionnaire, and the Sexuality Scale. The evaluation was conducted online, and the results show adequate qualitative and quantitative properties of the items, adequate reliability and external validity, and strong measurement invariance between the two countries. Consequently, the Colombia Sexual Opinion Survey is a valid and reliable scale and its scores can be compared with those from the Spanish survey with minimal bias. PMID:27627114

  7. Adaptation and Validation of the Brief Sexual Opinion Survey (SOS) in a Colombian Sample and Factorial Equivalence with the Spanish Version.

    PubMed

    Vallejo-Medina, Pablo; Marchal-Bertrand, Laurent; Gómez-Lugo, Mayra; Espada, José Pedro; Sierra, Juan Carlos; Soler, Franklin; Morales, Alexandra

    2016-01-01

     Attitudes toward sexuality are a key variable for sexual health, and it is important for psychology and education to have adapted and validated questionnaires to evaluate these attitudes. Therefore, the objective of this research was to adapt and validate the Colombia Sexual Opinion Survey and to assess its equivalence with the same survey from Spain. To this end, a total of eight experts were consulted and 1,167 subjects from Colombia and Spain answered the Sexual Opinion Survey, the Sexual Assertiveness Scale, the Massachusetts General Hospital-Sexual Functioning Questionnaire, and the Sexuality Scale. The evaluation was conducted online, and the results show adequate qualitative and quantitative properties of the items, adequate reliability and external validity, and strong measurement invariance between the two countries. Consequently, the Colombia Sexual Opinion Survey is a valid and reliable scale and its scores can be compared with those from the Spanish survey with minimal bias.

  8. A Novel Health Evaluation Strategy for Multifunctional Self-Validating Sensors

    PubMed Central

    Shen, Zhengguang; Wang, Qi

    2013-01-01

     The performance evaluation of sensors is very important in practical applications. In this paper, a theory based on multi-variable information fusion is studied to evaluate the health level of multifunctional sensors. A novel concept of health reliability degree (HRD) is defined to indicate a quantitative health level, which is different from traditional, so-called qualitative fault diagnosis. To evaluate the health condition from both local and global perspectives, the HRD of a single sensitive component at multiple time points and of the overall multifunctional sensor at a single time point are defined, respectively. The HRD methodology uses multi-variable data fusion coupled with a grey comprehensive evaluation method. In this method, to acquire the distinct importance of each sensitive unit and the sensitivity of different time points, the information entropy method and the analytic hierarchy process are used, respectively. In order to verify the feasibility of the proposed strategy, a health evaluation experimental system for multifunctional self-validating sensors was designed. Five different health-level situations were examined. The results show that the proposed method is feasible, that the HRD can be used to quantitatively indicate the health level, and that it responds quickly to performance changes of multifunctional sensors. PMID:23291576
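
     A minimal sketch of the entropy-weighting step mentioned above is given below: indicators (here, normalized health scores of several sensitive units) that vary more across observations receive larger weights, and the weighted scores are fused into a single health reliability degree. The grey comprehensive evaluation and analytic hierarchy process parts are omitted, and all numbers and names are hypothetical.

```python
import numpy as np

def entropy_weights(matrix):
    """Entropy-method weights for the indicator columns of a (samples x indicators) matrix.

    Indicators that vary more across the samples carry more information and
    therefore receive larger weights.
    """
    x = np.asarray(matrix, dtype=float)
    p = x / x.sum(axis=0)
    p = np.where(p == 0, 1e-12, p)                       # avoid log(0)
    entropy = -(p * np.log(p)).sum(axis=0) / np.log(len(x))
    diversity = 1.0 - entropy
    return diversity / diversity.sum()

def health_reliability_degree(unit_scores, weights):
    """Weighted fusion of per-unit health scores (each in [0, 1]) into a single HRD."""
    return float(np.dot(unit_scores, weights))

# Hypothetical normalized health scores of four sensitive units at five time points.
history = np.array([[0.95, 0.90, 0.99, 0.97],
                    [0.94, 0.85, 0.98, 0.96],
                    [0.93, 0.70, 0.98, 0.95],
                    [0.92, 0.55, 0.97, 0.95],
                    [0.90, 0.40, 0.97, 0.94]])
w = entropy_weights(history)
print("entropy weights:", np.round(w, 3))
print("HRD at the latest time point:", round(health_reliability_degree(history[-1], w), 3))
```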

  9. The redoubtable ecological periodic table

    EPA Science Inventory

    Ecological periodic tables are repositories of reliable information on quantitative, predictably recurring (periodic) habitat–community patterns and their uncertainty, scaling and transferability. Their reliability derives from their grounding in sound ecological principle...

  10. Evaluating the reliability, validity, acceptability, and practicality of SMS text messaging as a tool to collect research data: results from the Feeding Your Baby project.

    PubMed

    Whitford, Heather M; Donnan, Peter T; Symon, Andrew G; Kellett, Gillian; Monteith-Hodge, Ewa; Rauchhaus, Petra; Wyatt, Jeremy C

    2012-01-01

     The objective was to test the reliability, validity, acceptability, and practicality of short message service (SMS) messaging for collection of research data. The studies were carried out in a cohort of recently delivered women in Tayside, Scotland, UK, who were asked about their current infant feeding method and future feeding plans. Reliability was assessed by comparison of their responses to two SMS messages sent 1 day apart. Validity was assessed by comparison of their responses to text questions and the same question administered by phone 1 day later, by comparison with the same data collected from other sources, and by correlation with other related measures. Acceptability was evaluated using quantitative and qualitative questions, and practicality by analysis of a researcher log. Reliability of the factual SMS message gave perfect agreement. Reliabilities for the numerical question were reasonable, with κ between 0.76 (95% CI 0.56 to 0.96) and 0.80 (95% CI 0.59 to 1.00). Validity for data compared with that collected by phone within 24 h (κ = 0.92, 95% CI 0.84 to 1.00) and with health visitor data (κ = 0.85, 95% CI 0.73 to 0.97) was excellent. Correlation validity between the text responses and other related demographic and clinical measures was as expected. Participants found the method a convenient and acceptable way of providing data. For researchers, SMS text messaging provided an easy and functional method of gathering a large volume of data. In this sample and for these questions, SMS was a reliable and valid method for capturing research data.

  11. Evaluating the reliability, validity, acceptability, and practicality of SMS text messaging as a tool to collect research data: results from the Feeding Your Baby project

    PubMed Central

    Donnan, Peter T; Symon, Andrew G; Kellett, Gillian; Monteith-Hodge, Ewa; Rauchhaus, Petra; Wyatt, Jeremy C

    2012-01-01

     Objective To test the reliability, validity, acceptability, and practicality of short message service (SMS) messaging for collection of research data. Materials and methods The studies were carried out in a cohort of recently delivered women in Tayside, Scotland, UK, who were asked about their current infant feeding method and future feeding plans. Reliability was assessed by comparison of their responses to two SMS messages sent 1 day apart. Validity was assessed by comparison of their responses to text questions and the same question administered by phone 1 day later, by comparison with the same data collected from other sources, and by correlation with other related measures. Acceptability was evaluated using quantitative and qualitative questions, and practicality by analysis of a researcher log. Results Reliability of the factual SMS message gave perfect agreement. Reliabilities for the numerical question were reasonable, with κ between 0.76 (95% CI 0.56 to 0.96) and 0.80 (95% CI 0.59 to 1.00). Validity for data compared with that collected by phone within 24 h (κ = 0.92, 95% CI 0.84 to 1.00) and with health visitor data (κ = 0.85, 95% CI 0.73 to 0.97) was excellent. Correlation validity between the text responses and other related demographic and clinical measures was as expected. Participants found the method a convenient and acceptable way of providing data. For researchers, SMS text messaging provided an easy and functional method of gathering a large volume of data. Conclusion In this sample and for these questions, SMS was a reliable and valid method for capturing research data. PMID:22539081

  12. The reliability of non-invasive biophysical outcome measures for evaluating normal and hyperkeratotic foot skin.

    PubMed

    Hashmi, Farina; Wright, Ciaran; Nester, Christopher; Lam, Sharon

    2015-01-01

     Hyperkeratosis of foot skin is a common skin problem affecting people of different ages. The clinical presentation of this condition can range from dry flaky skin, which can lead to fissures, to hard callused skin, which is often painful and debilitating. The purpose of this study was to test the reliability of certain non-invasive skin measurement devices on foot skin in normal and hyperkeratotic states, with a view to confirming their use as quantitative outcome measures in future clinical trials. Twelve healthy adult participants with a range of foot skin conditions (xerotic skin, heel fissures and plantar calluses) were recruited to the study. Measurements of normal and hyperkeratotic skin sites were taken using the following devices: Corneometer® CM 825, Cutometer® 580 MPA, Reviscometer® RVM 600, Visioline® VL 650 Quantiride® and Visioscan® VC 98, by two investigators on two consecutive days. The intra- and inter-rater reliability and standard error of measurement for each device were calculated. The data revealed the majority of the devices to be reliable measurement tools for normal and hyperkeratotic foot skin (ICC values > 0.6). The surface evaluation parameters for skin, SEsc and SEsm, have greater reliability than the SEr measure. The Cutometer® is sensitive to soft tissue movement within the probe; therefore, measurement of plantar soft tissue areas should be approached with caution. Reviscometer® measures on callused skin demonstrated an unusually high degree of error. These results confirm the intra- and inter-rater reliability of the Corneometer®, Cutometer®, Visioline® and Visioscan® in quantifying specific foot skin biophysical properties.

  13. Reliability theory for repair service organization simulation and increase of innovative attraction of industrial enterprises

    NASA Astrophysics Data System (ADS)

    Dolzhenkova, E. V.; Iurieva, L. V.

    2018-05-01

     The study presents the author's algorithm for simulating the organization of an industrial enterprise's repair service on the basis of reliability theory, together with the results of its application. Monitoring of the repair service organization is proposed on the basis of state indexes for the enterprise's main resources (equipment, labour, finances, repair areas), which allows the reliability level to be quantified as a summary rating of these parameters and ensures an appropriate level of operational reliability of the serviced technical objects. Under conditions of tough competition, the following approach is advisable: the higher the efficiency of production and of the repair service itself, the higher the innovative attractiveness of the industrial enterprise. The results of the calculations show that, in order to prevent inefficient production losses and to reduce repair costs, it is advisable to apply reliability theory. The overall reliability rating calculated with the author's algorithm has low values. Processing of the statistical data yields reliability characteristics for the different workshops and services of an industrial enterprise, which makes it possible to determine the failure rates of the various units of equipment and to establish the reliability indexes needed for subsequent mathematical simulation. The proposed simulation algorithm contributes to increasing the efficiency of the repair service organization and improving the innovative attractiveness of an industrial enterprise.
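
     The abstract does not give the author's algorithm in detail; as a simple reliability-theory illustration of how failure rates of equipment units can be rolled up into an overall rating, the sketch below assumes exponentially distributed times to failure and units arranged in series. The failure rates, unit names, and time horizon are hypothetical.

```python
import math

def unit_reliability(failure_rate_per_hour, hours):
    """Exponential-model reliability of a single equipment unit over `hours`."""
    return math.exp(-failure_rate_per_hour * hours)

def series_reliability(failure_rates, hours):
    """Overall reliability of units in series: every unit must survive the interval."""
    result = 1.0
    for rate in failure_rates:
        result *= unit_reliability(rate, hours)
    return result

# Hypothetical failure rates (failures/hour) estimated from repair statistics.
rates = {"press": 1.2e-4, "conveyor": 4.0e-5, "overhead crane": 8.0e-5}
per_unit = {name: round(unit_reliability(rate, 720), 3) for name, rate in rates.items()}
print("30-day reliability per unit:", per_unit)
print("30-day reliability, series :", round(series_reliability(rates.values(), 720), 3))
```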

  14. Quantitation of TGF-beta1 mRNA in porcine mesangial cells by comparative kinetic RT/PCR: comparison with ribonuclease protection assay and in situ hybridization.

    PubMed

    Ceol, M; Forino, M; Gambaro, G; Sauer, U; Schleicher, E D; D'Angelo, A; Anglani, F

    2001-01-01

     Gene expression can be examined with different techniques including ribonuclease protection assay (RPA), in situ hybridisation (ISH), and quantitative reverse transcription-polymerase chain reaction (RT/PCR). These methods differ considerably in their sensitivity and precision in detecting and quantifying low abundance mRNA. Although there is evidence that RT/PCR can be performed in a quantitative manner, the quantitative capacity of this method is generally underestimated. To demonstrate that the comparative kinetic RT/PCR strategy, which uses a housekeeping gene as internal standard, is a quantitative method to detect significant differences in mRNA levels between different samples, the inhibitory effect of heparin on phorbol 12-myristate 13-acetate (PMA)-induced TGF-beta1 mRNA expression was evaluated by RT/PCR and by RPA, the standard method of mRNA quantification, and the results were compared. The reproducibility of RT/PCR amplification was calculated by comparing the quantities of G3PDH and TGF-beta1 PCR products, generated during the exponential phases, estimated from two different RT/PCR runs (G3PDH, r = 0.968, P = 0.0000; TGF-beta1, r = 0.966, P = 0.0000). The quantitative capacity of comparative kinetic RT/PCR was demonstrated by comparing the results obtained from RPA and RT/PCR using linear regression analysis. Starting from the same RNA extraction, but using only 1% of the RNA for RT/PCR compared to RPA, a significant correlation was observed (r = 0.984, P = 0.0004). Moreover, the morphometric analysis of the ISH signal was applied for the semi-quantitative evaluation of the expression and localisation of TGF-beta1 mRNA in the entire cell population. Our results demonstrate the close similarity of the RT/PCR and RPA methods in giving quantitative information on mRNA expression and indicate that comparative kinetic RT/PCR can be adopted as a reliable quantitative method of mRNA analysis. Copyright 2001 Wiley-Liss, Inc.
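
     The comparative kinetic strategy compares exponential-phase product amounts rather than threshold cycles, so the sketch below is only an analogous illustration: it expresses target expression relative to a housekeeping gene from Ct values, assuming equal amplification efficiencies. The cycle thresholds are hypothetical and do not come from the study.

```python
def relative_expression(ct_target, ct_reference, efficiency=2.0):
    """Target expression normalized to a housekeeping gene (e.g., G3PDH).

    Assumes equal amplification efficiency for both genes; with perfect
    doubling per cycle (efficiency = 2) this is the familiar 2**-deltaCt value.
    """
    return efficiency ** (-(ct_target - ct_reference))

# Hypothetical cycle thresholds for TGF-beta1 vs. G3PDH under three conditions.
control = relative_expression(ct_target=27.0, ct_reference=18.0)
pma = relative_expression(ct_target=24.5, ct_reference=18.1)
pma_heparin = relative_expression(ct_target=26.0, ct_reference=18.0)
print("fold induction by PMA        :", round(pma / control, 2))
print("fold induction, PMA + heparin:", round(pma_heparin / control, 2))
```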

  15. A 96-well-plate-based optical method for the quantitative and qualitative evaluation of Pseudomonas aeruginosa biofilm formation and its application to susceptibility testing.

    PubMed

    Müsken, Mathias; Di Fiore, Stefano; Römling, Ute; Häussler, Susanne

    2010-08-01

    A major reason for bacterial persistence during chronic infections is the survival of bacteria within biofilm structures, which protect cells from environmental stresses, host immune responses and antimicrobial therapy. Thus, there is concern that laboratory methods developed to measure the antibiotic susceptibility of planktonic bacteria may not be relevant to chronic biofilm infections, and it has been suggested that alternative methods should test antibiotic susceptibility within a biofilm. In this paper, we describe a fast and reliable protocol for using 96-well microtiter plates for the formation of Pseudomonas aeruginosa biofilms; the method is easily adaptable for antimicrobial susceptibility testing. This method is based on bacterial viability staining in combination with automated confocal laser scanning microscopy. The procedure simplifies qualitative and quantitative evaluation of biofilms and has proven to be effective for standardized determination of antibiotic efficiency on P. aeruginosa biofilms. The protocol can be performed within approximately 60 h.

  16. Objective evaluation of cutaneous thermal sensitivity

    NASA Technical Reports Server (NTRS)

    Vanbeaumont, W.

    1972-01-01

     The possibility of obtaining reliable and objective quantitative responses was investigated under conditions where only temperature changes in localized cutaneous areas evoked measurable changes in remote sudomotor activity. Both male and female subjects were studied to evaluate sex differences in thermal sensitivity. The results discussed include: sweat rate responses to contralateral cooling, comparison of sweat rate responses between men and women to contralateral cooling, influence of the menstrual cycle on the sweat rate responses to contralateral cooling, comparison of threshold of sweating responses between men and women, and correlation of latency to threshold for whole body sweating. It is concluded that the quantitative aspects of the reflex response are affected by both the density and activation of receptors as well as the rate of heat loss; men responded 8-10% more frequently than women to thermode cooling, the magnitude of responses being greater for men; and women responded 7-9% more frequently to thermode cooling on day 1 of menstruation, as compared to day 15.

  17. Current perspectives of CASA applications in diverse mammalian spermatozoa.

    PubMed

    van der Horst, Gerhard; Maree, Liana; du Plessis, Stefan S

    2018-03-26

    Since the advent of computer-aided sperm analysis (CASA) some four decades ago, advances in computer technology and software algorithms have helped establish it as a research and diagnostic instrument for the analysis of spermatozoa. Despite mammalian spermatozoa being the most diverse cell type known, CASA is a great tool that has the capacity to provide rapid, reliable and objective quantitative assessment of sperm quality. This paper provides contemporary research findings illustrating the scientific and commercial applications of CASA and its ability to evaluate diverse mammalian spermatozoa (human, primates, rodents, domestic mammals, wildlife species) at both structural and functional levels. The potential of CASA to quantitatively measure essential aspects related to sperm subpopulations, hyperactivation, morphology and morphometry is also demonstrated. Furthermore, applications of CASA are provided for improved mammalian sperm quality assessment, evaluation of sperm functionality and the effect of different chemical substances or pathologies on sperm fertilising ability. It is clear that CASA has evolved significantly and is currently superior to many manual techniques in the research and clinical setting.

  18. Quantitative measurement of carbon nanotubes released from their composites by thermal carbon analysis

    NASA Astrophysics Data System (ADS)

    Ogura, I.; Kotake, M.; Ata, S.; Honda, K.

    2017-06-01

    The release of free carbon nanotubes (CNTs) and CNTs partly embedded in matrix debris into the air may occur during mechanical and abrasion processes involving CNT composites. Since the harmful effects of CNT-matrix mixtures have not yet been fully evaluated, it is considered that any exposure to CNTs, including CNT-matrix mixtures, should be measured and controlled. Thermal carbon analysis, such as Method 5040 of the National Institute for Occupational Safety and Health, is one of the most reliable quantitative methods for measuring CNTs in the air. However, when CNTs are released together with polymer matrices, this technique may be inapplicable. In this study, we evaluated the potential for using thermal carbon analysis to determine CNTs in the presence of polymer matrices. Our results showed that thermal carbon analysis was potentially capable of determining CNTs in distinction from polyamide 12, polybutylene terephthalate, polypropylene, and polyoxymethylene. However, it was difficult to determine CNTs in the presence of polyethylene terephthalate, polycarbonate, polyetheretherketone, or polyamide 6.

  19. Droplet Digital PCR for Minimal Residual Disease Detection in Mature Lymphoproliferative Disorders.

    PubMed

    Drandi, Daniela; Ferrero, Simone; Ladetto, Marco

    2018-01-01

     Minimal residual disease (MRD) detection has powerful prognostic relevance for response evaluation and prediction of relapse in hematological malignancies. Real-time quantitative PCR (qPCR) has become the established and standardized method for MRD assessment in lymphoid disorders. However, qPCR is a relative quantification approach, since it requires a reference standard curve. Droplet digital PCR (ddPCR) allows reliable absolute quantification of tumor burden, removing the need to prepare a tumor-specific standard curve for each experiment. We have recently shown that ddPCR has good concordance with qPCR and could be a feasible and reliable tool for MRD monitoring in mature lymphoproliferative disorders. In this chapter we describe the experimental workflow, from detection of the clonal molecular marker to MRD monitoring by ddPCR, in patients affected by multiple myeloma, mantle cell lymphoma, and follicular lymphoma. However, standardization programs among different laboratories are needed to ensure the reliability and reproducibility of ddPCR-based MRD results.
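
     The absolute quantification that distinguishes ddPCR from qPCR rests on a Poisson correction of the droplet counts. The sketch below applies that correction and forms an MRD ratio against a reference-gene assay; the droplet counts are hypothetical, and the droplet volume of 0.85 nL is an assumption typical of one commercial platform rather than a value from this chapter.

```python
import math

def ddpcr_copies_per_ul(positive, total, droplet_volume_nl=0.85):
    """Absolute target concentration (copies/uL of reaction) from droplet counts.

    Applies the Poisson correction lambda = -ln(fraction of negative droplets),
    i.e. the mean number of template copies per droplet. The droplet volume is
    an assumed, instrument-specific constant.
    """
    negative_fraction = (total - positive) / total
    copies_per_droplet = -math.log(negative_fraction)
    return copies_per_droplet / (droplet_volume_nl * 1e-3)   # nL -> uL

# Hypothetical MRD well: tumour-specific assay vs. a reference-gene assay.
tumour = ddpcr_copies_per_ul(positive=31, total=15000)
reference = ddpcr_copies_per_ul(positive=9200, total=15000)
print("tumour copies/uL      :", round(tumour, 2))
print("reference copies/uL   :", round(reference, 1))
print("MRD (tumour/reference):", f"{tumour / reference:.2e}")
```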

  20. A Meta-Analysis of Reliability Coefficients in Second Language Research

    ERIC Educational Resources Information Center

    Plonsky, Luke; Derrick, Deirdre J.

    2016-01-01

    Ensuring internal validity in quantitative research requires, among other conditions, reliable instrumentation. Unfortunately, however, second language (L2) researchers often fail to report and even more often fail to interpret reliability estimates beyond generic benchmarks for acceptability. As a means to guide interpretations of such estimates,…

  1. Reliable LC-MS quantitative glycomics using iGlycoMab stable isotope labeled glycans as internal standards.

    PubMed

    Zhou, Shiyue; Tello, Nadia; Harvey, Alex; Boyes, Barry; Orlando, Ron; Mechref, Yehia

    2016-06-01

     Glycans have numerous functions in various biological processes and participate in the progression of diseases. Reliable quantitative glycomic profiling techniques could contribute to the understanding of the biological functions of glycans and lead to the discovery of potential glycan biomarkers for diseases. Although LC-MS is a powerful analytical tool for quantitative glycomics, variation in ionization efficiency and MS intensity bias affect quantitation reliability. Internal standards can be utilized for glycomic quantitation by MS-based methods to reduce variability. In this study, we used a stable isotope labeled IgG2b monoclonal antibody, iGlycoMab, as an internal standard to reduce the potential for errors and the variability due to sample digestion, derivatization, and fluctuation of nanoESI efficiency in the LC-MS analysis of permethylated N-glycans released from model glycoproteins, human blood serum, and a breast cancer cell line. We observed an unanticipated degradation of the isotope labeled glycans, tracked a source of such degradation, and optimized a sample preparation protocol to minimize degradation of the internal standard glycans. All results indicated the effectiveness of using iGlycoMab to minimize errors originating from sample handling and instruments. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. 76 FR 13018 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... statistical surveys that yield quantitative results that can be generalized to the population of study. This... information will not be used for quantitative information collections that are designed to yield reliably... generic mechanisms that are designed to yield quantitative results. Total Burden Estimate for the...

  3. Reliability of a novel, semi-quantitative scale for classification of structural brain magnetic resonance imaging in children with cerebral palsy.

    PubMed

    Fiori, Simona; Cioni, Giovanni; Klingels, Katrjin; Ortibus, Els; Van Gestel, Leen; Rose, Stephen; Boyd, Roslyn N; Feys, Hilde; Guzzetta, Andrea

    2014-09-01

     To describe the development of a novel rating scale for classification of brain structural magnetic resonance imaging (MRI) in children with cerebral palsy (CP) and to assess its interrater and intrarater reliability. The scale consists of three sections. Section 1 contains descriptive information about the patient and MRI. Section 2 contains the graphical template of brain hemispheres onto which the lesion is transposed. Section 3 contains the scoring system for the quantitative analysis of the lesion characteristics, grouped into different global scores and subscores that assess separately side, regions, and depth. A larger interrater and intrarater reliability study was performed in 34 children with CP (22 males, 12 females; mean age at scan of 9 y 5 mo [SD 3 y 3 mo], range 4 y-16 y 11 mo; Gross Motor Function Classification System level I [n=22], II [n=10], and III [n=2]). Very high interrater and intrarater reliability of the total score was found, with indices above 0.87. Reliability coefficients of the lobar and hemispheric subscores ranged between 0.53 and 0.95. Global scores for hemispheres, basal ganglia, brain stem, and corpus callosum showed reliability coefficients above 0.65. This study presents the first visual, semi-quantitative scale for classification of brain structural MRI in children with CP. The high degree of reliability of the scale supports its potential application for investigating the relationship between brain structure and function and examining treatment response according to brain lesion severity in children with CP. © 2014 Mac Keith Press.

  4. Development of a multidimensional labour satisfaction questionnaire: dimensions, validity, and internal reliability

    PubMed Central

    Smith, L

    2001-01-01

    Background—No published quantitative instrument exists to measure maternal satisfaction with the quality of different models of labour care in the UK. Methods—A quantitative psychometric multidimensional maternal satisfaction questionnaire, the Women's Views of Birth Labour Satisfaction Questionnaire (WOMBLSQ), was developed using principal components analysis with varimax rotation of successive versions. Internal reliability and content and construct validity were assessed. Results—Of 300 women sent the first version (WOMBLSQ1), 120 (40%) replied; of 300 sent WOMBLSQ2, 188 (62.7%) replied; of 500 women sent WOMBLSQ3, 319 (63.8%) replied; and of 2400 women sent WOMBLSQ4, 1683 (70.1%) replied. The latter two versions consisted of 10 dimensions in addition to general satisfaction. These were (Cronbach's alpha): professional support in labour (0.91), expectations of labour (0.90), home assessment in early labour (0.90), holding the baby (0.87), support from husband/partner (0.83), pain relief in labour (0.83), pain relief immediately after labour (0.65), knowing labour carers (0.82), labour environment (0.80), and control in labour (0.62). There were moderate correlations (range 0.16–0.73) between individual dimensions and the general satisfaction scale (0.75). Scores on individual dimensions were significantly related to a range of clinical and demographic variables. Conclusion—This multidimensional labour satisfaction instrument has good validity and internal reliability. It could be used to assess care in labour across different models of maternity care, or as a prelude to in depth exploration of specific areas of concern. Its external reliability and transferability to care outside the South West region needs further evaluation, particularly in terms of ethnicity and social class. Key Words: Women's Views of Birth Labour Satisfaction Questionnaire (WOMBLSQ); labour; questionnaire PMID:11239139
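
     Internal reliability of each dimension was summarised with Cronbach's alpha; a minimal implementation is sketched below for a single hypothetical four-item dimension scored on a five-point scale (the data are invented, not WOMBLSQ responses).

```python
import numpy as np

def cronbachs_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) matrix of scores."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1)
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses to a four-item dimension from six women.
responses = np.array([[5, 4, 5, 5],
                      [4, 4, 4, 3],
                      [2, 1, 2, 2],
                      [5, 5, 4, 5],
                      [3, 3, 3, 2],
                      [4, 5, 4, 4]])
print("alpha =", round(cronbachs_alpha(responses), 2))
```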

  5. Shear Wave Elastography--A New Quantitative Assessment of Post-Irradiation Neck Fibrosis.

    PubMed

    Liu, K H; Bhatia, K; Chu, W; He, L T; Leung, S F; Ahuja, A T

    2015-08-01

     Shear wave elastography (SWE) is a new technique which provides quantitative assessment of soft tissue stiffness. The aim of this study was to assess the reliability of SWE stiffness measurements and their usefulness in evaluating post-irradiation neck fibrosis. 50 subjects (25 patients with previous radiotherapy to the neck and 25 sex- and age-matched controls) were recruited for comparison of SWE stiffness measurements (Aixplorer, Supersonic Imagine). 30 subjects (16 healthy individuals and 14 post-irradiated patients) were recruited for a reliability study of SWE stiffness measurements. SWE stiffness measurements of the sternocleidomastoid muscle and the overlying subcutaneous tissues of the neck were made. The cross-sectional area and thickness of the sternocleidomastoid muscle and the overlying subcutaneous tissue thickness of the neck were also measured. The post-irradiation duration of the patients was recorded. The intraclass correlation coefficients for the intraoperator and interoperator reliability of deep and subcutaneous tissue SWE stiffness ranged from 0.90 to 0.99 and from 0.77 to 0.94, respectively. The SWE stiffness measurements (mean ± SD) of deep and subcutaneous tissues were significantly higher in the post-irradiated patients (64.6 ± 46.8 kPa and 63.9 ± 53.1 kPa, respectively) than in the sex- and age-matched controls (19.9 ± 7.8 kPa and 15.3 ± 8.37 kPa, respectively) (p < 0.001). The SWE stiffness increased with increasing post-irradiation therapy duration in the Kruskal-Wallis test (p < 0.001) and correlated with muscle atrophy and subcutaneous tissue thinning (p < 0.01). SWE is a reliable technique and may potentially be an objective and specific tool for quantifying deep and subcutaneous tissue stiffness, which in turn reflects the severity of neck fibrosis. © Georg Thieme Verlag KG Stuttgart · New York.
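
     Intraclass correlation coefficients such as those reported here can be computed in several forms; the sketch below implements ICC(2,1), the two-way random-effects, absolute-agreement, single-measure form often used for inter-operator reliability. Which ICC form the study used is not stated in the abstract, and the stiffness values below are invented.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.

    `data` is a (subjects x raters) array, e.g. stiffness readings by two operators.
    """
    x = np.asarray(data, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_error = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

# Hypothetical SWE stiffness readings (kPa) of six necks by two operators.
measurements = np.array([[18.2, 19.0],
                         [65.4, 60.1],
                         [22.5, 24.0],
                         [80.3, 84.9],
                         [15.1, 14.2],
                         [40.0, 43.5]])
print("ICC(2,1) =", round(icc_2_1(measurements), 3))
```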

  6. Translation, adaptation and validation of the Coronary Revascularization Outcome Questionnaire into Greek.

    PubMed

    Takousi, Maria G; Schmeer, Stefanie; Manaras, Irene; Olympios, Christoforos D; Fakiolas, Constantine N; Makos, Georgios; Troop, Nick A

    2016-04-01

    Evaluating the impact of coronary revascularization on patients' health related quality of life with a patient-based and disease-specific tool is important for drawing conclusions about treatment and outcomes. This study reports on the translation, adaptation and psychometric evaluation of a Greek version of the Coronary Revascularization Outcome Questionnaire (CROQ-Gr). A total of 609 (81.7% male) patients who had undergone coronary revascularization (percutaneous coronary intervention or coronary artery bypass grafting) were recruited from four hospitals in Athens. After translating the CROQ into Greek, a preliminary qualitative study and a pilot quantitative study were conducted. A full psychometric evaluation was carried out on the main study's data. The psychometric evaluation demonstrated that the CROQ-Gr is acceptable to patients (high response rate, low missing data) and has a good level of reliability (internal consistency >0.70, test-retest reliability >0.90) and validity (both content and construct validity). The results of this study show the CROQ-Gr to be a psychometrically rigorous patient-based measure of outcomes of coronary revascularization. It would be appropriate for use in evaluative research as well as a routine clinical tool to aid cardiologists in monitoring the outcomes of care. © The European Society of Cardiology 2015.

  7. Evaluation of a deep learning approach for the segmentation of brain tissues and white matter hyperintensities of presumed vascular origin in MRI.

    PubMed

    Moeskops, Pim; de Bresser, Jeroen; Kuijf, Hugo J; Mendrik, Adriënne M; Biessels, Geert Jan; Pluim, Josien P W; Išgum, Ivana

    2018-01-01

     Automatic segmentation of brain tissues and white matter hyperintensities of presumed vascular origin (WMH) in MRI of older patients is widely described in the literature. Although brain abnormalities and motion artefacts are common in this age group, most segmentation methods are not evaluated in a setting that includes these items. In the present study, our tissue segmentation method for brain MRI was extended and evaluated for additional WMH segmentation. Furthermore, our method was evaluated in two large cohorts with a realistic variation in brain abnormalities and motion artefacts. The method uses a multi-scale convolutional neural network with a T1-weighted image, a T2-weighted fluid attenuated inversion recovery (FLAIR) image and a T1-weighted inversion recovery (IR) image as input. The method automatically segments white matter (WM), cortical grey matter (cGM), basal ganglia and thalami (BGT), cerebellum (CB), brain stem (BS), lateral ventricular cerebrospinal fluid (lvCSF), peripheral cerebrospinal fluid (pCSF), and WMH. Our method was evaluated quantitatively with images publicly available from the MRBrainS13 challenge (n = 20), quantitatively and qualitatively in relatively healthy older subjects (n = 96), and qualitatively in patients from a memory clinic (n = 110). The method can accurately segment WMH (Overall Dice coefficient in the MRBrainS13 data of 0.67) without compromising performance for tissue segmentations (Overall Dice coefficients in the MRBrainS13 data of 0.87 for WM, 0.85 for cGM, 0.82 for BGT, 0.93 for CB, 0.92 for BS, 0.93 for lvCSF, 0.76 for pCSF). Furthermore, the automatic WMH volumes showed a high correlation with manual WMH volumes (Spearman's ρ = 0.83 for relatively healthy older subjects). In both cohorts, our method produced reliable segmentations (as determined by a human observer) in most images (relatively healthy/memory clinic: tissues 88%/77% reliable, WMH 85%/84% reliable) despite various degrees of brain abnormalities and motion artefacts. In conclusion, this study shows that a convolutional neural network-based segmentation method can accurately segment brain tissues and WMH in MR images of older patients with varying degrees of brain abnormalities and motion artefacts.
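
     The Dice coefficient quoted for the MRBrainS13 results measures the overlap between the automatic and manual label masks. A minimal implementation on a tiny synthetic mask is shown below for illustration; it is not the challenge's evaluation code.

```python
import numpy as np

def dice(auto_mask, manual_mask):
    """Dice coefficient between two binary segmentation masks (1.0 = perfect overlap)."""
    a = np.asarray(auto_mask, dtype=bool)
    m = np.asarray(manual_mask, dtype=bool)
    return 2.0 * np.logical_and(a, m).sum() / (a.sum() + m.sum())

# Tiny synthetic example standing in for an automatic vs. manual WMH label volume.
auto = np.zeros((4, 4), dtype=bool);   auto[1:3, 1:3] = True
manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:4] = True
print("Dice =", round(dice(auto, manual), 2))   # 2*4 / (4 + 6) = 0.8
```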

  8. A novel quantified bitterness evaluation model for traditional Chinese herbs based on an animal ethology principle.

    PubMed

    Han, Xue; Jiang, Hong; Han, Li; Xiong, Xi; He, Yanan; Fu, Chaomei; Xu, Runchun; Zhang, Dingkun; Lin, Junzhi; Yang, Ming

    2018-03-01

     Traditional Chinese herbs (TCH) are currently gaining attention in disease prevention and health care plans. However, their generally bitter taste hinders their use. Despite the development of a variety of taste evaluation methods, it remains a major challenge to establish a quantitative detection technique that is objective, authentic and sensitive. Based on the two-bottle preference test (TBP), we proposed a novel quantitative strategy using a standardized animal test and a unified quantitative benchmark. To reduce variability in the results, the TBP methodology was optimized. The relationship between quinine concentration and the animal preference index (PI) was obtained. The PI of each TCH was then measured by TBP, and the bitterness results were converted into a unified numerical system using the concentration-PI relationship. To verify the authenticity and sensitivity of the quantified results, human sensory testing and electronic tongue testing were applied. The quantified results showed good discrimination ability. For example, the bitterness of Coptidis Rhizoma was equal to 0.0579 mg/mL quinine, and that of Nelumbinis Folium was equal to 0.0001 mg/mL. The validation results showed that the new assessment method for TCH was objective and reliable. In conclusion, this study provides an option for the quantification of bitterness and the evaluation of taste-masking effects.
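
     Converting a measured preference index into a quinine-equivalent bitterness value amounts to interpolating a quinine calibration curve. The sketch below illustrates this with hypothetical calibration points; it does not reproduce the paper's actual fitted concentration-PI relationship.

```python
import numpy as np

# Hypothetical calibration: mean preference index (PI) measured for quinine
# solutions of known concentration (mg/mL); PI falls as bitterness rises.
quinine_mg_ml = np.array([0.0001, 0.001, 0.01, 0.05, 0.1])
preference_index = np.array([0.48, 0.40, 0.25, 0.10, 0.04])

def quinine_equivalent(pi_sample):
    """Quinine-equivalent bitterness (mg/mL) of a sample from its measured PI.

    Interpolates the calibration curve on a log-concentration scale; PIs
    outside the calibrated range are clipped to its end points.
    """
    pi = np.clip(pi_sample, preference_index.min(), preference_index.max())
    # np.interp requires ascending x, so the PI axis is fed in increasing order.
    log_c = np.interp(pi, preference_index[::-1], np.log10(quinine_mg_ml)[::-1])
    return float(10 ** log_c)

print(f"sample with PI = 0.12 -> {quinine_equivalent(0.12):.4f} mg/mL quinine equivalent")
```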

  9. Automated characterization of normal and pathologic lung tissue by topological texture analysis of multidetector CT

    NASA Astrophysics Data System (ADS)

    Boehm, H. F.; Fink, C.; Becker, C.; Reiser, M.

    2007-03-01

     Reliable and accurate methods for objective quantitative assessment of parenchymal alterations in the lung are necessary for diagnosis, treatment and follow-up of pulmonary diseases. Two major types of alterations are pulmonary emphysema and fibrosis, emphysema being characterized by abnormal enlargement of the air spaces distal to the terminal, nonrespiratory bronchiole, accompanied by destructive changes of the alveolar walls. The main characteristic of fibrosis is coarsening of the interstitial fibers and compaction of the pulmonary tissue. With the ability to display anatomy free from superimposing structures and greater visual clarity, Multi-Detector-CT has been shown to be more sensitive than the chest radiograph in identifying alterations of lung parenchyma. In automated evaluation of pulmonary CT-scans, quantitative image processing techniques are applied for objective evaluation of the data. A number of methods have been proposed in the past, most of which utilize simple densitometric tissue features based on the mean X-ray attenuation coefficients expressed in terms of Hounsfield Units [HU]. Due to partial volume effects, most of the density-based methodologies tend to fail, namely in cases where emphysema and fibrosis occur within narrow spatial limits. In this study, we propose a methodology based upon the topological assessment of graylevel distribution in the 3D image data of lung tissue, which provides a way of improving quantitative CT evaluation. Results are compared to the more established density-based methods.

  10. Multiple internal standard normalization for improving HS-SPME-GC-MS quantitation in virgin olive oil volatile organic compounds (VOO-VOCs) profile.

    PubMed

    Fortini, Martina; Migliorini, Marzia; Cherubini, Chiara; Cecchi, Lorenzo; Calamai, Luca

    2017-04-01

     The commercial value of virgin olive oils (VOOs) strongly depends on their classification, which is also based on the aroma of the oils, usually evaluated by a panel test. A reliable analytical method is still needed to evaluate the volatile organic compounds (VOCs) and support the standard panel test method. To date, the use of HS-SPME sampling coupled to GC-MS is generally accepted for the analysis of VOCs in VOOs. However, VOO is a challenging matrix due to the simultaneous presence of: i) compounds at ppm and ppb concentrations; ii) molecules belonging to different chemical classes and iii) analytes with a wide range of molecular mass. Therefore, HS-SPME-GC-MS quantitation based on an external standard method, or on only a single internal standard (ISTD) for data normalization, may be troublesome. In this work a multiple internal standard normalization is proposed to overcome these problems and to improve quantitation of the VOO-VOC profile. As many as 11 ISTDs were used for quantitation of 71 VOCs. For each VOC the most suitable ISTD was selected, and good linearity over a wide calibration range was obtained. For all compounds except E-2-hexenal, the linear calibration range obtained without an ISTD, or with an unsuitable ISTD, was narrower than that obtained with a suitable ISTD, confirming the usefulness of multiple internal standard normalization for the correct quantitation of the VOC profile in VOOs. The method was validated for 71 VOCs and then applied to a series of lampante virgin olive oils and extra virgin olive oils. In light of our results, we propose the application of this analytical approach for routine quantitative analyses and to support sensorial analysis in the evaluation of positive and negative VOO attributes. Copyright © 2017 Elsevier B.V. All rights reserved.
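
     Choosing the most suitable internal standard for each analyte can be framed as picking the ISTD whose analyte/ISTD response-ratio calibration is most linear. The sketch below does this by comparing calibration R² values; the compound names, peak areas, and concentrations are hypothetical and do not come from the paper.

```python
import numpy as np

def calibration_r2(concentrations, analyte_areas, istd_areas):
    """R^2 of a linear fit of the analyte/ISTD area ratio vs. spiked concentration."""
    x = np.asarray(concentrations, dtype=float)
    y = np.asarray(analyte_areas, dtype=float) / np.asarray(istd_areas, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

def best_istd(concentrations, analyte_areas, istd_candidates):
    """Pick the ISTD whose calibration of this analyte is most linear."""
    scores = {name: calibration_r2(concentrations, analyte_areas, areas)
              for name, areas in istd_candidates.items()}
    return max(scores, key=scores.get), scores

# Hypothetical calibration of one VOC against two candidate ISTDs.
conc = [0.05, 0.1, 0.5, 1.0, 5.0]                     # mg/kg spiked
analyte = [1.1e4, 2.3e4, 1.2e5, 2.4e5, 1.1e6]         # analyte peak areas
istds = {"toluene-d8": [9.8e4, 1.0e5, 9.9e4, 1.0e5, 9.7e4],
         "hexanal-d12": [2.0e5, 2.1e5, 1.9e5, 2.0e5, 1.3e5]}   # drifts at high load
chosen, scores = best_istd(conc, analyte, istds)
print("best ISTD:", chosen, {name: round(r2, 4) for name, r2 in scores.items()})
```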

  11. Establishing optimal quantitative-polymerase chain reaction assays for routine diagnosis and tracking of minimal residual disease in JAK2-V617F-associated myeloproliferative neoplasms: a joint European LeukemiaNet/MPN&MPNr-EuroNet (COST action BM0902) study.

    PubMed

    Jovanovic, J V; Ivey, A; Vannucchi, A M; Lippert, E; Oppliger Leibundgut, E; Cassinat, B; Pallisgaard, N; Maroc, N; Hermouet, S; Nickless, G; Guglielmelli, P; van der Reijden, B A; Jansen, J H; Alpermann, T; Schnittger, S; Bench, A; Tobal, K; Wilkins, B; Cuthill, K; McLornan, D; Yeoman, K; Akiki, S; Bryon, J; Jeffries, S; Jones, A; Percy, M J; Schwemmers, S; Gruender, A; Kelley, T W; Reading, S; Pancrazzi, A; McMullin, M F; Pahl, H L; Cross, N C P; Harrison, C N; Prchal, J T; Chomienne, C; Kiladjian, J J; Barbui, T; Grimwade, D

    2013-10-01

    Reliable detection of JAK2-V617F is critical for accurate diagnosis of myeloproliferative neoplasms (MPNs); in addition, sensitive mutation-specific assays can be applied to monitor disease response. However, there has been no consistent approach to JAK2-V617F detection, with assays varying markedly in performance, affecting clinical utility. Therefore, we established a network of 12 laboratories from seven countries to systematically evaluate nine different DNA-based quantitative PCR (qPCR) assays, including those in widespread clinical use. Seven quality control rounds involving over 21,500 qPCR reactions were undertaken using centrally distributed cell line dilutions and plasmid controls. The two best-performing assays were tested on normal blood samples (n=100) to evaluate assay specificity, followed by analysis of serial samples from 28 patients transplanted for JAK2-V617F-positive disease. The most sensitive assay, which performed consistently across a range of qPCR platforms, predicted outcome following transplant, with the mutant allele detected a median of 22 weeks (range 6-85 weeks) before relapse. Four of seven patients achieved molecular remission following donor lymphocyte infusion, indicative of a graft vs MPN effect. This study has established a robust, reliable assay for sensitive JAK2-V617F detection, suitable for assessing response in clinical trials, predicting outcome and guiding management of patients undergoing allogeneic transplant.
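
     qPCR results for JAK2-V617F are typically reported as a mutant allele burden, i.e. mutant copies as a percentage of total JAK2 copies, tracked over serial samples. The sketch below illustrates this with hypothetical post-transplant values; the 0.1% flag is an arbitrary illustrative threshold, not one defined by the study.

```python
def allele_burden_percent(mutant_copies, total_jak2_copies):
    """JAK2-V617F allele burden as a percentage of total JAK2 copies."""
    return 100.0 * mutant_copies / total_jak2_copies

# Hypothetical serial post-transplant samples (copy numbers from standard curves).
weeks = [4, 12, 24, 36]
mutant = [5.0, 2.0, 40.0, 900.0]
total = [52000.0, 48000.0, 51000.0, 47000.0]
for week, m, t in zip(weeks, mutant, total):
    burden = allele_burden_percent(m, t)
    flag = "  <- rising: possible molecular relapse" if burden > 0.1 else ""
    print(f"week {week:2d}: {burden:.3f}% V617F{flag}")
```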

  12. A Model of Risk Analysis in Analytical Methodology for Biopharmaceutical Quality Control.

    PubMed

    Andrade, Cleyton Lage; Herrera, Miguel Angel De La O; Lemes, Elezer Monte Blanco

    2018-01-01

    One key quality control parameter for biopharmaceutical products is the analysis of residual cellular DNA. To determine small amounts of DNA (around 100 pg) that may be present in a biologically derived drug substance, an analytical method should be sensitive, robust, reliable, and accurate. In principle, three techniques have the ability to measure residual cellular DNA: radioactive dot-blot, a type of hybridization; threshold analysis; and quantitative polymerase chain reaction. Quality risk management is a systematic process for evaluating, controlling, and reporting risks that may affect method capabilities, and it supports a scientific and practical approach to decision making. This paper evaluates, by quality risk management, an alternative approach to assessing the performance risks associated with quality control methods used with biopharmaceuticals, using the tool hazard analysis and critical control points. This tool makes it possible to identify the steps in an analytical procedure with the highest impact on method performance. By applying these principles to DNA analysis methods, we conclude that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. From the analysis of hazards (i.e., points of method failure) and the associated critical control points of the method procedure, we conclude that the analytical methodology with the lowest risk of performance failure for residual cellular DNA testing is quantitative polymerase chain reaction. LAY ABSTRACT: In order to mitigate the risk of adverse events caused by residual cellular DNA that is not completely cleared from downstream production processes, regulatory agencies have required the industry to guarantee a very low level of DNA in biologically derived pharmaceutical products. The technique historically used was radioactive blot hybridization. However, this technique is challenging to implement in a quality control laboratory: it is laborious, time consuming, semi-quantitative, and requires a radioisotope. Along with dot-blot hybridization, two alternative techniques were evaluated: threshold analysis and quantitative polymerase chain reaction. Quality risk management tools were applied to compare the techniques, taking into account the uncertainties, the possibility of circumstances or future events, and their effects upon method performance. By illustrating the application of these tools with DNA methods, we provide an example of how they can be used to support a scientific and practical approach to decision making and to assess and manage method performance risk. This paper discusses, considering the principles of quality risk management, an additional approach to the development and selection of analytical quality control methods using the risk analysis tool hazard analysis and critical control points. This tool makes it possible to identify the method procedural steps with the highest impact on method reliability (called critical control points). Our model concluded that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. Quantitative polymerase chain reaction is shown to be the best alternative analytical methodology for residual cellular DNA analysis. © PDA, Inc. 2018.

  13. 76 FR 12072 - Guidance for Agency Information Collection Activities: Proposed Collection; Comment Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-04

    ... not statistical surveys that yield quantitative results that can be generalized to the population of... information will not be used for quantitative information collections that are designed to yield reliably... generic mechanisms that are designed to yield quantitative results. No comments were received in response...

  14. Rapid Trace Detection and Isomer Quantitation of Pesticide Residues via Matrix-Assisted Laser Desorption/Ionization Fourier Transform Ion Cyclotron Resonance Mass Spectrometry.

    PubMed

    Wu, Xinzhou; Li, Weifeng; Guo, Pengran; Zhang, Zhixiang; Xu, Hanhong

    2018-04-18

    Matrix-assisted laser desorption/ionization Fourier transform ion cyclotron resonance mass spectrometry (MALDI-FTICR-MS) has been applied for rapid, sensitive, unambiguous, and quantitative detection of pesticide residues on fresh leaves with little sample pretreatment. Various pesticides (insecticides, bactericides, herbicides, and acaricides) are detected directly in the complex matrix with excellent limits of detection down to 4 μg/L. FTICR-MS could unambiguously identify pesticides with tiny mass differences (∼0.01775 Da), thereby avoiding false-positive results. Remarkably, pesticide isomers can be fully discriminated using diagnostic fragments, and quantitative analysis of pesticide isomers is demonstrated. The present results expand the horizons of the MALDI-FTICR-MS platform in the reliable determination of pesticides, with the integrated advantages of ultrahigh mass resolution and accuracy. This method supports the assessment of the detrimental effects of pesticides and expedites the identification and evaluation of innovative pesticides.
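    The quoted mass difference translates directly into a minimum resolving power requirement, R = m/Δm. A small illustration (the m/z value is hypothetical):

```python
def required_resolving_power(mz, delta_m):
    """Minimum mass resolving power R = m / delta_m needed to separate two ions."""
    return mz / delta_m

# Hypothetical example: two pesticide-related ions near m/z 322 differing by 0.01775 Da.
print(f"R >= {required_resolving_power(322.0, 0.01775):,.0f}")
```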

  15. Quantitative analysis of Al-Si alloy using calibration free laser induced breakdown spectroscopy (CF-LIBS)

    NASA Astrophysics Data System (ADS)

    Shakeel, Hira; Haq, S. U.; Aisha, Ghulam; Nadeem, Ali

    2017-06-01

    Quantitative analysis of a standard aluminum-silicon alloy was performed using calibration-free laser-induced breakdown spectroscopy (CF-LIBS). The plasma was produced with the fundamental harmonic (1064 nm) of an Nd:YAG laser, and the emission spectra were recorded at a detector gate delay of 3.5 μs. Qualitative analysis of the emission spectra confirms the presence of Mg, Al, Si, Ti, Mn, Fe, Ni, Cu, Zn, Sn, and Pb in the alloy. The background-subtracted and self-absorption-corrected emission spectra were used to estimate the plasma temperature as 10,100 ± 300 K. The plasma temperature and the self-absorption-corrected emission lines of each element were then used to determine the concentration of each species present in the alloy. The use of corrected emission intensities and accurate evaluation of the plasma temperature yield reliable quantitative analysis, with a maximum deviation of 2.2% from the reference sample concentrations.
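    CF-LIBS typically derives the plasma (excitation) temperature from a Boltzmann plot, in which ln(Iλ/(gA)) plotted against the upper-level energy has slope −1/(k_B T). The sketch below shows that fit; the spectroscopic constants are approximate tabulated values for four Al I lines, while the line intensities are invented for illustration and chosen so the fit lands near the temperature quoted above:

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def boltzmann_temperature(intensity, wavelength_nm, g_upper, a_ki, e_upper_ev):
    """Excitation temperature from the slope of ln(I*lambda/(g*A)) vs E_upper."""
    y = np.log(intensity * wavelength_nm / (g_upper * a_ki))
    slope, _ = np.polyfit(e_upper_ev, y, 1)   # slope = -1/(k_B * T)
    return -1.0 / (K_B_EV * slope)

# Approximate constants for four Al I lines; intensities are hypothetical.
intensity  = np.array([1200.0, 2140.0, 1320.0, 2210.0])   # relative intensities
wavelength = np.array([394.4, 396.2, 308.2, 309.3])       # nm
g_upper    = np.array([2, 2, 4, 6])                       # upper-level degeneracies
a_ki       = np.array([4.99e7, 9.85e7, 5.87e7, 7.29e7])   # transition probabilities (1/s)
e_upper    = np.array([3.14, 3.14, 4.02, 4.02])           # upper-level energies (eV)

print(f"T ~ {boltzmann_temperature(intensity, wavelength, g_upper, a_ki, e_upper):.0f} K")
```

    Real analyses use many lines spanning a wider energy range and self-absorption-corrected intensities, as the abstract describes.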

  16. Fluorescent nanodiamonds enable quantitative tracking of human mesenchymal stem cells in miniature pigs

    NASA Astrophysics Data System (ADS)

    Su, Long-Jyun; Wu, Meng-Shiue; Hui, Yuen Yung; Chang, Be-Ming; Pan, Lei; Hsu, Pei-Chen; Chen, Yit-Tsong; Ho, Hong-Nerng; Huang, Yen-Hua; Ling, Thai-Yen; Hsu, Hsao-Hsun; Chang, Huan-Cheng

    2017-03-01

    Cell therapy is a promising strategy for the treatment of human diseases. While the first use of cells for therapeutic purposes can be traced to the 19th century, there has been a lack of general and reliable methods to study the biodistribution and associated pharmacokinetics of transplanted cells in various animal models for preclinical evaluation. Here, we present a new platform using albumin-conjugated fluorescent nanodiamonds (FNDs) as biocompatible and photostable labels for quantitative tracking of human placenta choriodecidual membrane-derived mesenchymal stem cells (pcMSCs) in miniature pigs by magnetic modulation. With this background-free detection technique and time-gated fluorescence imaging, we have been able to precisely determine the numbers as well as positions of the transplanted FND-labeled pcMSCs in organs and tissues of the miniature pigs after intravenous administration. The method is applicable to single-cell imaging and quantitative tracking of human stem/progenitor cells in rodents and other animal models as well.

  17. Initial description of a quantitative, cross-species (chimpanzee-human) social responsiveness measure

    PubMed Central

    Marrus, Natasha; Faughn, Carley; Shuman, Jeremy; Petersen, Steve; Constantino, John; Povinelli, Daniel; Pruett, John R.

    2011-01-01

    Objective Comparative studies of social responsiveness, an ability that is impaired in autistic spectrum disorders, can inform our understanding of both autism and the cognitive architecture of social behavior. Because there is no existing quantitative measure of social responsiveness in chimpanzees, we generated a quantitative, cross-species (human-chimpanzee) social responsiveness measure. Method We translated the Social Responsiveness Scale (SRS), an instrument that quantifies human social responsiveness, into an analogous instrument for chimpanzees. We then retranslated this "Chimp SRS" into a human "Cross-Species SRS" (XSRS). We evaluated three groups of chimpanzees (n=29) with the Chimp SRS and typical and autistic spectrum disorder (ASD) human children (n=20) with the XSRS. Results The Chimp SRS demonstrated strong inter-rater reliability at the three sites (ranges for individual ICCs: .534–.866 and mean ICCs: .851–.970). As has been observed in humans, exploratory principal components analysis of Chimp SRS scores supports a single factor underlying chimpanzee social responsiveness. Human subjects' XSRS scores were fully concordant with their SRS scores (r=.976, p=.001) and distinguished appropriately between typical and ASD subjects. One chimpanzee known for inappropriate social behavior displayed a significantly higher score than all other chimpanzees at its site, demonstrating the scale's ability to detect impaired social responsiveness in chimpanzees. Conclusion Our initial cross-species social responsiveness scale proved reliable and discriminated differences in social responsiveness across (in a relative sense) and within (in a more objectively quantifiable manner) humans and chimpanzees. PMID:21515200

  18. Assessing Psychodynamic Conflict.

    PubMed

    Simmonds, Joshua; Constantinides, Prometheas; Perry, J Christopher; Drapeau, Martin; Sheptycki, Amanda R

    2015-09-01

    Psychodynamic psychotherapies suggest that symptomatic relief is provided, in part, by the resolution of psychic conflicts. Clinical researchers have used innovative methods to investigate such phenomena. This article aims to review the literature on quantitative psychodynamic conflict rating scales. An electronic search of the literature was conducted to retrieve quantitative observer-rated scales used to assess conflict, noting each measure's theoretical model, information source, and the training and clinical experience required. Scales were also examined for levels of reliability and validity. Five quantitative observer-rated conflict scales were identified. Reliability varied from poor to excellent, with each measure demonstrating good validity. However, the small number of studies and limited links to current conflict theory suggest that further clinical research is needed.

  19. A quantitative analysis of the F18 flight control system

    NASA Technical Reports Server (NTRS)

    Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann

    1993-01-01

    This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
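    To see why imperfect coverage matters, consider a hedged, generic sketch (not the paper's digraph model or its failure rates): a duplex component pair survives either if both copies work or if one fails and the fault is detected and covered with probability c.

```python
import math

def component_reliability(failure_rate, t):
    """Exponential reliability model R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate * t)

def duplex_reliability(failure_rate, t, coverage):
    """Reliability of a one-spare pair: both copies survive, or one fails and the
    failure is successfully detected and reconfigured around (coverage c)."""
    r = component_reliability(failure_rate, t)
    return r * r + 2.0 * coverage * r * (1.0 - r)

# Hypothetical numbers: failure rate 1e-4 per hour over a 10-hour mission.
for c in (1.0, 0.99, 0.95):
    print(f"coverage={c:.2f}: R={duplex_reliability(1e-4, 10.0, c):.6f}")
```

    Even a small drop in coverage dominates the unreliability of a highly redundant subsystem, which is the qualitative effect the analysis above reports.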

  20. Measuring professional satisfaction in Greek nurses: combination of qualitative and quantitative investigation to evaluate the validity and reliability of the Index of Work Satisfaction.

    PubMed

    Karanikola, Maria N K; Papathanassoglou, Elizabeth D E

    2015-02-01

    The Index of Work Satisfaction (IWS) is a comprehensive scale assessing nurses' professional satisfaction. The aim of the present study was to explore: a) the applicability, reliability and validity of the Greek version of the IWS, and b) how the factors addressed by the IWS compare with the main themes emerging from a qualitative phenomenological investigation of nurses' professional experiences. A descriptive correlational design was applied using a sample of 246 emergency and critical care nurses. Internal consistency and test-retest reliability were tested. Construct and content validity were assessed by factor analysis, and through qualitative phenomenological analysis with a purposive sample of 12 nurses. Scale factors were contrasted with qualitative themes to ensure that the IWS embraces all aspects of Greek nurses' professional satisfaction. The internal consistency (α = 0.81) and test-retest (tau = 1, p < 0.0001) reliability were adequate. Following appropriate modifications, factor analysis confirmed the construct validity of the scale and subscales. The qualitative data partially clarified the low reliability of one subscale. The Greek version of the IWS scale is supported for use in acute care. The mixed methods approach constitutes a powerful tool for transferring scales to different cultures and healthcare systems. Copyright © 2014 Elsevier Inc. All rights reserved.
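    For reference, the internal-consistency coefficient reported above is Cronbach's alpha, computed as α = k/(k−1) × (1 − Σ item variances / total-score variance). A minimal sketch with hypothetical Likert responses:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical Likert responses (5 respondents x 4 items).
responses = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```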

  1. Process service quality evaluation based on Dempster-Shafer theory and support vector machine.

    PubMed

    Pei, Feng-Que; Li, Dong-Bo; Tong, Yi-Fei; He, Fei

    2017-01-01

    Human involvement in traditional service quality evaluations leads to low accuracy, poor reliability and weak predictive power. This paper proposes a method, called SVMs-DS, that combines support vector machines (SVMs) and Dempster-Shafer evidence theory to evaluate the service quality of a production process while handling a large number of input features with a small sample set. Features that can affect production quality are extracted by a large number of sensors, and preprocessing steps such as feature simplification and normalization are reduced. Basic probability assignments (BPAs) are constructed from three individual SVM models, which supports both qualitative and quantitative evaluation. The process service quality evaluation results are combined and validated using Dempster's rule, and the decision threshold used to resolve conflicting results is generated from the three SVM models. A case study is presented to demonstrate the effectiveness of the SVMs-DS method.
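    The evidence-fusion step rests on Dempster's rule of combination. The sketch below combines two basic probability assignments over a toy frame {good, poor}; the masses are hypothetical and the frame is simplified relative to the paper's setting:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs (frozenset focal elements -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb        # mass assigned to incompatible hypotheses
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}, conflict

GOOD, POOR = frozenset({"good"}), frozenset({"poor"})
THETA = GOOD | POOR  # full frame (ignorance)

# Hypothetical BPAs derived from the outputs of two of the SVM classifiers.
m_svm1 = {GOOD: 0.7, POOR: 0.2, THETA: 0.1}
m_svm2 = {GOOD: 0.6, POOR: 0.3, THETA: 0.1}

fused, k = dempster_combine(m_svm1, m_svm2)
print({tuple(s): round(v, 3) for s, v in fused.items()}, "conflict:", round(k, 3))
```

    The conflicting mass K (the product terms whose focal elements do not intersect) is what a decision threshold would inspect before accepting the fused verdict.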

  2. Validity and sensitivity to change of the semi-quantitative OMERACT ultrasound scoring system for tenosynovitis in patients with rheumatoid arthritis.

    PubMed

    Ammitzbøll-Danielsen, Mads; Østergaard, Mikkel; Naredo, Esperanza; Terslev, Lene

    2016-12-01

    The aim was to evaluate the metric properties of the semi-quantitative OMERACT US scoring system vs a novel quantitative US scoring system for tenosynovitis, by testing its intra- and inter-reader reliability, sensitivity to change and comparison with clinical tenosynovitis scoring in a 6-month follow-up study. US and clinical assessments of the tendon sheaths of the clinically most affected hand and foot were performed at baseline, 3 and 6 months in 51 patients with RA. Tenosynovitis was assessed using the semi-quantitative scoring system (0-3) proposed by the OMERACT US group and a new quantitative US evaluation (0-100). A sum for US grey scale (GS), colour Doppler (CD) and pixel index (PI), respectively, was calculated for each patient. In 20 patients, intra- and inter-observer agreement was established between two independent investigators. A binary clinical tenosynovitis score was performed, calculating a sum score per patient. The intra- and inter-observer agreements for US tenosynovitis assessments were very good at baseline and for change for GS and CD, but less good for PI. The smallest detectable change was 0.97 for GS, 0.93 for CD and 30.1 for PI. The sensitivity to change from month 0 to 6 was high for GS and CD, and slightly higher than for clinical tenosynovitis score and PI. This study demonstrated an excellent intra- and inter-reader agreement between two investigators for the OMERACT US scoring system for tenosynovitis and a high ability to detect changes over time. Quantitative assessment by PI did not add further information. © The Author 2016. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
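    The smallest detectable change quoted above is conventionally derived from the measurement error as SDC = 1.96 × √2 × SEM, with SEM = SD × √(1 − ICC). A short sketch (the SD and ICC values below are hypothetical, not the study's):

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement from the between-subject SD and test-retest ICC."""
    return sd * math.sqrt(1.0 - icc)

def smallest_detectable_change(sd, icc):
    """95% smallest detectable change: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem_from_icc(sd, icc)

# Hypothetical grey-scale sum-score statistics (not the study's actual values).
print(f"SDC = {smallest_detectable_change(sd=1.4, icc=0.94):.2f}")
```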

  3. Translation, cultural adaption, and test-retest reliability of Chinese versions of the Edinburgh Handedness Inventory and Waterloo Footedness Questionnaire.

    PubMed

    Yang, Nan; Waddington, Gordon; Adams, Roger; Han, Jia

    2018-05-01

    Quantitative assessments of handedness and footedness are often required in studies of human cognition and behaviour, yet no reliable Chinese versions of commonly used handedness and footedness questionnaires are available. Accordingly, the objective of the present study was to translate the Edinburgh Handedness Inventory (EHI) and the Waterloo Footedness Questionnaire-Revised (WFQ-R) into Mandarin Chinese and to evaluate the reliability and validity of these translated versions in healthy Chinese people. In the first stage of the study, Chinese versions of the EHI and WFQ-R were produced from a process of translation, back translation and examination, with necessary cultural adaptations. The second stage involved determining the reliability and validity of the translated EHI and WFQ-R for the Chinese population. One hundred and ten Chinese participants were tested online, and the results showed that the Cronbach's alpha coefficient of internal consistency was 0.877 for the translated EHI and 0.855 for the translated WFQ-R. Another 170 Chinese participants were tested and re-tested after a 30-day interval. The intra-class correlation coefficients showed high reliability, 0.898 for the translated EHI and 0.869 for the translated WFQ-R. This preliminary validation study found the translated versions to be reliable and valid tools for assessing handedness and footedness in this population.

  4. Assessing the Reliability of Material Flow Analysis Results: The Cases of Rhenium, Gallium, and Germanium in the United States Economy.

    PubMed

    Meylan, Grégoire; Reck, Barbara K; Rechberger, Helmut; Graedel, Thomas E; Schwab, Oliver

    2017-10-17

    Decision-makers traditionally expect "hard facts" from scientific inquiry, an expectation that the results of material flow analyses (MFAs) can hardly meet. MFA limitations are attributable to incompleteness of flowcharts, limited data quality, and model assumptions. Moreover, MFA results are, for the most part, based less on empirical observation than on social knowledge construction processes. Developing, applying, and improving the means of evaluating and communicating the reliability of MFA results is imperative. We apply two recently proposed approaches for making quantitative statements on MFA reliability to national minor metals systems: rhenium, gallium, and germanium in the United States in 2012. We discuss the reliability of results in policy and management contexts. The first approach consists of assessing data quality based on systematic characterization of MFA data and the associated meta-information and quantifying the "information content" of MFAs. The second is a quantification of data inconsistencies indicated by the "degree of data reconciliation" between the data and the model. A high information content and a low degree of reconciliation indicate reliable or certain MFA results. This article contributes to reliability and uncertainty discourses in MFA, exemplifying the usefulness of the approaches in policy and management, and to raw material supply discussions by providing country-level information on three important minor metals often considered critical.

  5. The Practice of Health Program Evaluation.

    PubMed

    Lewis, Sarah R

    2017-11-01

    The Practice of Health Program Evaluation provides an overview of the evaluation process for public health programs while diving deeper to address select advanced concepts and techniques. The book unfolds evaluation as a three-phased process consisting of identification of evaluation questions, data collection and analysis, and dissemination of results and recommendations. The text covers research design, sampling methods, as well as quantitative and qualitative approaches. Types of evaluation are also discussed, including economic assessment and systems research as relative newcomers. Aspects critical to conducting a successful evaluation regardless of type or research design are emphasized, such as stakeholder engagement, validity and reliability, and adoption of sound recommendations. The book encourages evaluators to document their approach by developing an evaluation plan, a data analysis plan, and a dissemination plan, in order to help build consensus throughout the process. The evaluative text offers a good bird's-eye view of the evaluation process, while offering guidance for evaluation experts on how to navigate political waters and advocate for their findings to help affect change.

  6. Shear-wave sonoelastography for assessing masseter muscle hardness in comparison with strain sonoelastography: study with phantoms and healthy volunteers

    PubMed Central

    Nakayama, Miwa; Nishiyama, Wataru; Nozawa, Michihito

    2016-01-01

    Objectives: Shear-wave sonoelastography is expected to offer low operator dependency, high reproducibility and quantitative evaluation, but few reports are available on normative values for in vivo tissues in the head and neck region. The purpose of this study was to examine the reliability of hardness measurements made with shear-wave sonoelastography and to establish normal values of masseter muscle hardness in healthy volunteers. Methods: Phantoms with known hardness ranging from 20 to 140 kPa were scanned with shear-wave sonoelastography, and inter- and intraoperator reliabilities were examined in comparison with strain sonoelastography. The relationships between the actual and measured hardness were analyzed. The masseter muscle hardness of 30 healthy volunteers was measured using shear-wave sonoelastography. Results: The inter- and intraoperator intraclass correlation coefficients were almost perfect. Strong correlations were seen between the actual and measured hardness. The mean hardness of the masseter muscles in healthy volunteers was 42.82 ± 5.56 kPa at rest and 53.36 ± 8.46 kPa during jaw clenching. Conclusions: The hardness measured with shear-wave sonoelastography showed high-level reliability. Shear-wave sonoelastography may be suitable for evaluation of the masseter muscles. PMID:26624000

  7. Shear-wave sonoelastography for assessing masseter muscle hardness in comparison with strain sonoelastography: study with phantoms and healthy volunteers.

    PubMed

    Ariji, Yoshiko; Nakayama, Miwa; Nishiyama, Wataru; Nozawa, Michihito; Ariji, Eiichiro

    2016-01-01

    Objectives: Shear-wave sonoelastography is expected to offer low operator dependency, high reproducibility and quantitative evaluation, but few reports are available on normative values for in vivo tissues in the head and neck region. The purpose of this study was to examine the reliability of hardness measurements made with shear-wave sonoelastography and to establish normal values of masseter muscle hardness in healthy volunteers. Methods: Phantoms with known hardness ranging from 20 to 140 kPa were scanned with shear-wave sonoelastography, and inter- and intraoperator reliabilities were examined in comparison with strain sonoelastography. The relationships between the actual and measured hardness were analyzed. The masseter muscle hardness of 30 healthy volunteers was measured using shear-wave sonoelastography. The inter- and intraoperator intraclass correlation coefficients were almost perfect. Strong correlations were seen between the actual and measured hardness. The mean hardness of the masseter muscles in healthy volunteers was 42.82 ± 5.56 kPa at rest and 53.36 ± 8.46 kPa during jaw clenching. The hardness measured with shear-wave sonoelastography showed high-level reliability. Shear-wave sonoelastography may be suitable for evaluation of the masseter muscles.

  8. Quantitative metabolomics of the thermophilic methylotroph Bacillus methanolicus.

    PubMed

    Carnicer, Marc; Vieira, Gilles; Brautaset, Trygve; Portais, Jean-Charles; Heux, Stephanie

    2016-06-01

    The gram-positive bacterium Bacillus methanolicus MGA3 is a promising candidate for methanol-based biotechnologies. Accurate determination of intracellular metabolites is crucial for engineering this bacterium into an efficient microbial cell factory. Due to the diversity of chemical and cell properties, an experimental protocol validated on B. methanolicus is needed. Here a systematic evaluation of different techniques is presented to establish a reliable basis for metabolome investigations. Metabolome analysis was focused on metabolites closely linked with B. methanolicus central methanol metabolism. As an alternative to cold solvent based procedures, a solvent-free quenching strategy using stainless steel beads cooled to -20 °C was assessed. The precision, the consistency of the measurements, and the extent of metabolite leakage from quenched cells were evaluated in procedures with and without cell separation. The most accurate and reliable performance was provided by the method without cell separation, as significant metabolite leakage occurred in the procedures based on fast filtration. As a biological test case, the best protocol was used to assess the metabolome of B. methanolicus grown in a chemostat on methanol at two different growth rates, and its validity was demonstrated. The presented protocol is a first and helpful step towards developing reliable metabolomics data for the thermophilic methylotroph B. methanolicus. This will help in the design of an efficient methylotrophic cell factory.

  9. Using Perturbation Theory to Reduce Noise in Diffusion Tensor Fields

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Xu, Dongrong; Laine, Andrew F.; Liu, Jun; Peterson, Bradley S.

    2009-01-01

    We propose the use of Perturbation theory to reduce noise in Diffusion Tensor (DT) fields. Diffusion Tensor Imaging (DTI) encodes the diffusion of water molecules along different spatial directions in a positive-definite, 3 × 3 symmetric tensor. Eigenvectors and eigenvalues of DTs allow the in vivo visualization and quantitative analysis of white matter fiber bundles across the brain. The validity and reliability of these analyses are limited, however, by the low spatial resolution and low Signal-to-Noise Ratio (SNR) in DTI datasets. Our procedures can be applied to improve the validity and reliability of these quantitative analyses by reducing noise in the tensor fields. We model a tensor field as a three-dimensional Markov Random Field and then compute the likelihood and the prior terms of this model using Perturbation theory. The prior term constrains the tensor field to be smooth, whereas the likelihood term constrains the smoothed tensor field to be similar to the original field. Thus, the proposed method generates a smoothed field that is close in structure to the original tensor field. We evaluate the performance of our method both visually and quantitatively using synthetic and real-world datasets. We quantitatively assess the performance of our method by computing the SNR for eigenvalues and the coherence measures for eigenvectors of DTs across tensor fields. In addition, we quantitatively compare the performance of our procedures with the performance of one method that uses a Riemannian distance to compute the similarity between two tensors, and with another method that reduces noise in tensor fields by anisotropically filtering the diffusion weighted images that are used to estimate diffusion tensors. These experiments demonstrate that our method significantly increases the coherence of the eigenvectors and the SNR of the eigenvalues, while simultaneously preserving the fine structure and boundaries between homogeneous regions, in the smoothed tensor field. PMID:19540791

  10. Evaluating the Reliability of Emergency Response Systems for Large-Scale Incident Operations

    PubMed Central

    Jackson, Brian A.; Faith, Kay Sullivan; Willis, Henry H.

    2012-01-01

    The ability to measure emergency preparedness—to predict the likely performance of emergency response systems in future events—is critical for policy analysis in homeland security. Yet it remains difficult to know how prepared a response system is to deal with large-scale incidents, whether it be a natural disaster, terrorist attack, or industrial or transportation accident. This research draws on the fields of systems analysis and engineering to apply the concept of system reliability to the evaluation of emergency response systems. The authors describe a method for modeling an emergency response system; identifying how individual parts of the system might fail; and assessing the likelihood of each failure and the severity of its effects on the overall response effort. The authors walk the reader through two applications of this method: a simplified example in which responders must deliver medical treatment to a certain number of people in a specified time window, and a more complex scenario involving the release of chlorine gas. The authors also describe an exploratory analysis in which they parsed a set of after-action reports describing real-world incidents, to demonstrate how this method can be used to quantitatively analyze data on past response performance. The authors conclude with a discussion of how this method of measuring emergency response system reliability could inform policy discussion of emergency preparedness, how system reliability might be improved, and the costs of doing so. PMID:28083267
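    A heavily simplified sketch of the underlying reliability arithmetic (the functions and failure probabilities below are invented, not taken from the report): if every critical response function must succeed, the response behaves like a series system, and the same calculation shows which single improvement buys the most reliability.

```python
from math import prod

# Hypothetical critical functions of a response and their estimated failure probabilities.
failure_prob = {
    "incident detection": 0.02,
    "dispatch and notification": 0.05,
    "resource mobilization": 0.10,
    "on-scene treatment delivery": 0.08,
}

def series_reliability(p_fail):
    """Probability that every critical function succeeds (series system)."""
    return prod(1.0 - p for p in p_fail.values())

print(f"P(response meets its objective) ~ {series_reliability(failure_prob):.3f}")

# Sensitivity: which single improvement buys the most reliability?
for name, p in failure_prob.items():
    improved = {**failure_prob, name: p / 2}
    print(f"halving failure of '{name}': {series_reliability(improved):.3f}")
```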

  11. Reliability of spatial-temporal gait parameters during dual-task interference in people with multiple sclerosis. A cross-sectional study.

    PubMed

    Monticone, Marco; Ambrosini, Emilia; Fiorentini, Roberta; Rocca, Barbara; Liquori, Valentina; Pedrocchi, Alessandra; Ferrante, Simona

    2014-09-01

    To evaluate the reliability and minimum detectable change (MDC) of spatial-temporal gait parameters in subjects with multiple sclerosis (MS) during dual tasking. This cross-sectional study involved 25 healthy subjects (mean age 49.9 ± 15.8 years) and 25 people with MS (mean age 49.2 ± 11.5 years). Gait under motor-cognitive and motor-motor dual tasking conditions was evaluated in two sessions separated by a one-day interval using the GAITRite Walkway System. Test-retest reliability was assessed using intraclass correlation coefficients (ICCs), standard errors of measurement (SEM), and coefficients of variation (CV). MDC scores were computed for the velocity, cadence, step and stride length, step and stride time, double support time, the % of gait cycle for single support and stance phase, and base of support. All of the gait parameters showed good to excellent ICCs under both conditions, with healthy subject values of >0.69 and MS subject values of >0.84. SEM values were always below 18% for both groups of subjects. The gait patterns of the people with MS were slightly more variable than those of the normal controls (CVs: 5.88-41.53% vs 2.84-30.48%). The assessment of quantitative gait parameters in healthy subjects and people with MS is highly reliable under both of the investigated dual tasking conditions. Copyright © 2014 Elsevier B.V. All rights reserved.
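    Test-retest designs such as this one typically report a two-way random-effects, absolute-agreement, single-measure ICC(2,1). The sketch below implements that coefficient from ANOVA mean squares; the velocity data are hypothetical:

```python
import numpy as np

def icc_2_1(data):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1)
    for an (n_subjects x k_sessions) matrix."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-sessions mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical gait velocity (m/s) for six subjects measured in two sessions.
velocity = [
    [1.05, 1.08],
    [0.82, 0.85],
    [1.21, 1.18],
    [0.95, 0.93],
    [0.70, 0.74],
    [1.10, 1.12],
]
print(f"ICC(2,1) = {icc_2_1(velocity):.3f}")
```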

  12. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    PubMed Central

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429
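    As an example of the kind of image-derived metric mentioned above, the body-weight standardized uptake value is computed as the tissue activity concentration divided by the injected activity (decay-corrected to scan time) per unit body weight. A minimal sketch with hypothetical inputs:

```python
import math

def decay_corrected_dose(injected_mbq, minutes_elapsed, half_life_min=109.8):
    """Injected activity (MBq) decay-corrected to scan time; default half-life is F-18."""
    return injected_mbq * math.exp(-math.log(2) * minutes_elapsed / half_life_min)

def suv_bw(tissue_kbq_per_ml, injected_mbq, minutes_elapsed, weight_kg):
    """Body-weight SUV: tissue concentration / (decay-corrected dose / body weight),
    assuming a tissue density of 1 g/mL."""
    dose_kbq = decay_corrected_dose(injected_mbq, minutes_elapsed) * 1000.0  # MBq -> kBq
    weight_g = weight_kg * 1000.0
    return tissue_kbq_per_ml / (dose_kbq / weight_g)

# Hypothetical values: 5 kBq/mL lesion uptake, 370 MBq injected, scan 60 min later, 70 kg patient.
print(f"SUV = {suv_bw(5.0, 370.0, 60.0, 70.0):.2f}")
```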

  13. Reliability of Soft Tissue Model Based Implant Surgical Guides; A Methodological Mistake.

    PubMed

    Sabour, Siamak; Dastjerdi, Elahe Vahid

    2012-08-20

    We were interested to read the paper by Maney P and colleagues published in the July 2012 issue of J Oral Implantol. The authors aimed to assess the reliability of soft tissue model based implant surgical guides and reported that accuracy was evaluated using software.1 I found the manuscript title of Maney P, et al. incorrect and misleading. Moreover, they reported that twenty-two sites (46.81%) were considered accurate (13 of 24 maxillary and 9 of 23 mandibular sites). As the authors point out in their conclusion, soft tissue models do not always provide sufficient accuracy for implant surgical guide fabrication. Reliability (precision) and validity (accuracy) are two different methodological issues in research. Sensitivity, specificity, PPV, NPV, the positive likelihood ratio (true positive rate/false positive rate) and the negative likelihood ratio (false negative rate/true negative rate), as well as the diagnostic odds ratio (true results/false results, preferably more than 50), are among the measures used to evaluate the validity (accuracy) of a single test against a gold standard.2-4 It is not clear to which of the above-mentioned validity estimates the reported twenty-two accurate sites (46.81%) relate. Reliability (repeatability or reproducibility) is often assessed with statistical tests such as Pearson's r, least squares, and the paired t-test, all of which are common mistakes in reliability analysis.5 Briefly, for quantitative variables the intraclass correlation coefficient (ICC) should be used, and for qualitative variables weighted kappa, applied with caution because kappa has its own limitations. Regarding reliability or agreement, it is good to know that in computing the kappa value only concordant cells are considered, whereas discordant cells should also be taken into account in order to reach a correct estimate of agreement (weighted kappa).2-4 As a take-home message, appropriate tests should be applied for reliability and validity analysis.
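    The validity and agreement statistics listed in the letter are straightforward to compute from 2×2 tables. A hedged sketch with invented counts: sensitivity, specificity, predictive values and likelihood ratios against a gold standard, plus Cohen's kappa for agreement between two raters.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Validity measures of a test against a gold standard (2x2 table counts)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sens / (1 - spec),
        "LR-": (1 - sens) / spec,
    }

def cohens_kappa(a, b, c, d):
    """Chance-corrected agreement between two raters from a 2x2 agreement table
    (a, d = concordant cells; b, c = discordant cells)."""
    n = a + b + c + d
    p_obs = (a + d) / n
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical counts for illustration only.
print({k: round(v, 2) for k, v in diagnostic_metrics(tp=40, fp=5, fn=10, tn=45).items()})
print(f"kappa = {cohens_kappa(a=30, b=6, c=4, d=60):.2f}")
```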

  14. Reliability, precision, and measurement in the context of data from ability tests, surveys, and assessments

    NASA Astrophysics Data System (ADS)

    Fisher, W. P., Jr.; Elbaum, B.; Coulter, A.

    2010-07-01

    Reliability coefficients indicate the proportion of total variance attributable to differences among measures separated along a quantitative continuum by a testing, survey, or assessment instrument. Reliability is usually considered to be influenced by both the internal consistency of a data set and the number of items, though textbooks and research papers rarely evaluate the extent to which these factors independently affect the data in question. Probabilistic formulations of the requirements for unidimensional measurement separate consistency from error by modelling individual response processes instead of group-level variation. The utility of this separation is illustrated via analyses of small sets of simulated data, and of subsets of data from a 78-item survey of over 2,500 parents of children with disabilities. Measurement reliability ultimately concerns the structural invariance specified in models requiring sufficient statistics, parameter separation, unidimensionality, and other qualities that historically have made quantification simple, practical, and convenient for end users. The paper concludes with suggestions for a research program aimed at focusing measurement research more on the calibration and wide dissemination of tools applicable to individuals, and less on the statistical study of inter-variable relations in large data sets.
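    The test-length effect mentioned above can be made explicit with the Spearman-Brown prophecy formula, which predicts how reliability changes when a test is lengthened or shortened at constant item quality. A short sketch (the starting reliability and length factors are hypothetical):

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability when a test is lengthened (or shortened) by the given factor."""
    return (length_factor * reliability) / (1.0 + (length_factor - 1.0) * reliability)

# A hypothetical 20-item scale with reliability 0.70, doubled to 40 items or halved to 10.
for factor in (2.0, 0.5):
    print(f"length x{factor}: predicted reliability = {spearman_brown(0.70, factor):.2f}")
```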

  15. The design and evaluation of psychometric properties for a questionnaire on elderly abuse by family caregivers among older adults on hemodialysis

    PubMed Central

    Mahmoudian, Amaneh; Torabi Chafjiri, Razieh; Alipour, Atefeh; Shamsalinia, Abbas; Ghaffari, Fatemeh

    2018-01-01

    Introduction: Older adults with chronic disease are more vulnerable to abuse. Early and accurate detection of elder abuse can help identify health-promoting solutions for the elderly, their families, and society. The purpose of this study was to design and evaluate the psychometric properties of a questionnaire on elderly abuse by family caregivers among older adults on hemodialysis. Methods: Qualitative and quantitative research methodologies were used to develop the questionnaire. The item pool was compiled from literature reviews and the Delphi method. The literature reviews comprised 22 studies. The psychometric properties of the questionnaire were verified using face, content, and construct validity, and the reliability was tested using Cronbach's alpha. Results: A 57-item questionnaire was developed after the psychometric evaluation. The Kaiser–Meyer–Olkin index and Bartlett's test of sphericity showed that the data were suitable for factor analysis. Seven components from the exploratory factor analysis, including psychological misbehavior, authority deprivation, physical misbehavior, financial misbehavior, being abandoned, caring neglect, and emotional misbehavior, explained 74.769% of the total variance. Cronbach's alpha was 0.98, and the intraclass correlation coefficient was r=0.91 when participants responded to the items twice (p<0.001), indicating a high level of tool stability. Conclusion: This study developed a questionnaire to assess elderly abuse by family caregivers among older adults on hemodialysis. It is a valid and reliable scale that can be used in both research and practical studies. Nurses and other health care providers can use it in health centers, dialysis centers, or in patients' homes. PMID:29670340

  16. 76 FR 55725 - Agency Information Collection Activities: Request for Comments for a New Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-08

    ... statistical surveys that yield quantitative results that can be generalized to the population of study. This... information will not be used for quantitative information collections that are designed to yield reliably... generic mechanisms that are designed to yield quantitative results. The FHWA received no comments in...

  17. Shuttle payload minimum cost vibroacoustic tests

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.

    1977-01-01

    This paper is directed toward the development of the methodology needed to evaluate cost effective vibroacoustic test plans for Shuttle Spacelab payloads. Statistical decision theory is used to quantitatively evaluate seven alternate test plans by deriving optimum test levels and the expected cost for each multiple mission payload considered. The results indicate that minimum costs can vary by as much as $6 million for the various test plans. The lowest cost approach eliminates component testing and maintains flight vibration reliability by performing subassembly tests at a relatively high acoustic level. Test plans using system testing or combinations of component and assembly level testing are attractive alternatives. Component testing alone is shown not to be cost effective.

  18. FAA center for aviation systems reliability: an overview

    NASA Astrophysics Data System (ADS)

    Brasche, Lisa J. H.

    1996-11-01

    The FAA Center for Aviation Systems Reliability has as its objectives: to develop quantitative nondestructive evaluation (NDE) methods for aircraft structures and materials, including prototype instrumentation, software, techniques and procedures; and to develop and maintain comprehensive education and training programs specific to the inspection of aviation structures. The program, which includes contributions from Iowa State University, Northwestern University, Wayne State University, Tuskegee University, AlliedSignal Propulsion Engines, General Electric Aircraft Engines and Pratt and Whitney, has been in existence since 1990. Efforts under way include: development of inspection for adhesively bonded structures; detection of corrosion; development of advanced NDE concepts that form the basis for an inspection simulator; improvements of titanium inspection as part of the Engine Titanium Consortium; development of education and training program. An overview of the efforts underway will be provided with focus on those technologies closest to technology transfer.

  19. Development of an interprofessional lean facilitator assessment scale.

    PubMed

    Bravo-Sanchez, Cindy; Dorazio, Vincent; Denmark, Robert; Heuer, Albert J; Parrott, J Scott

    2018-05-01

    High reliability is important for optimising quality and safety in healthcare organisations. Reliability efforts include interprofessional collaborative practice (IPCP) and Lean quality/process improvement strategies, which require skilful facilitation. Currently, no validated Lean facilitator assessment tool for interprofessional collaboration exists. This article describes the development and pilot evaluation of such a tool; the Interprofessional Lean Facilitator Assessment Scale (ILFAS), which measures both technical and 'soft' skills, which have not been measured in other instruments. The ILFAS was developed using methodologies and principles from Lean/Shingo, IPCP, metacognition research and Bloom's Taxonomy of Learning Domains. A panel of experts confirmed the initial face validity of the instrument. Researchers independently assessed five facilitators, during six Lean sessions. Analysis included quantitative evaluation of rater agreement. Overall inter-rater agreement of the assessment of facilitator performance was high (92%), and discrepancies in the agreement statistics were analysed. Face and content validity were further established, and usability was evaluated, through primary stakeholder post-pilot feedback, uncovering minor concerns, leading to tool revision. The ILFAS appears comprehensive in the assessment of facilitator knowledge, skills, abilities, and may be useful in the discrimination between facilitators of different skill levels. Further study is needed to explore instrument performance and validity.

  20. Deviation-based spam-filtering method via stochastic approach

    NASA Astrophysics Data System (ADS)

    Lee, Daekyung; Lee, Mi Jin; Kim, Beom Jun

    2018-03-01

    In the presence of a huge number of possible purchase choices, ranks or ratings of items by others often play a very important role in a buyer's final purchase decision. Perfectly objective rating is impossible to achieve, and we often use an average rating built on how previous buyers estimated the quality of the product. The problem with a simple average rating is that it can easily be polluted by careless users whose evaluations of products cannot be trusted, and by malicious spammers who try to bias the rating result on purpose. In this letter we suggest how the trustworthiness of individual users can be systematically and quantitatively reflected to build a more reliable rating system. We compute a suitably defined reliability of each user based on the user's rating pattern for all products she evaluated. We call our proposed method the deviation-based ranking, since the statistical significance of each user's rating pattern with respect to the average rating pattern is the key ingredient. We find that our deviation-based ranking method outperforms existing methods in filtering out careless random evaluators as well as malicious spammers.
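    A hedged sketch of the general idea, not the authors' exact algorithm: down-weight users whose ratings deviate strongly from the per-item averages and recompute reliability-weighted item scores. All ratings and the specific weighting function below are invented for illustration.

```python
import numpy as np

# Hypothetical ratings matrix: rows = users, columns = items, NaN = not rated.
R = np.array([
    [5.0, 4.0, np.nan, 5.0],
    [4.0, 4.0, 3.0, 5.0],
    [1.0, 5.0, 1.0, 1.0],   # a user who deviates strongly from the consensus
    [5.0, 3.0, 3.0, 4.0],
])

item_mean = np.nanmean(R, axis=0)

# Reliability of each user: inversely related to the mean absolute deviation
# of their ratings from the item averages (one of many possible weightings).
deviation = np.nanmean(np.abs(R - item_mean), axis=1)
reliability = 1.0 / (1.0 + deviation)

# Reliability-weighted item scores, ignoring missing ratings.
weights = np.where(np.isnan(R), 0.0, reliability[:, None])
scores = np.nansum(np.nan_to_num(R) * weights, axis=0) / weights.sum(axis=0)

print("plain averages:   ", np.round(item_mean, 2))
print("weighted averages:", np.round(scores, 2))
```

    In this toy example the deviant third user drags the plain averages around, while the weighted averages move back toward the consensus of the more reliable raters.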

  1. Projecting technology change to improve space technology planning and systems management

    NASA Astrophysics Data System (ADS)

    Walk, Steven Robert

    2011-04-01

    Projecting technology performance evolution has been improving over the years. Reliable quantitative forecasting methods have been developed that project the growth, diffusion, and performance of technology in time, including projecting technology substitutions, saturation levels, and performance improvements. These forecasts can be applied at the early stages of space technology planning to better predict available future technology performance, assure the successful selection of technology, and improve technology systems management strategy. Often what is published as a technology forecast is simply scenario planning, usually made by extrapolating current trends into the future, with perhaps some subjective insight added. Typically, the accuracy of such predictions falls rapidly with distance in time. Quantitative technology forecasting (QTF), on the other hand, includes the study of historic data to identify one of or a combination of several recognized universal technology diffusion or substitution patterns. In the same manner that quantitative models of physical phenomena provide excellent predictions of system behavior, so do QTF models provide reliable technological performance trajectories. In practice, a quantitative technology forecast is completed to ascertain with confidence when the projected performance of a technology or system of technologies will occur. Such projections provide reliable time-referenced information when considering cost and performance trade-offs in maintaining, replacing, or migrating a technology, component, or system. This paper introduces various quantitative technology forecasting techniques and illustrates their practical application in space technology and technology systems management.
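    One widely used QTF model of the kind described above is the Fisher-Pry substitution curve, which assumes the logit of the adoption fraction grows linearly in time. The sketch below fits and extrapolates such a curve; the adoption data are hypothetical:

```python
import numpy as np

# Hypothetical market fraction f of a new technology observed over several years.
years = np.array([2015, 2016, 2017, 2018, 2019, 2020])
frac = np.array([0.05, 0.09, 0.16, 0.27, 0.42, 0.58])

# Fisher-Pry substitution model: ln(f / (1 - f)) is linear in time.
logit = np.log(frac / (1.0 - frac))
b, a = np.polyfit(years, logit, 1)

def projected_fraction(year):
    """Extrapolated substitution level from the fitted logistic curve."""
    return 1.0 / (1.0 + np.exp(-(a + b * year)))

for y in (2022, 2025):
    print(f"{y}: projected fraction ~ {projected_fraction(y):.2f}")
```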

  2. Quantitative Determination of Bioactive Constituents in Noni Juice by High-performance Liquid Chromatography with Electrospray Ionization Triple Quadrupole Mass Spectrometry.

    PubMed

    Yan, Yongqiu; Lu, Yu; Jiang, Shiping; Jiang, Yu; Tong, Yingpeng; Zuo, Limin; Yang, Jun; Gong, Feng; Zhang, Ling; Wang, Ping

    2018-01-01

    Noni juice has been extensively used by Polynesians for many years as a folk medicine, as an analgesic and for the treatment of arthritis, infections, colds, cancers, and diabetes. Due to the lack of standard scientific evaluation methods, various kinds of commercial Noni juice of differing quality and price are available on the market. The objective was to establish a sensitive, reliable, and accurate high-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry (HPLC-ESI-MS/MS) method for the separation, identification, and simultaneous quantitative analysis of bioactive constituents in Noni juice. The analytes and eight batches of commercially available samples from different origins were separated and analyzed by the HPLC-ESI-MS/MS method on an Agilent ZORBAX SB-C18 (150 mm × 4.6 mm i.d., 5 μm) column using a gradient elution of acetonitrile-methanol-0.05% glacial acetic acid in water (v/v) at a constant flow rate of 0.5 mL/min. Seven components were identified, and all of the assay parameters were within the required limits. Calibration curves showed good correlation coefficients (R² ≥ 0.9993) over the concentration ranges tested. The precision of the assay method was <0.91%, and the repeatability was between 1.36% and 3.31%. The accuracy varied from 96.40% to 103.02%, and the relative standard deviations of stability were <3.91%. Samples from the same origin showed similar contents, while samples from different origins showed significantly different results. The developed method provides a reliable basis and is useful for the establishment of a rational quality control standard for Noni juice. A method for the separation, identification, and simultaneous quantitative analysis of seven bioactive constituents in Noni juice was originally developed by high-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry. The presented method was successfully applied to the quality control of eight batches of commercially available samples of Noni juice. The method is simple, sensitive, reliable, accurate, and efficient, with strong specificity, good precision, and a high recovery rate, and provides a reliable basis for the quality control of Noni juice. Abbreviations used: HPLC-ESI-MS/MS: High-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry, LOD: Limit of detection, LOQ: Limit of quantitation, S/N: Signal-to-noise ratio, RSD: Relative standard deviation, DP: Declustering potential, CE: Collision energy, MRM: Multiple reaction monitoring, RT: Retention time.

  3. Quantitative Determination of Bioactive Constituents in Noni Juice by High-performance Liquid Chromatography with Electrospray Ionization Triple Quadrupole Mass Spectrometry

    PubMed Central

    Yan, Yongqiu; Lu, Yu; Jiang, Shiping; Jiang, Yu; Tong, Yingpeng; Zuo, Limin; Yang, Jun; Gong, Feng; Zhang, Ling; Wang, Ping

    2018-01-01

    Background: Noni juice has been extensively used by Polynesians for many years as a folk medicine, as an analgesic and for the treatment of arthritis, infections, colds, cancers, and diabetes. Due to the lack of standard scientific evaluation methods, various kinds of commercial Noni juice of differing quality and price are available on the market. Objective: To establish a sensitive, reliable, and accurate high-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry (HPLC-ESI-MS/MS) method for the separation, identification, and simultaneous quantitative analysis of bioactive constituents in Noni juice. Materials and Methods: The analytes and eight batches of commercially available samples from different origins were separated and analyzed by the HPLC-ESI-MS/MS method on an Agilent ZORBAX SB-C18 (150 mm × 4.6 mm i.d., 5 μm) column using a gradient elution of acetonitrile-methanol-0.05% glacial acetic acid in water (v/v) at a constant flow rate of 0.5 mL/min. Results: Seven components were identified, and all of the assay parameters were within the required limits. Calibration curves showed good correlation coefficients (R² ≥ 0.9993) over the concentration ranges tested. The precision of the assay method was <0.91%, and the repeatability was between 1.36% and 3.31%. The accuracy varied from 96.40% to 103.02%, and the relative standard deviations of stability were <3.91%. Samples from the same origin showed similar contents, while samples from different origins showed significantly different results. Conclusions: The developed method provides a reliable basis and is useful for the establishment of a rational quality control standard for Noni juice. SUMMARY A method for the separation, identification, and simultaneous quantitative analysis of seven bioactive constituents in Noni juice was originally developed by high-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry. The presented method was successfully applied to the quality control of eight batches of commercially available samples of Noni juice. The method is simple, sensitive, reliable, accurate, and efficient, with strong specificity, good precision, and a high recovery rate, and provides a reliable basis for the quality control of Noni juice. Abbreviations used: HPLC-ESI-MS/MS: High-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry, LOD: Limit of detection, LOQ: Limit of quantitation, S/N: Signal-to-noise ratio, RSD: Relative standard deviation, DP: Declustering potential, CE: Collision energy, MRM: Multiple reaction monitoring, RT: Retention time. PMID:29576704

  4. Development and psychometric evaluation of the Undergraduate Clinical Education Environment Measure (UCEEM).

    PubMed

    Strand, Pia; Sjöborg, Karolina; Stalmeijer, Renée; Wichmann-Hansen, Gitte; Jakobsson, Ulf; Edgren, Gudrun

    2013-12-01

    There is a paucity of instruments designed to evaluate the multiple dimensions of the workplace as an educational environment for undergraduate medical students. The aim was to develop and psychometrically evaluate an instrument to measure how undergraduate medical students perceive the clinical workplace environment, based on workplace learning theories and empirical findings. Development of the instrument relied on established standards including theoretical and empirical grounding, systematic item development and expert review at various stages to ensure content validity. Qualitative and quantitative methods were employed using a series of steps from conceptualization through psychometric analysis of scores in a Swedish medical student population. The final result was a 25-item instrument with two overarching dimensions, experiential learning and social participation, and four subscales that coincided well with theory and empirical findings: Opportunities to learn in and through work & quality of supervision; Preparedness for student entry; Workplace interaction patterns & student inclusion; and Equal treatment. Evidence from various sources supported content validity, construct validity and reliability of the instrument. The Undergraduate Clinical Education Environment Measure represents a valid, reliable and feasible multidimensional instrument for evaluation of the clinical workplace as a learning environment for undergraduate medical students. Further validation in different populations using various psychometric methods is needed.

  5. Chemical Fingerprint Analysis and Quantitative Analysis of Rosa rugosa by UPLC-DAD.

    PubMed

    Mansur, Sanawar; Abdulla, Rahima; Ayupbec, Amatjan; Aisa, Haji Akbar

    2016-12-21

    A method based on ultra performance liquid chromatography with a diode array detector (UPLC-DAD) was developed for quantitative analysis of five active compounds and chemical fingerprint analysis of Rosa rugosa. Ten batches of R. rugosa collected from different plantations in the Xinjiang region of China were used to establish the fingerprint. The feasibility and advantages of the UPLC fingerprint were verified through similarity evaluation, systematically comparing the chromatograms with the professional analytical software recommended by the State Food and Drug Administration (SFDA) of China. In the quantitative analysis, the five compounds showed good regression (R² = 0.9995) within the test ranges, and the recovery of the method was in the range of 94.2%-103.8%. The similarities of the liquid chromatography fingerprints of the 10 batches of R. rugosa were more than 0.981. The developed UPLC fingerprint method is simple, reliable, and validated for the quality control and identification of R. rugosa. Additionally, simultaneous quantification of five major bioactive ingredients in the R. rugosa samples was conducted to interpret the consistency of the quality test. The results indicated that the UPLC fingerprint, as a characteristic distinguishing method combining similarity evaluation and quantitative analysis, can be successfully used to assess the quality and to identify the authenticity of R. rugosa.
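    Fingerprint similarity scores of the kind reported above (values close to 1) are typically congruence (cosine) coefficients between a sample's peak-area vector and the reference fingerprint. A minimal sketch with invented peak areas:

```python
import numpy as np

def cosine_similarity(a, b):
    """Congruence coefficient between two chromatographic fingerprints (peak-area vectors)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical peak areas for the common characteristic peaks of a reference
# fingerprint and two batches.
reference = [120, 95, 300, 45, 210, 80]
batch_a   = [118, 99, 310, 43, 200, 85]
batch_b   = [200, 20, 150, 90, 400, 10]

print(f"batch A vs reference: {cosine_similarity(batch_a, reference):.3f}")
print(f"batch B vs reference: {cosine_similarity(batch_b, reference):.3f}")
```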

  6. Standardized evaluation framework for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography.

    PubMed

    Kirişli, H A; Schaap, M; Metz, C T; Dharampal, A S; Meijboom, W B; Papadopoulou, S L; Dedic, A; Nieman, K; de Graaf, M A; Meijs, M F L; Cramer, M J; Broersen, A; Cetin, S; Eslami, A; Flórez-Valencia, L; Lor, K L; Matuszewski, B; Melki, I; Mohr, B; Oksüz, I; Shahzad, R; Wang, C; Kitslaar, P H; Unal, G; Katouzian, A; Örkisz, M; Chen, C M; Precioso, F; Najman, L; Masood, S; Ünay, D; van Vliet, L; Moreno, R; Goldenberg, R; Vuçini, E; Krestin, G P; Niessen, W J; van Walsum, T

    2013-12-01

    Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease over the past decades, computed tomography angiography (CTA) has rapidly emerged, and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of the algorithms devised to detect and quantify coronary artery stenoses, and to segment the coronary artery lumen in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenosis on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with experts' manual annotations. A database consisting of 48 multicenter multivendor cardiac CTA datasets with corresponding reference standards is described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second-reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Performance of Lung Ultrasound in Detecting Peri-Operative Atelectasis after General Anesthesia.

    PubMed

    Yu, Xin; Zhai, Zhenping; Zhao, Yongfeng; Zhu, Zhiming; Tong, Jianbin; Yan, Jianqin; Ouyang, Wen

    2016-12-01

    The aim of this prospective observational study was to evaluate the performance of lung ultrasound (LUS) in detecting post-operative atelectasis in adult patients under general anesthesia. Forty-six patients without pulmonary comorbidities who were scheduled for elective neurosurgery were enrolled in the study. A total of 552 pairs of LUS clips and thoracic computed tomography (CT) images were ultimately analyzed to determine the presence of atelectasis in 12 prescribed lung regions. The accuracy of LUS in detecting peri-operative atelectasis was evaluated with thoracic CT as the gold standard. Levels of agreement between the two observers for LUS and the two observers for thoracic CT were analyzed using the κ reliability test. The quantitative correlation between LUS scores of aeration and the volumetric data of atelectasis in thoracic CT was further evaluated. LUS performed reliably in detecting post-operative atelectasis, with a sensitivity of 87.7%, specificity of 92.1% and diagnostic accuracy of 90.8%. The levels of agreement between the two observers for LUS and for thoracic CT were both satisfactory, with κ coefficients of 0.87 (p < 0.0001) and 0.93 (p < 0.0001), respectively. In patients in the supine position, LUS scores were highly correlated with the atelectasis volume of CT (r = 0.58, p < 0.0001). Thus, LUS provides a fast, reliable and radiation-free method to identify peri-operative atelectasis in adults. Copyright © 2016. Published by Elsevier Inc.
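
    The accuracy figures and κ coefficients reported above follow standard definitions; a minimal sketch of how sensitivity, specificity, accuracy and Cohen's kappa could be computed is shown below, using hypothetical region-level calls.

    ```python
    import numpy as np

    def diagnostic_metrics(lus, ct):
        """Sensitivity, specificity and accuracy of binary LUS calls against CT."""
        lus, ct = np.asarray(lus, bool), np.asarray(ct, bool)
        tp, tn = np.sum(lus & ct), np.sum(~lus & ~ct)
        fp, fn = np.sum(lus & ~ct), np.sum(~lus & ct)
        return tp / (tp + fn), tn / (tn + fp), (tp + tn) / lus.size

    def cohens_kappa(rater_a, rater_b):
        """Unweighted Cohen's kappa for two binary raters."""
        a, b = np.asarray(rater_a, int), np.asarray(rater_b, int)
        po = np.mean(a == b)                                            # observed agreement
        pe = np.mean(a) * np.mean(b) + np.mean(1 - a) * np.mean(1 - b)  # chance agreement
        return (po - pe) / (1 - pe)

    # Hypothetical region-level calls (1 = atelectasis present)
    ct   = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
    lus1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
    lus2 = [1, 1, 0, 0, 0, 0, 1, 1, 0, 0]
    print(diagnostic_metrics(lus1, ct))
    print(round(cohens_kappa(lus1, lus2), 2))
    ```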

  8. Geographical classification of Epimedium based on HPLC fingerprint analysis combined with multi-ingredients quantitative analysis.

    PubMed

    Xu, Ning; Zhou, Guofu; Li, Xiaojuan; Lu, Heng; Meng, Fanyun; Zhai, Huaqiang

    2017-05-01

    A reliable and comprehensive method for identifying the origin and assessing the quality of Epimedium has been developed. The method is based on analysis of HPLC fingerprints, combined with similarity analysis, hierarchical cluster analysis (HCA), principal component analysis (PCA) and multi-ingredient quantitative analysis. Nineteen batches of Epimedium, collected from different areas in the western regions of China, were used to establish the fingerprints and 18 peaks were selected for the analysis. Similarity analysis, HCA and PCA all classified the 19 areas into three groups. Simultaneous quantification of the five major bioactive ingredients in the Epimedium samples was also carried out to confirm the consistency of the quality tests. These methods were successfully used to identify the geographical origin of the Epimedium samples and to evaluate their quality. Copyright © 2016 John Wiley & Sons, Ltd.

  9. A quantitative risk-assessment system (QR-AS) evaluating operation safety of Organic Rankine Cycle using flammable mixture working fluid.

    PubMed

    Tian, Hua; Wang, Xueying; Shu, Gequn; Wu, Mingqiang; Yan, Nanhua; Ma, Xiaonan

    2017-09-15

    A mixture of hydrocarbon and carbon dioxide shows excellent cycle performance in the Organic Rankine Cycle (ORC) used for engine waste heat recovery, but unavoidable leakage in practical applications is a threat to safety due to its flammability. In this work, a quantitative risk assessment system (QR-AS) is established, aiming at providing a general method of risk assessment for flammable working fluid leakage. The QR-AS covers three main aspects: analysis of concentration distribution based on CFD simulations, explosive risk assessment based on the TNT equivalent method and risk mitigation based on evaluation results. A typical case of a propane/carbon dioxide mixture leaking from the ORC is investigated to illustrate the application of the QR-AS. According to the assessment results, a proper ventilation speed, safe mixture ratio and location of gas-detecting devices have been proposed to guarantee safety in case of leakage. The results revealed that the presented QR-AS is reliable for practical application and that the evaluation results can provide valuable guidance for the design of mitigation measures to improve the safety performance of the ORC system. Copyright © 2017 Elsevier B.V. All rights reserved.
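
    The TNT-equivalent step of such an assessment typically converts the combustion energy of the leaked flammable mass into an equivalent TNT charge and then forms a Hopkinson-Cranz scaled distance for use with standard blast-overpressure charts. The sketch below uses commonly quoted textbook constants and an illustrative yield factor; these are placeholders, not the values or correlations used in the paper.

    ```python
    # TNT-equivalent mass and scaled distance for a leaked flammable cloud.
    # All constants below are commonly cited textbook values, used only for illustration.
    H_TNT = 4.68e6        # blast energy of TNT, J/kg (commonly quoted ~4.2-4.7 MJ/kg)
    H_PROPANE = 46.3e6    # lower heating value of propane, J/kg

    def tnt_equivalent_mass(m_fuel_kg, heat_of_combustion=H_PROPANE, yield_factor=0.03):
        """Equivalent TNT charge mass; the 3% explosion yield factor is a typical
        conservative assumption for vapour-cloud explosions, not a value from the paper."""
        return yield_factor * m_fuel_kg * heat_of_combustion / H_TNT

    def scaled_distance(distance_m, w_tnt_kg):
        """Hopkinson-Cranz scaled distance Z = R / W**(1/3) in m/kg**(1/3), which would
        then be looked up in a standard blast-overpressure chart."""
        return distance_m / w_tnt_kg ** (1.0 / 3.0)

    w = tnt_equivalent_mass(m_fuel_kg=2.0)       # hypothetical 2 kg propane release
    print(round(w, 2), "kg TNT equivalent")
    print(round(scaled_distance(5.0, w), 2), "m/kg^(1/3) at 5 m")
    ```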

  10. Quantitative micro-CT based coronary artery profiling using interactive local thresholding and cylindrical coordinates.

    PubMed

    Panetta, Daniele; Pelosi, Gualtiero; Viglione, Federica; Kusmic, Claudia; Terreni, Marianna; Belcari, Nicola; Guerra, Alberto Del; Athanasiou, Lambros; Exarchos, Themistoklis; Fotiadis, Dimitrios I; Filipovic, Nenad; Trivella, Maria Giovanna; Salvadori, Piero A; Parodi, Oberdan

    2015-01-01

    Micro-CT is an established imaging technique for high-resolution non-destructive assessment of vascular samples, which is gaining growing interest for investigations of atherosclerotic arteries both in humans and in animal models. However, there is still a lack of well-defined micro-CT image metrics suitable for comprehensive evaluation and quantification of features of interest in the field of experimental atherosclerosis (ATS). A novel approach to micro-CT image processing for profiling of coronary ATS is described, providing comprehensive visualization and quantification of contrast agent-free 3D high-resolution reconstruction of full-length artery walls. Accelerated coronary ATS has been induced by a high-fat, cholesterol-enriched diet in swine, and the left coronary artery (LCA) was harvested en bloc for micro-CT scanning and histologic processing. A cylindrical coordinate system has been defined on the image space after curved multiplanar reformation of the coronary vessel for the comprehensive visualization of the main vessel features such as wall thickening and calcium content. A novel semi-automatic segmentation procedure based on 2D histograms has been implemented and the quantitative results validated by histology. The potential of attenuation-based micro-CT at low kV to reliably separate arterial wall layers from adjacent tissue, as well as to identify wall and plaque contours and major tissue components, has been validated by histology. Morphometric indexes from histological data corresponding to several micro-CT slices have been derived (double observer evaluation at different coronary ATS stages) and highly significant correlations (R² > 0.90) were found. Semi-automatic morphometry has been validated by double observer manual morphometry of micro-CT slices and highly significant correlations were found (R² > 0.92). The micro-CT methodology described represents a handy and reliable tool for quantitative, high-resolution, contrast-agent-free, full-length coronary wall profiling, able to assist atherosclerotic vessel morphometry in a preclinical experimental model of coronary ATS and providing a link between in vivo imaging and histology.

  11. Selection of internal control genes for quantitative real-time RT-PCR studies during tomato development process

    PubMed Central

    Expósito-Rodríguez, Marino; Borges, Andrés A; Borges-Pérez, Andrés; Pérez, José A

    2008-01-01

    Background The elucidation of gene expression patterns leads to a better understanding of biological processes. Real-time quantitative RT-PCR has become the standard method for in-depth studies of gene expression. A biologically meaningful reporting of target mRNA quantities requires accurate and reliable normalization in order to identify real gene-specific variation. The purpose of normalization is to control several variables such as different amounts and quality of starting material, variable enzymatic efficiencies of retrotranscription from RNA to cDNA, or differences between tissues or cells in overall transcriptional activity. The validity of a housekeeping gene as endogenous control relies on the stability of its expression level across the sample panel being analysed. In the present report we describe the first systematic evaluation of potential internal controls during tomato development process to identify which are the most reliable for transcript quantification by real-time RT-PCR. Results In this study, we assess the expression stability of 7 traditional and 4 novel housekeeping genes in a set of 27 samples representing different tissues and organs of tomato plants at different developmental stages. First, we designed, tested and optimized amplification primers for real-time RT-PCR. Then, expression data from each candidate gene were evaluated with three complementary approaches based on different statistical procedures. Our analysis suggests that SGN-U314153 (CAC), SGN-U321250 (TIP41), SGN-U346908 ("Expressed") and SGN-U316474 (SAND) genes provide superior transcript normalization in tomato development studies. We recommend different combinations of these exceptionally stable housekeeping genes for suited normalization of different developmental series, including the complete tomato development process. Conclusion This work constitutes the first effort for the selection of optimal endogenous controls for quantitative real-time RT-PCR studies of gene expression during tomato development process. From our study a tool-kit of control genes emerges that outperform the traditional genes in terms of expression stability. PMID:19102748

  12. Four-point bending as a method for quantitatively evaluating spinal arthrodesis in a rat model.

    PubMed

    Robinson, Samuel T; Svet, Mark T; Kanim, Linda A; Metzger, Melodie F

    2015-02-01

    The most common method of evaluating the success (or failure) of rat spinal fusion procedures is manual palpation testing. Whereas manual palpation provides only a subjective binary answer (fused or not fused) regarding the success of a fusion surgery, mechanical testing can provide more quantitative data by assessing variations in strength among treatment groups. Here we describe a mechanical testing method to quantitatively assess single-level spinal fusion in a rat model, to improve on the binary and subjective nature of manual palpation as an end point for fusion-related studies. We tested explanted lumbar segments from Sprague-Dawley rat spines after single-level posterolateral fusion procedures at L4-L5. Segments were classified as 'not fused,' 'restricted motion,' or 'fused' by using manual palpation testing. After thorough dissection and potting of the spine, 4-point bending in flexion then was applied to the L4-L5 motion segment, and stiffness was measured as the slope of the moment-displacement curve. Results demonstrated statistically significant differences in stiffness among all groups, which were consistent with preliminary grading according to manual palpation. In addition, the 4-point bending results provided quantitative information regarding the quality of the bony union formed and therefore enabled the comparison of fused specimens. Our results demonstrate that 4-point bending is a simple, reliable, and effective way to describe and compare results among rat spines after fusion surgery.
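
    The stiffness computation described above reduces to a linear fit of the moment-displacement data; a minimal sketch with hypothetical values follows (in practice a restricted linear region of the curve might be selected first).

    ```python
    import numpy as np

    def bending_stiffness(moment_nmm, displacement_mm):
        """Stiffness as the slope of the moment-displacement curve (linear least squares)."""
        slope, _intercept = np.polyfit(displacement_mm, moment_nmm, 1)
        return slope

    # Hypothetical 4-point bending data for one fused L4-L5 segment
    displacement = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])   # mm
    moment = np.array([0.0, 12.0, 25.0, 36.0, 50.0, 61.0])    # N*mm
    print(round(bending_stiffness(moment, displacement), 1), "N*mm/mm")
    ```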

  13. Evaluation of cultured human dermal- and dermo-epidermal substitutes focusing on extracellular matrix components: Comparison of protein and RNA analysis.

    PubMed

    Oostendorp, Corien; Meyer, Sarah; Sobrio, Monia; van Arendonk, Joyce; Reichmann, Ernst; Daamen, Willeke F; van Kuppevelt, Toin H

    2017-05-01

    Treatment of full-thickness skin defects with split-thickness skin grafts is generally associated with contraction and scar formation and cellular skin substitutes have been developed to improve skin regeneration. The evaluation of cultured skin substitutes is generally based on qualitative parameters focusing on histology. In this study we focused on quantitative evaluation to provide a template for comparison of human bio-engineered skin substitutes between clinical and/or research centers, and to supplement histological data. We focused on extracellular matrix proteins since these components play an important role in skin regeneration. As a model we analyzed the human dermal substitute denovoDerm and the dermo-epidermal skin substitute denovoSkin. The quantification of the extracellular matrix proteins type III collagen and laminin 5 in tissue homogenates using western blotting analysis and ELISA was not successful. The same was true for assaying lysyl oxidase, an enzyme involved in crosslinking of matrix molecules. As an alternative, gene expression levels were measured using qPCR. Various RNA isolation procedures were probed. The gene expression profile for specific dermal and epidermal genes could be measured reliably and reproducibly. Differences caused by changes in the cell culture conditions could easily be detected. The number of cells in the skin substitutes was measured using the PicoGreen dsDNA assay, which was found highly quantitative and reproducible. The (dis)advantages of assays used for quantitative evaluation of skin substitutes are discussed. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.

  14. Relative quantitation of glycosylation variants by stable isotope labeling of enzymatically released N-glycans using [12C]/[13C] aniline and ZIC-HILIC-ESI-TOF-MS.

    PubMed

    Giménez, Estela; Sanz-Nebot, Victòria; Rizzi, Andreas

    2013-09-01

    Glycan reductive isotope labeling (GRIL) using [12C]- and [13C]-coded aniline was used for relative quantitation of N-glycans. In a first step, the labeling method by reductive amination was optimized for this reagent. It could be demonstrated that selecting aniline as limiting reactant and using the reductant in excess is critical for achieving high derivatization yields (over 95%) and good reproducibility (relative standard deviations ∼1-5% for major and ∼5-10% for minor N-glycans). In a second step, zwitterionic-hydrophilic interaction liquid chromatography in capillary columns coupled to electrospray mass spectrometry with time-of-flight analyzer (μZIC-HILIC-ESI-TOF-MS) was applied for the analysis of labeled N-glycans released from intact glycoproteins. Ovalbumin, bovine α1-acid-glycoprotein and bovine fetuin were used as test glycoproteins to establish and evaluate the methodology. Excellent separation of isomeric N-glycans and reproducible quantitation via the extracted ion chromatograms indicate a great potential of the proposed methodology for glycoproteomic analysis and for reliable relative quantitation of glycosylation variants in biological samples.

  15. Quantitative Determination of Citric and Ascorbic Acid in Powdered Drink Mixes

    ERIC Educational Resources Information Center

    Sigmann, Samuella B.; Wheeler, Dale E.

    2004-01-01

    A procedure in which the reactions are used to quantitatively determine the amount of total acid, the amount of total ascorbic acid, and the amount of citric acid in a given sample of powdered drink mix is described. A safe, reliable, and low-cost quantitative method to analyze a consumer product for acid content is provided.

  16. Severity of illness index for surgical departments in a Cuban hospital: a revalidation study.

    PubMed

    Armas-Bencomo, Amadys; Tamargo-Barbeito, Teddy Osmin; Fuentes-Valdés, Edelberto; Jiménez-Paneque, Rosa Eugenia

    2017-03-08

    In the context of the evaluation of hospital services, the incorporation of severity indices provides an essential control variable for performance comparisons in time and space through risk adjustment. The severity index for surgical services was developed in 1999 and validated as a general index for surgical services. Sixteen years later the hospital context is different in many ways and a revalidation was considered necessary to guarantee its current usefulness. To evaluate the validity and reliability of the surgical services severity index to warrant its reasonable use under current conditions. A descriptive study was carried out in the General Surgery service of the "Hermanos Ameijeiras" Clinical Surgical Hospital of Havana, Cuba during the second half of 2010. We reviewed the medical records of 511 patients discharged from this service. Items were the same as in the original index, as were their weighted values. Conceptual or construct validity, criterion validity and inter-rater reliability as well as internal consistency of the proposed index were evaluated. Construct validity was expressed as a significant association between the value of the severity index for surgical services and discharge status. A significant association was also found, although weak, with length of hospital stay. Criterion validity was demonstrated through the correlations between the severity index for surgical services and other similar indices. Regarding criterion validity, the Horn index showed a correlation of 0.722 (95% CI: 0.677-0.761) with our index. With the POSSUM score, correlation was 0.454 (95% CI: 0.388-0.514) with mortality risk and 0.539 (95% CI: 0.462-0.607) with morbidity risk. Internal consistency yielded a standardized Cronbach's alpha of 0.8; inter-rater reliability resulted in a reliability coefficient of 0.98 for the quantitative index and a weighted global Kappa coefficient of 0.87 for the ordinal surgical index of severity for surgical services (IGQ). The validity and reliability of the proposed index were satisfactory in all aspects evaluated. The surgical services severity index may be used in the original context and is easily adaptable to other contexts as well.

  17. Developing evaluation scales for horticultural therapy.

    PubMed

    Im, Eun-Ae; Park, Sin-Ae; Son, Ki-Cheol

    2018-04-01

    This study developed evaluation scales for measuring the effects of horticultural therapy in practical settings. Qualitative and quantitative research, including three preliminary studies and a main study, were conducted. In the first study, a total of 779 horticultural therapists answered an open-ended questionnaire based on 58 items about elements of occupational therapy and seven factors about singularity of horticultural therapy. In the second study, 20 horticultural therapists participated in in-depth interviews. In the third study, a Delphi method was conducted with 24 horticultural therapists to build a model of assessment indexes and ensure its validity. In the final study, the reserve scales were tested by 121 horticultural therapists in their practical settings for 1045 clients, to verify their reliability and validity. Preliminary questions in the effects area of horticultural therapy were developed in the first study, and the validity of the components was established in the second study. In the third study, an expert Delphi survey was conducted as part of content validity verification of the preliminary tool of horticultural therapy for physical, cognitive, psychological-emotional, and social areas. In the final study, construct, convergent, discriminant, and predictive validity, together with reliability, were verified to finalise the evaluation tool. The effects of horticultural therapy were classified as four different aspects, namely, physical, cognitive, psycho-emotional, and social, based on previous studies on the effects of horticultural therapy. A total of 98 questions in the four aspects were selected as reserve scales. The reliability of each scale was calculated as 0.982 in physical, 0.980 in cognitive, 0.965 in psycho-emotional, and 0.972 in social aspects, based on Cronbach's alpha for internal consistency and Spearman-Brown split-half reliability. This study was the first to demonstrate validity and reliability by simultaneously developing four measures of horticultural therapy effectiveness, namely, physical, cognitive, psychological-emotional, and social, both locally and externally. It is especially worthwhile in that the scales can be applied to a broad range of clients. Copyright © 2018 Elsevier Ltd. All rights reserved.
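
    Cronbach's alpha and Spearman-Brown split-half reliability, as used above, can be computed as in the following sketch; the rating matrix is hypothetical.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha; items is an array (n_respondents, n_items)."""
        X = np.asarray(items, dtype=float)
        k = X.shape[1]
        item_var = X.var(axis=0, ddof=1).sum()
        total_var = X.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var / total_var)

    def spearman_brown_split_half(items):
        """Split-half reliability with the Spearman-Brown correction (odd vs even items)."""
        X = np.asarray(items, dtype=float)
        half1, half2 = X[:, 0::2].sum(axis=1), X[:, 1::2].sum(axis=1)
        r = np.corrcoef(half1, half2)[0, 1]
        return 2.0 * r / (1.0 + r)

    # Hypothetical 5-point ratings: 20 clients x 6 items of one subscale
    rng = np.random.default_rng(2)
    trait = rng.normal(3.0, 0.8, size=(20, 1))
    ratings = np.clip(np.rint(trait + rng.normal(0, 0.5, size=(20, 6))), 1, 5)
    print(round(cronbach_alpha(ratings), 3), round(spearman_brown_split_half(ratings), 3))
    ```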

  18. Quantitative PCR for Genetic Markers of Human Fecal Pollution

    EPA Science Inventory

    Assessment of health risk and fecal bacteria loads associated with human fecal pollution requires reliable host-specific analytical methods and a rapid quantification approach. We report the development of quantitative PCR assays for quantification of two recently described human-...
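
    Quantification in qPCR assays of this kind is commonly done against a standard curve built from serial dilutions of a known target; the sketch below illustrates that general approach with hypothetical Cq values and is not the specific assay described in this record.

    ```python
    import numpy as np

    def fit_standard_curve(log10_copies, cq):
        """Fit Cq = slope*log10(copies) + intercept and report amplification efficiency."""
        slope, intercept = np.polyfit(log10_copies, cq, 1)
        efficiency = 10.0 ** (-1.0 / slope) - 1.0
        return slope, intercept, efficiency

    def quantify(cq_unknown, slope, intercept):
        """Back-calculate marker copies per reaction from an unknown sample's Cq."""
        return 10.0 ** ((np.asarray(cq_unknown) - intercept) / slope)

    # Hypothetical plasmid-standard dilution series (10**2..10**6 copies/reaction)
    log10_copies = np.array([2, 3, 4, 5, 6], dtype=float)
    cq_standards = np.array([33.1, 29.8, 26.4, 23.1, 19.7])
    slope, intercept, eff = fit_standard_curve(log10_copies, cq_standards)
    print(f"slope={slope:.2f}, efficiency={eff:.1%}")
    print(quantify([31.5, 27.9], slope, intercept))   # copies in two water samples
    ```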

  19. Quantitative comparison and evaluation of software packages for assessment of abdominal adipose tissue distribution by magnetic resonance imaging.

    PubMed

    Bonekamp, S; Ghosh, P; Crawford, S; Solga, S F; Horska, A; Brancati, F L; Diehl, A M; Smith, S; Clark, J M

    2008-01-01

    To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Feature evaluation and test-retest reliability of softwares (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. A random sample of 15 obese adults with type 2 diabetes. Axial T1-weighted spin echo images centered at vertebral bodies of L2-L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. Intra-class correlation coefficient was used to obtain test-retest reliability. Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test-retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages.
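
    The intra-class correlation coefficients used above can be computed, for example, as ICC(2,1) (two-way random effects, absolute agreement, single measurement); a minimal sketch with hypothetical test-retest VAT areas follows.

    ```python
    import numpy as np

    def icc_2_1(Y):
        """ICC(2,1): two-way random effects, absolute agreement, single measurement.

        Y: array (n_subjects, k_raters), e.g. VAT areas measured with two software
        packages or on two occasions.
        """
        Y = np.asarray(Y, dtype=float)
        n, k = Y.shape
        grand = Y.mean()
        row_means, col_means = Y.mean(axis=1), Y.mean(axis=0)
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)          # between subjects
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)          # between raters
        sse = np.sum((Y - row_means[:, None] - col_means[None, :] + grand) ** 2)
        mse = sse / ((n - 1) * (k - 1))                                # residual
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Hypothetical test-retest VAT areas (cm^2) for 8 subjects
    vat = np.array([[180, 176], [95, 101], [230, 221], [150, 158],
                    [60, 66], [120, 117], [205, 199], [88, 90]], dtype=float)
    print(round(icc_2_1(vat), 3))
    ```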

  20. Quantitative comparison and evaluation of software packages for assessment of abdominal adipose tissue distribution by magnetic resonance imaging

    PubMed Central

    Bonekamp, S; Ghosh, P; Crawford, S; Solga, SF; Horska, A; Brancati, FL; Diehl, AM; Smith, S; Clark, JM

    2009-01-01

    Objective To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Design Feature evaluation and test–retest reliability of softwares (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. Subjects A random sample of 15 obese adults with type 2 diabetes. Measurements Axial T1-weighted spin echo images centered at vertebral bodies of L2–L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. Intra-class correlation coefficient was used to obtain test–retest reliability. Results Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test–retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Conclusion Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages. PMID:17700582

  1. Semiquantitative Evaluation of Extrasynovial Soft Tissue Inflammation in the Shoulders of Patients with Polymyalgia Rheumatica and Elderly-Onset Rheumatoid Arthritis by Power Doppler Ultrasound.

    PubMed

    Suzuki, Takeshi; Yoshida, Ryochi; Okamoto, Akiko; Seri, Yu

    2017-01-01

    Objectives. To develop a scoring system for evaluating the extrasynovial soft tissue inflammation of the shoulders in patients with polymyalgia rheumatica (PMR) and elderly-onset rheumatoid arthritis with PMR-like onset (pm-EORA) using ultrasound. Methods. We analyzed stored power Doppler (PD) images obtained by the pretreatment examination of 15 PMR patients and 15 pm-EORA patients. A semiquantitative scoring system for evaluating the severity of PD signals adjacent to the anterior aspect of the subscapularis tendon was designed. Results. A four-point scoring scale for hyperemia over the subscapularis tendon was proposed, in brief, as follows: 0 = absent or minimal flow, 1 = single vessel dots or short linear-shape signals, 2 = long linear-shape signals or short zone-shape signals, or 3 = long zone-shape signals. This scoring system showed good intra- and interobserver reliability and good correlation with quantitative pixel-counting evaluation. By using it, we demonstrated that inflammation in PMR is predominantly localized in extrasynovial soft tissue compared with pm-EORA. Conclusions. We proposed a reliable semiquantitative scoring system using ultrasound for the evaluation of extrasynovial soft tissue inflammation of the shoulders in patients with both PMR and pm-EORA. This system is simple to use and can be utilized in future investigations.

  2. An adaptive model approach for quantitative wrist rigidity evaluation during deep brain stimulation surgery.

    PubMed

    Assis, Sofia; Costa, Pedro; Rosas, Maria Jose; Vaz, Rui; Silva Cunha, Joao Paulo

    2016-08-01

    Intraoperative evaluation of the efficacy of Deep Brain Stimulation includes evaluation of the effect on rigidity. A subjective semi-quantitative scale is used, dependent on the examiner's perception and experience. A system was proposed previously, aiming to tackle this subjectivity, using quantitative data and providing real-time feedback of the computed rigidity reduction, hence supporting the physician's decision. This system comprised a gyroscope-based motion sensor in a textile band, placed on the patient's hand, which communicated its measurements to a laptop. The latter computed a signal descriptor from the angular velocity of the hand during wrist flexion in DBS surgery. The first approach relied on using a general rigidity reduction model, regardless of the initial severity of the symptom. Thus, to enhance the performance of the previously presented system, we aimed to develop models for high and low baseline rigidity, according to the examiner's assessment before any stimulation. This would allow a more patient-oriented approach. Additionally, usability was improved by performing in situ processing on a smartphone instead of a computer. The system has been shown to be reliable, presenting an accuracy of 82.0% and a mean error of 3.4%. Relative to previous results, the performance was similar, further supporting the importance of considering cogwheel rigidity to better infer the reduction in rigidity. Overall, we present a simple, wearable, mobile system, suitable for intraoperative conditions during DBS, supporting the physician in decision-making when setting stimulation parameters.
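
    The record does not specify the exact signal descriptor computed from the angular velocity, so the sketch below uses a simple stand-in (mean absolute angular velocity over the flexion epoch) and expresses the change against a pre-stimulation baseline, purely for illustration.

    ```python
    import numpy as np

    def flexion_descriptor(angular_velocity_dps):
        """Illustrative stand-in descriptor: mean absolute angular velocity (deg/s)
        over the wrist-flexion epoch; the published system's actual descriptor is
        not reproduced here."""
        return np.mean(np.abs(np.asarray(angular_velocity_dps, dtype=float)))

    def rigidity_change(baseline_descriptor, current_descriptor):
        """Percent change relative to the pre-stimulation baseline; faster passive
        flexion is taken here to indicate lower rigidity (an assumption)."""
        return 100.0 * (current_descriptor - baseline_descriptor) / baseline_descriptor

    # Hypothetical 2 s epochs sampled at 100 Hz
    t = np.arange(0, 2, 0.01)
    baseline = 40 * np.abs(np.sin(2 * np.pi * 1.0 * t))      # stiff, slow flexion
    stimulated = 70 * np.abs(np.sin(2 * np.pi * 1.5 * t))    # faster after stimulation
    d0, d1 = flexion_descriptor(baseline), flexion_descriptor(stimulated)
    print(f"descriptor change ≈ {rigidity_change(d0, d1):.0f}%")
    ```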

  3. Evaluation of reference genes for gene expression studies in radish (Raphanus sativus L.) using quantitative real-time PCR.

    PubMed

    Xu, Yuanyuan; Zhu, Xianwen; Gong, Yiqin; Xu, Liang; Wang, Yan; Liu, Liwang

    2012-08-03

    Real-time quantitative reverse transcription PCR (RT-qPCR) is a rapid and reliable method for gene expression studies. Normalization based on reference genes can increase the reliability of this technique; however, recent studies have shown that almost no single reference gene is universal for all possible experimental conditions. In this study, eight frequently used reference genes were investigated, including Glyceraldehyde-3-phosphate dehydrogenase (GAPDH), Actin2/7 (ACT), Tubulin alpha-5 (TUA), Tubulin beta-1 (TUB), 18S ribosomal RNA (18SrRNA), RNA polymerase-II transcription factor (RPII), Elongation factor 1-b (EF-1b) and Translation elongation factor 2 (TEF2). Expression stability of candidate reference genes was examined across 27 radish samples, representing a range of tissue types, cultivars, photoperiodic and vernalization treatments, and developmental stages. The eight genes in these sample pools displayed a wide range of Ct values and were variably expressed. Two statistical software packages, geNorm and NormFinder showed that TEF2, RPII and ACT appeared to be relatively stable and therefore the most suitable for use as reference genes. These results facilitate selection of desirable reference genes for accurate gene expression studies in radish. Copyright © 2012 Elsevier Inc. All rights reserved.
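
    geNorm, one of the two algorithms used above, ranks candidates by a stability measure M: the average standard deviation of the pairwise log2 expression ratios between a gene and every other candidate. A minimal sketch of that measure is shown below with hypothetical relative-quantity data; it is not the geNorm software itself.

    ```python
    import numpy as np

    def genorm_stability(rel_expression):
        """geNorm-style expression stability measure M for candidate reference genes.

        rel_expression: array (n_samples, n_genes) of relative quantities
        (e.g., efficiency-corrected 2**(-Cq) values). For each gene, M is the mean,
        over all other genes, of the SD of the pairwise log2 expression ratios
        across samples; lower M means more stable expression.
        """
        X = np.log2(np.asarray(rel_expression, dtype=float))
        n_genes = X.shape[1]
        M = np.empty(n_genes)
        for j in range(n_genes):
            sds = [np.std(X[:, j] - X[:, k], ddof=1) for k in range(n_genes) if k != j]
            M[j] = np.mean(sds)
        return M

    # Hypothetical relative quantities for 6 samples x 4 candidate genes
    rng = np.random.default_rng(1)
    data = 2.0 ** rng.normal(loc=[0.0, 0.1, -0.2, 0.3], scale=[0.1, 0.15, 0.4, 0.6], size=(6, 4))
    print(np.round(genorm_stability(data), 3))   # genes with smaller M are preferred
    ```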

  4. Detection of proximal caries using quantitative light-induced fluorescence-digital and laser fluorescence: a comparative study.

    PubMed

    Yoon, Hyung-In; Yoo, Min-Jeong; Park, Eun-Jin

    2017-12-01

    The purpose of this study was to evaluate the in vitro validity of quantitative light-induced fluorescence-digital (QLF-D) and laser fluorescence (DIAGNOdent) for assessing proximal caries in extracted premolars, using digital radiography as the reference method. A total of 102 extracted premolars with similar lengths and shapes were used. A single operator conducted all the examinations using three different detection methods (bitewing radiography, QLF-D, and DIAGNOdent). The bitewing x-ray scale, QLF-D fluorescence loss (ΔF), and DIAGNOdent peak readings were compared and statistically analyzed. Each method showed excellent reliability. The correlation coefficients between bitewing radiography and QLF-D and between bitewing radiography and DIAGNOdent were -0.644 and 0.448, respectively, while the value between QLF-D and DIAGNOdent was -0.382. The kappa statistics for bitewing radiography and QLF-D showed a higher diagnostic consensus than those for bitewing radiography and DIAGNOdent. QLF-D was moderately to highly accurate (AUC = 0.753-0.908), while DIAGNOdent was less to moderately accurate (AUC = 0.622-0.784). All detection methods showed statistically significant correlations, with a high correlation between bitewing radiography and QLF-D. QLF-D was found to be a valid and reliable alternative diagnostic method to digital bitewing radiography for in vitro detection of proximal caries.

  5. Constellation Ground Systems Launch Availability Analysis: Enhancing Highly Reliable Launch Systems Design

    NASA Technical Reports Server (NTRS)

    Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.

    2010-01-01

    Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to verify compliance with requirements and to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability and maintainability analysis, and present findings and observations based on the analysis leading to the Ground Systems Preliminary Design Review milestone.

  6. Development and validation of a Semi-quantitative food frequency questionnaire among older people in north of Iran

    PubMed Central

    Bijani, Ali; Esmaili, Haleh; Ghadimi, Reza; Babazadeh, Atekeh; Rezaei, Reyhaneh; G Cumming, Robert; Hosseini, Seyed Reza

    2018-01-01

    Background: The study was conducted to assess the reliability of a modified semi-quantitative food frequency questionnaire (SQFFQ) as a part of the Amirkola Health and Aging Project (AHAP). Methods: The study was carried out in a sample of 200 men and women aged 60 years and older. A 138-item SQFFQ and two 24-hour dietary recalls were completed. The reliability of the SQFFQ was evaluated by comparing eighteen food groups, energy, and nutrient intakes derived from both methods, using Spearman and Pearson correlation coefficients for food groups and nutrients, respectively. Bland-Altman plots and Pitman’s tests were applied to compare the two dietary assessment methods. Results: The mean (SD) age of subjects was 68.16 (6.56) years. The average energy intakes from the 24-hour dietary recalls and the SQFFQ were 1470.2 and 1535.4 kcal/day, respectively. Spearman correlation coefficients, comparing food group intakes based on the two dietary assessment methods, ranged from 0.25 (meat) to 0.62 (tea and coffee) in men and from 0.39 (whole grains) to 0.60 (sugars) in women. Pearson correlation coefficients for energy and nutrients ranged from 0.53 (energy) to 0.21 (zinc) in males and from 0.71 (energy) to 0.26 (vitamin C) in females. Pitman’s test reflected reasonable agreement between the mean energy and nutrient intakes from the SQFFQ and the 24-hour recalls. Conclusions: The modified SQFFQ that was designed for the AHAP was found to be reliable for assessing the intake of several food groups, energy, and micro- and macronutrients. PMID:29387324
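
    Bland-Altman agreement, as applied above, reduces to the mean difference between methods and its 95% limits of agreement; a minimal sketch with hypothetical energy intakes follows.

    ```python
    import numpy as np

    def bland_altman_limits(method_a, method_b):
        """Mean bias and 95% limits of agreement between two dietary assessment methods
        (e.g., SQFFQ vs the mean of the 24-hour recalls)."""
        a, b = np.asarray(method_a, float), np.asarray(method_b, float)
        diff = a - b
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd

    # Hypothetical energy intakes (kcal/day) for 8 participants
    sqffq  = np.array([1620, 1480, 1755, 1390, 1540, 1660, 1300, 1580])
    recall = np.array([1550, 1500, 1680, 1350, 1490, 1600, 1360, 1510])
    bias, lo, hi = bland_altman_limits(sqffq, recall)
    print(f"bias={bias:.0f} kcal/day, limits of agreement=({lo:.0f}, {hi:.0f})")
    ```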

  7. Utilization of wireless structural health monitoring as decision making tools for a condition and reliability-based assessment of railroad bridges

    NASA Astrophysics Data System (ADS)

    Flanigan, Katherine A.; Johnson, Nephi R.; Hou, Rui; Ettouney, Mohammed; Lynch, Jerome P.

    2017-04-01

    The ability to quantitatively assess the condition of railroad bridges facilitates objective evaluation of their robustness in the face of hazard events. Of particular importance is the need to assess the condition of railroad bridges in networks that are exposed to multiple hazards. Data collected from structural health monitoring (SHM) can be used to better maintain a structure by prompting preventative (rather than reactive) maintenance strategies and supplying quantitative information to aid in recovery. To that end, a wireless monitoring system is validated and installed on the Harahan Bridge which is a hundred-year-old long-span railroad truss bridge that crosses the Mississippi River near Memphis, TN. This bridge is exposed to multiple hazards including scour, vehicle/barge impact, seismic activity, and aging. The instrumented sensing system targets non-redundant structural components and areas of the truss and floor system that bridge managers are most concerned about based on previous inspections and structural analysis. This paper details the monitoring system and the analytical method for the assessment of bridge condition based on automated data-driven analyses. Two primary objectives of monitoring the system performance are discussed: 1) monitoring fatigue accumulation in critical tensile truss elements; and 2) monitoring the reliability index values associated with sub-system limit states of these members. Moreover, since the reliability index is a scalar indicator of the safety of components, quantifiable condition assessment can be used as an objective metric so that bridge owners can adopt informed damage-mitigation strategies and optimize resource management at the single-bridge or network level.
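
    For the simplest case of a linear limit state with independent normal resistance and load effect, the reliability index mentioned above has the closed form β = (μR − μS)/√(σR² + σS²), with failure probability Φ(−β). The sketch below illustrates this textbook case with hypothetical stress values; it is not the bridge's actual limit-state model.

    ```python
    import math

    def reliability_index(mu_resistance, sd_resistance, mu_load, sd_load):
        """First-order reliability index for a linear limit state g = R - S with
        independent normal resistance R and load effect S."""
        return (mu_resistance - mu_load) / math.hypot(sd_resistance, sd_load)

    def failure_probability(beta):
        """P(g < 0) = Phi(-beta) via the standard normal CDF."""
        return 0.5 * math.erfc(beta / math.sqrt(2.0))

    # Hypothetical fatigue limit state for one tensile truss member (stress units, MPa)
    beta = reliability_index(mu_resistance=250.0, sd_resistance=25.0,
                             mu_load=150.0, sd_load=30.0)
    print(round(beta, 2), f"{failure_probability(beta):.2e}")
    ```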

  8. Checklist and Scoring System for the Assessment of Soft Tissue Preservation in CT Examinations of Human Mummies.

    PubMed

    Panzer, Stephanie; Mc Coy, Mark R; Hitzl, Wolfgang; Piombino-Mascali, Dario; Jankauskas, Rimantas; Zink, Albert R; Augat, Peter

    2015-01-01

    The purpose of this study was to develop a checklist for standardized assessment of soft tissue preservation in human mummies based on whole-body computed tomography examinations, and to add a scoring system to facilitate quantitative comparison of mummies. Computed tomography examinations of 23 mummies from the Capuchin Catacombs of Palermo, Sicily (17 adults, 6 children; 17 anthropogenically and 6 naturally mummified) and 7 mummies from the crypt of the Dominican Church of the Holy Spirit of Vilnius, Lithuania (5 adults, 2 children; all naturally mummified) were used to develop the checklist following previously published guidelines. The scoring system was developed by assigning equal scores for checkpoints with equivalent quality. The checklist was evaluated by intra- and inter-observer reliability. The finalized checklist was applied to compare the groups of anthropogenically and naturally mummified bodies. The finalized checklist contains 97 checkpoints and was divided into two main categories, "A. Soft Tissues of Head and Musculoskeletal System" and "B. Organs and Organ Systems", each including various subcategories. The complete checklist had an intra-observer reliability of 98% and an inter-observer reliability of 93%. Statistical comparison revealed significantly higher values in anthropogenically compared to naturally mummified bodies for the total score and for three subcategories. In conclusion, the developed checklist allows for a standardized assessment and documentation of soft tissue preservation in whole-body computed tomography examinations of human mummies. The scoring system facilitates a quantitative comparison of the soft tissue preservation status between single mummies or mummy collections.

  9. Reliability and validity of quantifying absolute muscle hardness using ultrasound elastography.

    PubMed

    Chino, Kentaro; Akagi, Ryota; Dohi, Michiko; Fukashiro, Senshi; Takahashi, Hideyuki

    2012-01-01

    Muscle hardness is a mechanical property that represents transverse muscle stiffness. A quantitative method that uses ultrasound elastography for quantifying absolute human muscle hardness has been previously devised; however, its reliability and validity have not been completely verified. This study aimed to verify the reliability and validity of this quantitative method. The Young's moduli of seven tissue-mimicking materials (in vitro; Young's modulus range, 20-80 kPa; increments of 10 kPa) and the human medial gastrocnemius muscle (in vivo) were quantified using ultrasound elastography. On the basis of the strain/Young's modulus ratio of two reference materials, one hard and one soft (Young's moduli of 7 and 30 kPa, respectively), the Young's moduli of the tissue-mimicking materials and medial gastrocnemius muscle were calculated. The intra- and inter-investigator reliability of the method was confirmed on the basis of acceptably low coefficient of variations (≤6.9%) and substantially high intraclass correlation coefficients (≥0.77) obtained from all measurements. The correlation coefficient between the Young's moduli of the tissue-mimicking materials obtained using a mechanical method and ultrasound elastography was 0.996, which was equivalent to values previously obtained using magnetic resonance elastography. The Young's moduli of the medial gastrocnemius muscle obtained using ultrasound elastography were within the range of values previously obtained using magnetic resonance elastography. The reliability and validity of the quantitative method for measuring absolute muscle hardness using ultrasound elastography were thus verified.

  10. Reliability and Validity of Quantifying Absolute Muscle Hardness Using Ultrasound Elastography

    PubMed Central

    Chino, Kentaro; Akagi, Ryota; Dohi, Michiko; Fukashiro, Senshi; Takahashi, Hideyuki

    2012-01-01

    Muscle hardness is a mechanical property that represents transverse muscle stiffness. A quantitative method that uses ultrasound elastography for quantifying absolute human muscle hardness has been previously devised; however, its reliability and validity have not been completely verified. This study aimed to verify the reliability and validity of this quantitative method. The Young’s moduli of seven tissue-mimicking materials (in vitro; Young’s modulus range, 20–80 kPa; increments of 10 kPa) and the human medial gastrocnemius muscle (in vivo) were quantified using ultrasound elastography. On the basis of the strain/Young’s modulus ratio of two reference materials, one hard and one soft (Young’s moduli of 7 and 30 kPa, respectively), the Young’s moduli of the tissue-mimicking materials and medial gastrocnemius muscle were calculated. The intra- and inter-investigator reliability of the method was confirmed on the basis of acceptably low coefficient of variations (≤6.9%) and substantially high intraclass correlation coefficients (≥0.77) obtained from all measurements. The correlation coefficient between the Young’s moduli of the tissue-mimicking materials obtained using a mechanical method and ultrasound elastography was 0.996, which was equivalent to values previously obtained using magnetic resonance elastography. The Young’s moduli of the medial gastrocnemius muscle obtained using ultrasound elastography were within the range of values previously obtained using magnetic resonance elastography. The reliability and validity of the quantitative method for measuring absolute muscle hardness using ultrasound elastography were thus verified. PMID:23029231

  11. An illustrative overview of semi-quantitative MRI scoring of knee osteoarthritis: lessons learned from longitudinal observational studies.

    PubMed

    Roemer, F W; Hunter, D J; Crema, M D; Kwoh, C K; Ochoa-Albiztegui, E; Guermazi, A

    2016-02-01

    To introduce the most popular magnetic resonance imaging (MRI) osteoarthritis (OA) semi-quantitative (SQ) scoring systems to a broader audience with a focus on the most commonly applied scores, i.e., the MOAKS and WORMS system and illustrate similarities and differences. While the main structure and methodology of each scoring system are publicly available, the core of this overview will be an illustrative imaging atlas section including image examples from multiple OA studies applying MRI in regard to different features assessed, show specific examples of different grades and point out pitfalls and specifics of SQ assessment including artifacts, blinding to time point of acquisition and within-grade evaluation. Similarities and differences between different scoring systems are presented. Technical considerations are followed by a brief description of the most commonly utilized SQ scoring systems including their responsiveness and reliability. The second part is comprised of the atlas section presenting illustrative image examples. Evidence suggests that SQ assessment of OA by expert MRI readers is valid, reliable and responsive, which helps investigators to understand the natural history of this complex disease and to evaluate potential new drugs in OA clinical trials. Researchers have to be aware of the differences and specifics of the different systems to be able to engage in imaging assessment and interpretation of imaging-based data. SQ scoring has enabled us to explain associations of structural tissue damage with clinical manifestations of the disease and with morphological alterations thought to represent disease progression. Copyright © 2015 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  12. An illustrative overview of semi-quantitative MRI scoring of knee osteoarthritis: Lessons learned from longitudinal observational studies

    PubMed Central

    Roemer, Frank W.; Hunter, David J.; Crema, Michel D.; Kwoh, C. Kent; Ochoa-Albiztegui, Elena; Guermazi, Ali

    2015-01-01

    Objective To introduce the most popular magnetic resonance imaging (MRI) osteoarthritis (OA) semi-quantitative (SQ) scoring systems to a broader audience with a focus on the most commonly applied scores, i.e. the MOAKS and WORMS system and illustrate similarities and differences. Design While the main structure and methodology of each scoring system are publicly available, the core of this overview will be an illustrative imaging atlas section including image examples from multiple osteoarthritis studies applying MRI in regard to different features assessed, show specific examples of different grades and point out pitfalls and specifics of SQ assessment including artifacts, blinding to time point of acquisition and within-grade evaluation. Results Similarities and differences between different scoring systems are presented. Technical considerations are followed by a brief description of the most commonly utilized SQ scoring systems including their responsiveness and reliability. The second part is comprised of the atlas section presenting illustrative image examples. Conclusions Evidence suggests that SQ assessment of OA by expert MRI readers is valid, reliable and responsive, which helps investigators to understand the natural history of this complex disease and to evaluate potential new drugs in OA clinical trials. Researchers have to be aware of the differences and specifics of the different systems to be able to engage in imaging assessment and interpretation of imaging-based data. SQ scoring has enabled us to explain associations of structural tissue damage with clinical manifestations of the disease and with morphological alterations thought to represent disease progression. PMID:26318656

  13. SU-E-J-12: A New Stereological Method for Tumor Volume Evaluation for Esophageal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Y; Tianjin Medical University Cancer Institute and Hospital; East Carolina University

    2014-06-01

    Purpose: The stereological method, used to obtain three-dimensional quantitative information from two-dimensional images, is a widely used tool in the study of cells and pathology. However, the feasibility of the method for quantitative evaluation of volumes with 3D image data sets for radiotherapy clinical application has not been explored. On the other hand, a quick, easy-to-use and reliable method for tumor volume measurement is highly desired in image-guided radiotherapy (IGRT) for the assessment of response to treatment. To meet this need, a stereological method for evaluating tumor volumes for esophageal cancer is presented in this abstract. Methods: The stereology method was optimized by selecting the appropriate grid point distances and sample types. Seven patients with esophageal cancer were selected retrospectively for this study, each having pre- and post-treatment computed tomography (CT) scans. Stereological measurements were performed to evaluate the gross tumor volume (GTV) changes after radiotherapy, and the results were compared with those obtained by planimetric measurements. Two independent observers evaluated the reproducibility of volume measurement using the new stereological technique. Results: The intraobserver variation in the GTV volume estimation was 3.42 ± 1.68 cm³ (Wilcoxon matched-pairs test, Z = −1.726, P = 0.084 > 0.05); the interobserver variation in the GTV volume estimation was 22.40 ± 7.23 cm³ (Z = −3.296, P = 0.083 > 0.05), which showed the consistency of GTV volume calculation with the new method for the same and different users. The agreement level between the results from the two techniques was also evaluated. The difference between the measured GTVs was 20.10 ± 5.35 cm³ (Z = −3.101, P = 0.089 > 0.05). Variation of the measurement results using the two techniques was low and clinically acceptable. Conclusion: The good agreement between stereological and planimetric techniques proves the reliability of the stereological tumor volume estimations. The optimized stereological technique described in this abstract may provide a quick, unbiased and reproducible tool for tumor volume estimation for treatment response assessment. Supported by NSFC (#81041107, #81171342 and #31000784)
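
    The point-counting (Cavalieri) estimator underlying such stereological volume measurements is V ≈ t · a(p) · ΣP; a minimal sketch with hypothetical grid counts follows (the grid and slice spacings are illustrative, not the optimized values from the study).

    ```python
    def cavalieri_volume(points_per_slice, slice_spacing_mm, grid_spacing_mm):
        """Cavalieri point-counting estimate of a structure's volume.

        V ≈ t * a(p) * sum(P), where t is the distance between evaluated CT slices,
        a(p) = d**2 is the area associated with one grid point (square grid of
        spacing d), and P is the number of grid points hitting the GTV on each slice.
        """
        area_per_point = grid_spacing_mm ** 2
        return slice_spacing_mm * area_per_point * sum(points_per_slice)

    # Hypothetical counts on 6 CT slices through an esophageal GTV,
    # with 5 mm slice spacing and a 4 mm test grid
    counts = [3, 9, 14, 16, 11, 5]
    v_mm3 = cavalieri_volume(counts, slice_spacing_mm=5.0, grid_spacing_mm=4.0)
    print(round(v_mm3 / 1000.0, 2), "cm^3")
    ```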

  14. Development of an Electromechanical Grade to Assess Human Knee Articular Cartilage Quality.

    PubMed

    Sim, Sotcheadt; Hadjab, Insaf; Garon, Martin; Quenneville, Eric; Lavigne, Patrick; Buschmann, Michael D

    2017-10-01

    Quantitative assessments of articular cartilage function are needed to aid clinical decision making. Our objectives were to develop a new electromechanical grade to assess quantitatively cartilage quality and test its reliability. Electromechanical properties were measured using a hand-held electromechanical probe on 200 human articular surfaces from cadaveric donors and osteoarthritic patients. These data were used to create a reference electromechanical property database and to compare with visual arthroscopic International Cartilage Repair Society (ICRS) grading of cartilage degradation. The effect of patient-specific and location-specific characteristics on electromechanical properties was investigated to construct a continuous and quantitative electromechanical grade analogous to ICRS grade. The reliability of this novel grade was assessed by comparing it with ICRS grades on 37 human articular surfaces. Electromechanical properties were not affected by patient-specific characteristics for each ICRS grade, but were significantly different across the articular surface. Electromechanical properties varied linearly with ICRS grade, leading to a simple linear transformation from one scale to the other. The electromechanical grade correlated strongly with ICRS grade (r = 0.92, p < 0.0001). Additionally, the electromechanical grade detected lesions that were not found visually. This novel grade can assist the surgeon in assessing human knee cartilage by providing a quantitative and reliable grading system.

  15. THE 6-MINUTE WALK TEST AND OTHER CLINICAL ENDPOINTS IN DUCHENNE MUSCULAR DYSTROPHY: RELIABILITY, CONCURRENT VALIDITY, AND MINIMAL CLINICALLY IMPORTANT DIFFERENCES FROM A MULTICENTER STUDY

    PubMed Central

    McDonald, Craig M; Henricson, Erik K; Abresch, R Ted; Florence, Julaine; Eagle, Michelle; Gappmaier, Eduard; Glanzman, Allan M; Spiegel, Robert; Barth, Jay; Elfring, Gary; Reha, Allen; Peltz, Stuart W

    2013-01-01

    Introduction: An international clinical trial enrolled 174 ambulatory males ≥5 years old with nonsense mutation Duchenne muscular dystrophy (nmDMD). Pretreatment data provide insight into reliability, concurrent validity, and minimal clinically important differences (MCIDs) of the 6-minute walk test (6MWT) and other endpoints. Methods: Screening and baseline evaluations included the 6-minute walk distance (6MWD), timed function tests (TFTs), quantitative strength by myometry, the PedsQL, heart rate–determined energy expenditure index, and other exploratory endpoints. Results: The 6MWT proved feasible and reliable in a multicenter context. Concurrent validity with other endpoints was excellent. The MCID for 6MWD was 28.5 and 31.7 meters based on 2 statistical distribution methods. Conclusions: The ratio of MCID to baseline mean is lower for 6MWD than for other endpoints. The 6MWD is an optimal primary endpoint for Duchenne muscular dystrophy (DMD) clinical trials that are focused therapeutically on preservation of ambulation and slowing of disease progression. Muscle Nerve 48: 357–368, 2013 PMID:23674289
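
    Distribution-based MCID estimates are commonly derived from the baseline standard deviation or the standard error of measurement; the sketch below illustrates two such estimators with hypothetical 6MWD statistics, which are not necessarily the two methods or values used in the trial.

    ```python
    import math

    def mcid_half_sd(baseline_sd):
        """Distribution-based MCID estimate: one half of the baseline SD."""
        return 0.5 * baseline_sd

    def mcid_sem(baseline_sd, test_retest_reliability):
        """Distribution-based MCID estimate: one standard error of measurement,
        SEM = SD * sqrt(1 - r)."""
        return baseline_sd * math.sqrt(1.0 - test_retest_reliability)

    # Hypothetical baseline 6MWD statistics (metres); not the trial's actual values
    sd_6mwd, icc_6mwd = 68.0, 0.91
    print(round(mcid_half_sd(sd_6mwd), 1), round(mcid_sem(sd_6mwd, icc_6mwd), 1))
    ```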

  16. Quantitative T2 mapping evaluation for articular cartilage lesions in a rabbit model of anterior cruciate ligament transection osteoarthritis.

    PubMed

    Wei, Zheng-mao; Du, Xiang-ke; Huo, Tian-long; Li, Xu-bin; Quan, Guang-nan; Li, Tian-ran; Cheng, Jin; Zhang, Wei-tao

    2012-03-01

    Quantitative T2 mapping has been a widely used method for the evaluation of pathological cartilage properties, and a histological assessment system of osteoarthritis in the rabbit has recently been published. The aim of the study was to investigate the effectiveness of quantitative T2 mapping evaluation for articular cartilage lesions in a rabbit model of anterior cruciate ligament transection (ACLT) osteoarthritis. Twenty New Zealand White (NZW) rabbits were divided equally into an ACLT surgical group and a sham-operated group. The anterior cruciate ligaments of the rabbits in the ACLT group were transected, while the joints were closed intact in the sham-operated group. Magnetic resonance (MR) examinations were performed on a 3.0 T MR unit at week 0, week 6, and week 12. T2 values were computed on a GE ADW4.3 workstation. All rabbits were killed at week 13, and the left knees were stained with haematoxylin and eosin. Semiquantitative histological grading was obtained according to the osteoarthritis cartilage histopathology assessment system. Computerized image analysis was performed to quantitate the immunostained type II collagen. The average MR T2 value of whole left knee cartilage in the ACLT surgical group ((29.05±12.01) ms) was significantly higher than that in the sham-operated group ((24.52±7.97) ms) (P=0.024) at week 6. The average T2 value increased to (32.18±12.79) ms in the ACLT group at week 12, but remained near the baseline level ((27.66±8.08) ms) in the sham-operated group (P=0.03). The cartilage lesion level of the left knee in the ACLT group was significantly increased at week 6 (P=0.005) and week 12 (P<0.001). T2 values correlated positively with histological grading scores and inversely with the optical density (OD) of type II collagen. This study demonstrated the reliability and practicability of quantitative T2 mapping for cartilage injury in the rabbit ACLT osteoarthritis model.
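
    T2 maps of this kind are generated by fitting a mono-exponential decay S(TE) = S0·exp(−TE/T2) to the multi-echo signal in each pixel or region; a minimal log-linear fitting sketch with hypothetical echo data follows.

    ```python
    import numpy as np

    def fit_t2(te_ms, signal):
        """Pixelwise mono-exponential fit S(TE) = S0 * exp(-TE/T2) via log-linear
        least squares; this is the usual model behind quantitative T2 maps."""
        te = np.asarray(te_ms, float)
        s = np.asarray(signal, float)
        slope, intercept = np.polyfit(te, np.log(s), 1)
        return -1.0 / slope, np.exp(intercept)          # T2 (ms), S0

    # Hypothetical multi-echo signal from one cartilage ROI
    te = np.array([10, 20, 30, 40, 50, 60], dtype=float)          # ms
    true_t2, s0 = 29.0, 1000.0
    noise = 1 + 0.01 * np.random.default_rng(3).standard_normal(te.size)
    signal = s0 * np.exp(-te / true_t2) * noise
    t2_est, s0_est = fit_t2(te, signal)
    print(round(t2_est, 1), "ms")
    ```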

  17. Intersession reliability of fMRI activation for heat pain and motor tasks

    PubMed Central

    Quiton, Raimi L.; Keaser, Michael L.; Zhuo, Jiachen; Gullapalli, Rao P.; Greenspan, Joel D.

    2014-01-01

    As the practice of conducting longitudinal fMRI studies to assess mechanisms of pain-reducing interventions becomes more common, there is a great need to assess the test–retest reliability of the pain-related BOLD fMRI signal across repeated sessions. This study quantitatively evaluated the reliability of heat pain-related BOLD fMRI brain responses in healthy volunteers across 3 sessions conducted on separate days using two measures: (1) intraclass correlation coefficients (ICC) calculated based on signal amplitude and (2) spatial overlap. The ICC analysis of pain-related BOLD fMRI responses showed fair-to-moderate intersession reliability in brain areas regarded as part of the cortical pain network. Areas with the highest intersession reliability based on the ICC analysis included the anterior midcingulate cortex, anterior insula, and second somatosensory cortex. Areas with the lowest intersession reliability based on the ICC analysis also showed low spatial reliability; these regions included pregenual anterior cingulate cortex, primary somatosensory cortex, and posterior insula. Thus, this study found regional differences in pain-related BOLD fMRI response reliability, which may provide useful information to guide longitudinal pain studies. A simple motor task (finger-thumb opposition) was performed by the same subjects in the same sessions as the painful heat stimuli were delivered. Intersession reliability of fMRI activation in cortical motor areas was comparable to previously published findings for both spatial overlap and ICC measures, providing support for the validity of the analytical approach used to assess intersession reliability of pain-related fMRI activation. A secondary finding of this study is that the use of standard ICC alone as a measure of reliability may not be sufficient, as the underlying variance structure of an fMRI dataset can result in inappropriately high ICC values; a method to eliminate these false positive results was used in this study and is recommended for future studies of test–retest reliability. PMID:25161897
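
    The two reliability measures used in this study can be illustrated with a short sketch: ICC(3,1) computed from a subjects-by-sessions matrix of response amplitudes, and Dice overlap between thresholded activation masks. This is a generic implementation of the standard formulas, not the authors' analysis code, and it omits the variance-structure screening they recommend.

    ```python
    import numpy as np

    def icc_3_1(amplitudes):
        """ICC(3,1): two-way mixed model, single measures, for a subjects x sessions matrix."""
        x = np.asarray(amplitudes, dtype=float)
        n, k = x.shape
        grand = x.mean()
        ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)    # between subjects
        ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)    # between sessions
        ss_err = np.sum((x - grand) ** 2) - (n - 1) * ms_rows - (k - 1) * ms_cols
        ms_err = ss_err / ((n - 1) * (k - 1))                            # residual mean square
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

    def dice_overlap(mask_a, mask_b):
        """Spatial overlap between two binary activation masks."""
        a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
    ```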

  18. Evaluation of RNA extraction methods and identification of putative reference genes for real-time quantitative polymerase chain reaction expression studies on olive (Olea europaea L.) fruits.

    PubMed

    Nonis, Alberto; Vezzaro, Alice; Ruperti, Benedetto

    2012-07-11

    Genome-wide transcriptomic surveys together with targeted molecular studies are uncovering an ever increasing number of differentially expressed genes related to agriculturally relevant processes in olive (Olea europaea L.). These data need to be supported by quantitative approaches enabling precise estimation of transcript abundance. Since qPCR is the most widely adopted technique for mRNA quantification, preliminary work is needed to set up robust methods for the extraction of fully functional RNA and to identify the best reference genes for reliable quantification of transcripts. In this work, we assessed different methods for their suitability for RNA extraction from olive fruits and leaves, and we evaluated thirteen candidate reference genes on 21 RNA samples belonging to a fruit developmental/ripening series and to leaves subjected to wounding. Using two different algorithms, GAPDH2 and PP2A1 were identified as the best reference genes for olive fruit development and ripening, and their effectiveness for normalization of the expression of two ripening marker genes was demonstrated.

  19. Development of a quality instrument for assessing the spontaneous reports of ADR/ADE using Delphi method in China.

    PubMed

    Chen, Lixun; Jiang, Ling; Shen, Aizong; Wei, Wei

    2016-09-01

    The frequently low quality of submitted spontaneous reports is of increasing concern; to our knowledge, no validated instrument exists for assessing the quality of case reports comprehensively enough. This work was conducted to develop such a quality instrument for assessing spontaneous reports of adverse drug reaction (ADR)/adverse drug event (ADE) in China. Initial evaluation indicators were generated through systematic literature and data analysis. Final indicators and their weights were identified using the Delphi method. The final quality instrument was developed by adopting the synthetic scoring method. A consensus was reached after four rounds of Delphi survey. The developed quality instrument consists of 6 first-rank indicators, 18 second-rank indicators, and 115 third-rank indicators, each of which has been weighted. It evaluates the quality of spontaneous reports of ADR/ADE comprehensively and quantitatively on six parameters: authenticity, duplication, regulatory, completeness, vigilance level, and reporting time frame. The developed instrument was tested and showed good reliability and validity, and it can be used to comprehensively and quantitatively assess submitted spontaneous reports of ADR/ADE in China.

  20. Evaluation of viral removal by nanofiltration using real-time quantitative polymerase chain reaction.

    PubMed

    Zhao, Xiaowen; Bailey, Mark R; Emery, Warren R; Lambooy, Peter K; Chen, Dayue

    2007-06-01

    Nanofiltration is commonly introduced into purification processes for biologics produced in mammalian cells to serve as a designated step for the removal of potential exogenous viral contaminants and endogenous retrovirus-like particles. The LRV (log reduction value) achieved by nanofiltration is often determined by a cell-based infectivity assay, which is time-consuming and labour-intensive. We have explored the possibility of employing QPCR (quantitative PCR) to evaluate the LRV achieved by nanofiltration in scaled-down studies using two model viruses, namely xenotropic murine leukemia virus and murine minute virus. We report here the successful development of a QPCR-based method suitable for quantification of virus removal by nanofiltration. The method includes a nuclease treatment step to remove free viral nucleic acids, while viral genomes associated with intact virus particles are shielded from the nuclease. In addition, HIV Armored RNA was included as an internal control to ensure the accuracy and reliability of the method. The QPCR-based method described here provides several advantages over traditional cell-based infectivity assays, such as better sensitivity, faster turnaround time, reduced cost and higher throughput.
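
    As a rough illustration of the endpoint being estimated, the sketch below computes an LRV from QPCR-derived genome copy concentrations in the filter load and filtrate. The concentrations, volumes and function name are hypothetical, not values from the study.

    ```python
    import math

    def log_reduction_value(load_copies_per_ml, load_volume_ml,
                            filtrate_copies_per_ml, filtrate_volume_ml):
        """LRV = log10(total particle-associated genome copies challenging the
        filter / total copies recovered in the filtrate), with copy numbers
        taken from QPCR after nuclease treatment."""
        total_in = load_copies_per_ml * load_volume_ml
        total_out = filtrate_copies_per_ml * filtrate_volume_ml
        return math.log10(total_in / total_out)

    # e.g. 1e7 copies/mL in a 100 mL load vs 2e2 copies/mL in 95 mL of filtrate
    print(round(log_reduction_value(1e7, 100, 2e2, 95), 2))   # ~4.72
    ```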

  1. KIN-Nav navigation system for kinematic assessment in anterior cruciate ligament reconstruction: features, use, and perspectives.

    PubMed

    Martelli, S; Zaffagnini, S; Bignozzi, S; Lopomo, N F; Iacono, F; Marcacci, M

    2007-10-01

    In this paper a new navigation system, KIN-Nav, developed for research and used during 80 anterior cruciate ligament (ACL) reconstructions, is described. KIN-Nav is a user-friendly navigation system for flexible intraoperative acquisition of anatomical and kinematic data, suitable for the validation of biomechanical hypotheses. It performs real-time quantitative evaluation of antero-posterior, internal-external, and varus-valgus knee laxity at any degree of flexion and provides a new interface for this task, also suitable for comparison of pre-operative and post-operative knee laxity and for surgical documentation. The concept and features of KIN-Nav, which represents a new approach to navigation and allows the investigation of new quantitative measurements in ACL reconstruction, are described. Two clinical studies are reported as examples of the clinical potential and correct use of this methodology. A preliminary analysis of KIN-Nav's reliability and clinical efficacy, performed through blinded repeated measures by three independent examiners, is also given. This analysis is the first assessment of the potential of navigation systems for evaluating knee kinematics.

  2. Functional quantitative susceptibility mapping (fQSM).

    PubMed

    Balla, Dávid Z; Sanchez-Panchuelo, Rosa M; Wharton, Samuel J; Hagberg, Gisela E; Scheffler, Klaus; Francis, Susan T; Bowtell, Richard

    2014-10-15

    Blood oxygenation level dependent (BOLD) functional magnetic resonance imaging (fMRI) is a powerful technique, typically based on the statistical analysis of the magnitude component of the complex time-series. Here, we additionally interrogated the phase data of the fMRI time-series and used quantitative susceptibility mapping (QSM) in order to investigate the potential of functional QSM (fQSM) relative to standard magnitude BOLD fMRI. High spatial resolution data (1mm isotropic) were acquired every 3 seconds using zoomed multi-slice gradient-echo EPI collected at 7 T in single orientation (SO) and multiple orientation (MO) experiments, the latter involving 4 repetitions with the subject's head rotated relative to B0. Statistical parametric maps (SPM) were reconstructed for magnitude, phase and QSM time-series and each was subjected to detailed analysis. Several fQSM pipelines were evaluated and compared based on the relative number of voxels that were coincidentally found to be significant in QSM and magnitude SPMs (common voxels). We found that sensitivity and spatial reliability of fQSM relative to the magnitude data depended strongly on the arbitrary significance threshold defining "activated" voxels in SPMs, and on the efficiency of spatio-temporal filtering of the phase time-series. Sensitivity and spatial reliability depended slightly on whether MO or SO fQSM was performed and on the QSM calculation approach used for SO data. Our results present the potential of fQSM as a quantitative method of mapping BOLD changes. We also critically discuss the technical challenges and issues linked to this intriguing new technique. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Initial description of a quantitative, cross-species (chimpanzee-human) social responsiveness measure.

    PubMed

    Marrus, Natasha; Faughn, Carley; Shuman, Jeremy; Petersen, Steve E; Constantino, John N; Povinelli, Daniel J; Pruett, John R

    2011-05-01

    Comparative studies of social responsiveness, an ability that is impaired in autism spectrum disorders, can inform our understanding of both autism and the cognitive architecture of social behavior. Because there is no existing quantitative measure of social responsiveness in chimpanzees, we generated a quantitative, cross-species (human-chimpanzee) social responsiveness measure. We translated the Social Responsiveness Scale (SRS), an instrument that quantifies human social responsiveness, into an analogous instrument for chimpanzees. We then retranslated this "Chimpanzee SRS" into a human "Cross-Species SRS" (XSRS). We evaluated three groups of chimpanzees (n = 29) with the Chimpanzee SRS, and typically developing children and children with autism spectrum disorder (ASD; n = 20) with the XSRS. The Chimpanzee SRS demonstrated strong interrater reliability at the three sites (ranges for individual ICCs: 0.534 to 0.866; mean ICCs: 0.851 to 0.970). As has been observed in human beings, exploratory principal components analysis of Chimpanzee SRS scores supports a single factor underlying chimpanzee social responsiveness. Human subjects' XSRS scores were fully concordant with their SRS scores (r = 0.976, p = .001) and distinguished appropriately between typical and ASD subjects. One chimpanzee known for inappropriate social behavior displayed a significantly higher score than all other chimpanzees at its site, demonstrating the scale's ability to detect impaired social responsiveness in chimpanzees. Our initial cross-species social responsiveness scale proved reliable and discriminated differences in social responsiveness across (in a relative sense) and within (in a more objectively quantifiable manner) human beings and chimpanzees. Copyright © 2011 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  4. Measuring competence in endoscopic sinus surgery.

    PubMed

    Syme-Grant, J; White, P S; McAleer, J P G

    2008-02-01

    Competence based education is currently being introduced into higher surgical training in the UK. Valid and reliable performance assessment tools are essential to ensure competencies are achieved. No such tools have yet been reported in the UK literature. We sought to develop and pilot test an Endoscopic Sinus Surgery Competence Assessment Tool (ESSCAT). The ESSCAT was designed for in-theatre assessment of higher surgical trainees in the UK. The ESSCAT rating matrix was developed through task analysis of ESS procedures. All otolaryngology consultants and specialist registrars in Scotland were given the opportunity to contribute to its refinement. Two cycles of in-theatre testing were used to ensure utility and gather quantitative data on validity and reliability. Videos of trainees performing surgery were used in establishing inter-rater reliability. National consultation, the consensus derived minimum standard of performance, Cronbach's alpha = 0.89 and demonstration of trainee learning (p = 0.027) during the in vivo application of the ESSCAT suggest a high level of validity. Inter-rater reliability was moderate for competence decisions (Cohen's Kappa = 0.5) and good for total scores (Intra-Class Correlation Co-efficient = 0.63). Intra-rater reliability was good for both competence decisions (Kappa = 0.67) and total scores (Kendall's Tau-b = 0.73). The ESSCAT generates a valid and reliable assessment of trainees' in-theatre performance of endoscopic sinus surgery. In conjunction with ongoing evaluation of the instrument we recommend the use of the ESSCAT in higher specialist training in otolaryngology in the UK.

  5. Quantitative PCR for genetic markers of human fecal pollution

    EPA Science Inventory

    Assessment of health risk and fecal bacteria loads associated with human fecal pollution requires reliable host-specific analytical methods and a rapid quantification approach. We report the development of quantitative PCR assays for enumeration of two recently described hum...

  6. Multicenter Evaluation of a Commercial Cytomegalovirus Quantitative Standard: Effects of Commutability on Interlaboratory Concordance

    PubMed Central

    Shahbazian, M. D.; Valsamakis, A.; Boonyaratanakornkit, J.; Cook, L.; Pang, X. L.; Preiksaitis, J. K.; Schönbrunner, E. R.; Caliendo, A. M.

    2013-01-01

    Commutability of quantitative reference materials has proven important for reliable and accurate results in clinical chemistry. As international reference standards and commercially produced calibration material have become available to address the variability of viral load assays, the degree to which such materials are commutable and the effect of commutability on assay concordance have been questioned. To investigate this, 60 archived clinical plasma samples, which previously tested positive for cytomegalovirus (CMV), were retested by five different laboratories, each using a different quantitative CMV PCR assay. Results from each laboratory were calibrated both with lab-specific quantitative CMV standards (“lab standards”) and with common, commercially available standards (“CMV panel”). Pairwise analyses among laboratories were performed using mean results from each clinical sample, calibrated first with lab standards and then with the CMV panel. Commutability of the CMV panel was determined based on difference plots for each laboratory pair showing plotted values of standards that were within the 95% prediction intervals for the clinical specimens. Commutability was demonstrated for 6 of 10 laboratory pairs using the CMV panel. In half of these pairs, use of the CMV panel improved quantitative agreement compared to use of lab standards. Two of four laboratory pairs for which the CMV panel was noncommutable showed reduced quantitative agreement when that panel was used as a common calibrator. Commutability of calibration material varies across different quantitative PCR methods. Use of a common, commutable quantitative standard can improve agreement across different assays; use of a noncommutable calibrator can reduce agreement among laboratories. PMID:24025907

  7. Augmenting Amyloid PET Interpretations With Quantitative Information Improves Consistency of Early Amyloid Detection.

    PubMed

    Harn, Nicholas R; Hunt, Suzanne L; Hill, Jacqueline; Vidoni, Eric; Perry, Mark; Burns, Jeffrey M

    2017-08-01

    Establishing reliable methods for interpreting elevated cerebral amyloid-β plaque on PET scans is increasingly important for radiologists, as availability of PET imaging in clinical practice increases. We examined a 3-step method to detect plaque in cognitively normal older adults, focusing on the additive value of quantitative information during the PET scan interpretation process. Fifty-five 18F-florbetapir PET scans were evaluated by 3 experienced raters. Scans were first visually interpreted as having "elevated" or "nonelevated" plaque burden ("Visual Read"). Images were then processed using a standardized quantitative analysis software (MIMneuro) to generate whole brain and region of interest SUV ratios. This "Quantitative Read" was considered elevated if at least 2 of 6 regions of interest had an SUV ratio of more than 1.1. The final interpretation combined both visual and quantitative data together ("VisQ Read"). Cohen kappa values were assessed as a measure of interpretation agreement. Plaque was elevated in 25.5% to 29.1% of the 165 total Visual Reads. Interrater agreement was strong (kappa = 0.73-0.82) and consistent with reported values. Quantitative Reads were elevated in 45.5% of participants. Final VisQ Reads changed from initial Visual Reads in 16 interpretations (9.7%), with most changing from "nonelevated" Visual Reads to "elevated." These changed interpretations demonstrated lower plaque quantification than those initially read as "elevated" that remained unchanged. Interrater variability improved for VisQ Reads with the addition of quantitative information (kappa = 0.88-0.96). Inclusion of quantitative information increases consistency of PET scan interpretations for early detection of cerebral amyloid-β plaque accumulation.
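
    The "Quantitative Read" decision rule described above is simple enough to state in code. The sketch below applies the 2-of-6-regions, SUV ratio > 1.1 criterion; the region names are placeholders rather than the actual MIMneuro ROI set.

    ```python
    def quantitative_read(suvr_by_region, threshold=1.1, min_regions=2):
        """Elevated if at least `min_regions` ROIs exceed the SUV ratio threshold."""
        n_elevated = sum(1 for value in suvr_by_region.values() if value > threshold)
        return "elevated" if n_elevated >= min_regions else "nonelevated"

    scan = {"precuneus": 1.18, "frontal": 1.05, "anterior_cingulate": 1.12,
            "parietal": 1.02, "temporal": 0.98, "occipital": 1.00}
    print(quantitative_read(scan))   # "elevated": two regions exceed 1.1
    ```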

  8. Quality evaluation of Shenmaidihuang Pills based on the chromatographic fingerprints and simultaneous determination of seven bioactive constituents.

    PubMed

    Liu, Sifei; Zhang, Guangrui; Qiu, Ying; Wang, Xiaobo; Guo, Lihan; Zhao, Yanxin; Tong, Meng; Wei, Lan; Sun, Lixin

    2016-12-01

    In this study, we aimed to establish a comprehensive and practical quality evaluation system for Shenmaidihuang pills. A simple and reliable high-performance liquid chromatography method coupled with photodiode array detection was developed for both fingerprint analysis and quantitative determination. In the fingerprint analysis, relative retention time and relative peak area were used to identify the common peaks in 18 samples. Twenty-one peaks were selected as common peaks to evaluate the similarities of 18 Shenmaidihuang pills samples with different manufacture dates. Furthermore, similarity analysis was applied to evaluate the similarity of the samples, and hierarchical cluster analysis and principal component analysis were performed to evaluate the variation of Shenmaidihuang pills. In the quantitative analysis, linear regression, injection precision, recovery, repeatability and sample stability were all tested, and good results were obtained for the simultaneous determination of the seven identified compounds, namely 5-hydroxymethylfurfural, morroniside, loganin, paeonol, paeoniflorin, psoralen and isopsoralen, in Shenmaidihuang pills. The contents of some analytes differed significantly between batches, especially 5-hydroxymethylfurfural. It was therefore concluded that the chromatographic fingerprint method obtained by high-performance liquid chromatography coupled with photodiode array detection, combined with the determination of multiple compounds, is a powerful and meaningful tool for comprehensive quality control of Shenmaidihuang pills. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Reaching and writing movements: sensitive and reliable tools to measure genetic dystonia in children.

    PubMed

    Casellato, Claudia; Zorzi, Giovanna; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Nardocci, Nardo

    2011-07-01

    The aim of this study was to provide a quantitative assessment of pure dystonia in a group of children. Kinematic and muscular characteristics of unconstrained upper-limb movements, reaching and writing, were investigated. During reaching, the distinguishing factors of dystonic movement were reduced velocity, loss of focalization of muscular activation, and impaired rest-movement modulation. The muscular parameters were able to linearly discriminate the different levels of severity. These results support the hypothesis that basal ganglia dysfunction is responsible for compromising the focusing of motor activity. The handwriting movement revealed that kinematic coordination was altered depending on dystonia severity scores. The two protocols proved feasible and sensitive, detecting even local and subclinical signs. Hence, this work provides a contribution toward a reliable assessment of pure dystonia, crucial for the clinical characterization of patients and the evaluation of different treatment options.

  10. NDE research efforts at the FAA Center for Aviation Systems Reliability

    NASA Technical Reports Server (NTRS)

    Thompson, Donald O.; Brasche, Lisa J. H.

    1992-01-01

    The Federal Aviation Administration-Center for Aviation Systems Reliability (FAA-CASR), a part of the Institute for Physical Research and Technology at Iowa State University, began operation in the Fall of 1990 with funding from the FAA. The mission of the FAA-CASR is to develop quantitative nondestructive evaluation (NDE) methods for aircraft structures and materials including prototype instrumentation, software, techniques, and procedures and to develop and maintain comprehensive education and training programs in aviation specific inspection procedures and practices. To accomplish this mission, FAA-CASR brings together resources from universities, government, and industry to develop a comprehensive approach to problems specific to the aviation industry. The problem areas are targeted by the FAA, aviation manufacturers, the airline industry and other members of the aviation business community. This consortium approach ensures that the focus of the efforts is on relevant problems and also facilitates effective transfer of the results to industry.

  11. A simple quantitative diagnostic alternative for MGMT DNA-methylation testing on RCL2 fixed paraffin embedded tumors using restriction coupled qPCR.

    PubMed

    Pulverer, Walter; Hofner, Manuela; Preusser, Matthias; Dirnberger, Elisabeth; Hainfellner, Johannes A; Weinhaeusel, Andreas

    2014-01-01

    MGMT promoter methylation is associated with favorable prognosis and chemosensitivity in glioblastoma multiforme (GBM), especially in elderly patients. We aimed to develop a simple methylation-sensitive restriction enzyme (MSRE)-based quantitative PCR (qPCR) assay, allowing the quantification of MGMT promoter methylation. DNA was extracted from non-neoplastic brain (n = 24) and GBM samples (n = 20) under 3 different sample conservation conditions (-80 °C; formalin-fixed and paraffin-embedded (FFPE); RCL2-fixed). We evaluated the suitability of each fixation method with respect to the MSRE-coupled qPCR methylation analyses. Methylation data were validated by MALDI-TOF. qPCR was used for evaluation of alternative tissue conservation procedures. DNA from FFPE tissue failed reliable testing; DNA from both RCL2-fixed and fresh frozen tissues performed equally well and was further used for validation of the quantitative MGMT methylation assay (limit of detection (LOD): 19.58 pg), using the individual's undigested sample DNA for calibration. MGMT methylation analysis in non-neoplastic brain identified a background methylation of 0.10 ± 0.11%, which we used to define a cut-off of 0.32% for patient stratification. Of the GBM patients, 9 were MGMT methylation-positive (range: 0.56 - 91.95%) and 11 tested negative. MALDI-TOF measurements resulted in a concordant classification of 94% of GBM samples in comparison to qPCR. The presented methodology allows quantitative MGMT promoter methylation analyses. An amount of 200 ng DNA is sufficient for triplicate analyses including control reactions and individual calibration curves, thus excluding any DNA quality-derived bias. The combination of RCL2 fixation and quantitative methylation analysis improves pathological routine examination when histological and molecular analyses on limited amounts of tumor sample are necessary for patient stratification.
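
    Assuming the 0.32% cut-off corresponds to the background mean plus two standard deviations, which is consistent with the figures above although the abstract does not state the rule explicitly, a stratification sketch would look like this:

    ```python
    import numpy as np

    def methylation_cutoff(background_pct, k=2.0):
        """Cut-off from non-neoplastic background methylation as mean + k * SD (assumed rule)."""
        bg = np.asarray(background_pct, dtype=float)
        return bg.mean() + k * bg.std(ddof=1)

    def classify(sample_pct, cutoff):
        return "MGMT methylation-positive" if sample_pct > cutoff else "MGMT methylation-negative"

    background = [0.02, 0.15, 0.08, 0.21, 0.05, 0.09]    # % methylation, illustrative values
    cutoff = methylation_cutoff(background)
    print(round(cutoff, 2), classify(0.56, cutoff))
    ```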

  12. Ultrasonic test of resistance spot welds based on wavelet package analysis.

    PubMed

    Liu, Jing; Xu, Guocheng; Gu, Xiaopeng; Zhou, Guanghao

    2015-02-01

    In this paper, ultrasonic testing of spot welds in stainless steel sheets has been studied. It is shown that traditional ultrasonic signal analysis in either the time domain or the frequency domain remains inadequate for evaluating the nugget diameter of spot welds. However, a method based on wavelet packet analysis in the time-frequency domain can easily distinguish the nugget from the corona bond by extracting high-frequency signals at different positions of the spot weld, thereby quantitatively evaluating the nugget diameter. The ultrasonic test results fit the actual measured values well; the error statistics followed a normal distribution with a mean of 0.00187 and a standard deviation of 0.1392. Furthermore, the quality of the spot welds was evaluated, and it was shown that ultrasonic nondestructive testing based on wavelet packet analysis can be used to evaluate the quality of spot welds and is more reliable than a single tensile destructive test. Copyright © 2014 Elsevier B.V. All rights reserved.
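
    A hedged sketch of the kind of time-frequency feature extraction described here: wavelet packet decomposition of an ultrasonic A-scan and the relative energy of each terminal frequency band. The wavelet family, decomposition depth and synthetic signal are assumptions, not the authors' settings.

    ```python
    import numpy as np
    import pywt

    def band_energies(a_scan, wavelet="db4", level=3):
        """Relative energy of each terminal wavelet-packet node (frequency band)."""
        wp = pywt.WaveletPacket(data=a_scan, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="freq")
        energies = np.array([np.sum(np.square(node.data)) for node in nodes])
        return energies / energies.sum()

    # Synthetic A-scan: a 5 MHz Gaussian-windowed burst plus noise, ~100 MHz sampling
    rng = np.random.default_rng(0)
    t = np.linspace(0, 20e-6, 2048)
    a_scan = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 8e-6) ** 2) / 2e-12)
    a_scan += 0.05 * rng.normal(size=t.size)
    print(band_energies(a_scan).round(3))
    ```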

  13. Portfolio as a tool to evaluate clinical competences of traumatology in medical students

    PubMed Central

    Santonja-Medina, Fernando; García-Sanz, M Paz; Martínez-Martínez, Francisco; Bó, David; García-Estañ, Joaquín

    2016-01-01

    This article investigates whether a reflexive portfolio is instrumental in determining the level of acquisition of clinical competences in traumatology, a subject in the 5th year of the degree of medicine. A total of 131 students used the portfolio during their clinical rotation of traumatology. The students’ portfolios were blind evaluated by four professors who annotated the existence (yes/no) of 23 learning outcomes. The reliability of the portfolio was moderate, according to the kappa index (0.48), but the evaluation scores between evaluators were very similar. Considering the mean percentage, 59.8% of the students obtained all the competences established and only 13 of the 23 learning outcomes (56.5%) were fulfilled by >50% of the students. Our study suggests that the portfolio may be an important tool to quantitatively analyze the acquisition of traumatology competences of medical students, thus allowing the implementation of methods to improve its teaching. PMID:26929675

  14. Portfolio as a tool to evaluate clinical competences of traumatology in medical students.

    PubMed

    Santonja-Medina, Fernando; García-Sanz, M Paz; Martínez-Martínez, Francisco; Bó, David; García-Estañ, Joaquín

    2016-01-01

    This article investigates whether a reflexive portfolio is instrumental in determining the level of acquisition of clinical competences in traumatology, a subject in the 5th year of the degree of medicine. A total of 131 students used the portfolio during their clinical rotation of traumatology. The students' portfolios were blind evaluated by four professors who annotated the existence (yes/no) of 23 learning outcomes. The reliability of the portfolio was moderate, according to the kappa index (0.48), but the evaluation scores between evaluators were very similar. Considering the mean percentage, 59.8% of the students obtained all the competences established and only 13 of the 23 learning outcomes (56.5%) were fulfilled by >50% of the students. Our study suggests that the portfolio may be an important tool to quantitatively analyze the acquisition of traumatology competences of medical students, thus allowing the implementation of methods to improve its teaching.

  15. Normalization of Reverse Transcription Quantitative PCR Data During Ageing in Distinct Cerebral Structures.

    PubMed

    Bruckert, G; Vivien, D; Docagne, F; Roussel, B D

    2016-04-01

    Reverse transcription quantitative polymerase chain reaction (RT-qPCR) has become a routine method in many laboratories. Normalization of data across experimental conditions is critical for data processing and is usually achieved by the use of a single reference gene. Nevertheless, as pointed out by the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines, several reference genes should be used for reliable normalization. Ageing is a physiological process that results in a decline in the expression of many genes. Reliable normalization of RT-qPCR data therefore becomes crucial when studying ageing. Here, we present an RT-qPCR study of four mouse brain regions (cortex, hippocampus, striatum and cerebellum) at different ages (from 8 weeks to 22 months) in which we studied the expression of nine commonly used reference genes. Using two different algorithms, we found that all brain structures need at least two genes for a good normalization step. We propose specific pairs of genes for efficient data normalization in the four brain regions studied. These results underline the importance of reliable reference genes for specific brain regions in ageing.
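
    A minimal sketch of normalization against several reference genes, as the MIQE guidelines recommend: relative quantities are divided by the geometric mean of the reference-gene quantities. The Ct values and the assumed 100% amplification efficiency are illustrative.

    ```python
    import numpy as np

    def normalized_expression(ct_target, ct_references, efficiency=2.0):
        """Normalize a target gene against the geometric mean of several reference genes.

        Relative quantity is taken as efficiency ** (-Ct); `ct_references` is an
        array of shape (n_reference_genes, n_samples)."""
        rq_target = efficiency ** (-np.asarray(ct_target, dtype=float))
        rq_refs = efficiency ** (-np.asarray(ct_references, dtype=float))
        normalization_factor = np.exp(np.mean(np.log(rq_refs), axis=0))   # geometric mean
        return rq_target / normalization_factor

    ct_goi = [24.1, 25.3, 23.8]                  # gene of interest, three samples
    ct_refs = [[18.0, 18.4, 17.9],               # reference gene 1
               [20.1, 20.6, 19.8]]               # reference gene 2
    print(normalized_expression(ct_goi, ct_refs))
    ```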

  16. The De-Escalating Aggressive Behaviour Scale: development and psychometric testing.

    PubMed

    Nau, Johannes; Halfens, Ruud; Needham, Ian; Dassen, Theo

    2009-09-01

    This paper is a report of a study to develop and test the psychometric properties of a scale measuring nursing students' performance in de-escalation of aggressive behaviour. Successful training should lead not merely to more knowledge and amended attitudes but also to improved performance. However, the quality of de-escalation performance is difficult to assess. Based on a qualitative investigation, seven topics pertaining to de-escalating behaviour were identified and the wording of items tested. The properties of the items and the scale were investigated quantitatively. A total of 1748 performance evaluations by students (rater group 1) from a skills laboratory were used to check distribution and conduct a factor analysis. Likewise, 456 completed evaluations by de-escalation experts (rater group 2) of videotaped performances at pre- and posttest were used to investigate internal consistency, interrater reliability, test-retest reliability, effect size and factor structure. Data were collected in 2007-2008 in German. Factor analysis showed a unidimensional 7-item scale with factor loadings ranging from 0.55 to 0.81 (rater group 1) and 0.48 to 0.88 (rater group 2). Cronbach's alphas of 0.87 and 0.88 indicated good internal consistency irrespective of rater group. A Pearson's r of 0.80 confirmed acceptable test-retest reliability, and interrater reliability Intraclass Correlation 3 ranging from 0.77 to 0.93 also showed acceptable results. The effect size r of 0.53 plus Cohen's d of 1.25 indicates the capacity of the scale to detect changes in performance. Further research is needed to test the English version of the scale and its validity.

  17. Environmental Profile of a Community’s Health (EPOCH): An Ecometric Assessment of Measures of the Community Environment Based on Individual Perception

    PubMed Central

    Corsi, Daniel J.; Subramanian, S. V.; McKee, Martin; Li, Wei; Swaminathan, Sumathi; Lopez-Jaramillo, Patricio; Avezum, Alvaro; Lear, Scott A.; Dagenais, Gilles; Rangarajan, Sumathy; Teo, Koon; Yusuf, Salim; Chow, Clara K.

    2012-01-01

    Background Public health research has turned towards examining upstream, community-level determinants of cardiovascular disease risk factors. Objective measures of the environment, such as those derived from direct observation, and perception-based measures by residents have both been associated with health behaviours. However, current methods are generally limited to objective measures, often derived from administrative data, and few instruments have been evaluated for use in rural areas or in low-income countries. We evaluate the reliability of a quantitative tool designed to capture perceptions of community tobacco, nutrition, and social environments obtained from interviews with residents in communities in 5 countries. Methodology/Principal Findings Thirteen measures of the community environment were developed from responses to questionnaire items from 2,360 individuals residing in 84 urban and rural communities in 5 countries (China, India, Brazil, Colombia, and Canada) in the Environmental Profile of a Community’s Health (EPOCH) study. Reliability and other properties of the community-level measures were assessed using multilevel models. High reliability (>0.80) was demonstrated for all community-level measures at the mean number of survey respondents per community (n = 28 respondents). Questionnaire items included in each scale were found to represent a common latent factor at the community level in multilevel factor analysis models. Conclusions/Significance Reliable measures which represent aspects of communities potentially related to cardiovascular disease (CVD)/risk factors can be obtained using feasible sample sizes. The EPOCH instrument is suitable for use in different settings to explore upstream determinants of CVD/risk factors. PMID:22973446
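
    The community-level reliability quoted here follows the standard ecometric formula, in which the between-community variance component is weighed against within-community sampling noise at a given number of respondents. The sketch below uses illustrative variance components, not the EPOCH estimates.

    ```python
    def community_reliability(var_between, var_within, n_respondents):
        """Ecometric reliability of a community-level mean from multilevel variance components."""
        return var_between / (var_between + var_within / n_respondents)

    # Illustrative components; reliability rises toward 1 as respondents per community increase
    print(round(community_reliability(0.30, 1.50, 28), 2))   # ~0.85 at n = 28
    ```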

  18. High-resolution audiometry: an automated method for hearing threshold acquisition with quality control.

    PubMed

    Bian, Lin

    2012-01-01

    In clinical practice, hearing thresholds are measured at only five to six frequencies at octave intervals. Thus, the audiometric configuration cannot closely reflect the actual status of the auditory structures. In addition, differential diagnosis requires quantitative comparison of behavioral thresholds with physiological measures, such as otoacoustic emissions (OAEs) that are usually measured in higher resolution. The purpose of this research was to develop a method to improve the frequency resolution of the audiogram. A repeated-measure design was used in the study to evaluate the reliability of the threshold measurements. A total of 16 participants with clinically normal hearing and mild hearing loss were recruited from a population of university students. No intervention was involved in the study. Custom developed system and software were used for threshold acquisition with quality control (QC). With real-ear calibration and monitoring of test signals, the system provided accurate and individualized measure of hearing thresholds that were determined by an analysis based on signal detection theory (SDT). The reliability of the threshold measure was assessed by correlation and differences between the repeated measures. The audiometric configurations were diverse and unique to each individual ear. The accuracy, within-subject reliability, and between-test repeatability are relatively high. With QC, the high-resolution audiograms can be reliably and accurately measured. Hearing thresholds measured as ear canal sound pressures with higher frequency resolution can provide more customized hearing-aid fitting. The test system may be integrated with other physiological measures, such as OAEs, into a comprehensive evaluative tool. American Academy of Audiology.

  19. PET with the HIDAC camera?

    NASA Astrophysics Data System (ADS)

    Townsend, D. W.

    1988-06-01

    In 1982 the first prototype high density avalanche chamber (HIDAC) positron camera became operational in the Division of Nuclear Medicine of Geneva University Hospital. The camera consisted of dual 20 cm × 20 cm HIDAC detectors mounted on a rotating gantry. In 1984, these detectors were replaced by 30 cm × 30 cm detectors with improved performance and reliability. Since then, the larger detectors have undergone clinical evaluation. This article discusses certain aspects of the evaluation program and the conclusions that can be drawn from the results. The potential of the HIDAC camera for quantitative positron emission tomography (PET) is critically examined, and its performance compared with a state-of-the-art, commercial ring camera. Guidelines for the design of a future HIDAC camera are suggested.

  20. Communication among neurons.

    PubMed

    Marner, Lisbeth

    2012-04-01

    The communication among neurons is the prerequisite for the working brain. To understand the cellular, neurochemical, and structural basis of this communication, and the impacts of aging and disease on brain function, quantitative measures are necessary. This thesis evaluates several quantitative neurobiological methods with respect to possible bias and methodological issues. Stereological methods are suited for the unbiased estimation of number, length, and volumes of components of the nervous system. Stereological estimates of the total length of myelinated nerve fibers were made in white matter of post mortem brains, and the impact of aging and diseases such as schizophrenia and Alzheimer's disease was evaluated. Although stereological methods are in principle unbiased, shrinkage artifacts are difficult to account for. Positron emission tomography (PET) recordings, in conjunction with kinetic modeling, permit the quantitation of radioligand binding in the brain. The novel serotonin 5-HT4 antagonist [11C]SB207145 was used as an example of the validation process for quantitative PET receptor imaging. Methods based on reference tissue as well as methods based on an arterial plasma input function were evaluated with respect to precision and accuracy. It was shown that [11C]SB207145 binding had high sensitivity to occupancy by unlabeled ligand, necessitating high specific activity in the radiosynthesis to avoid bias. The established serotonin 5-HT2A ligand [18F]altanserin was evaluated in a two-year follow-up study in elderly subjects. Application of partial volume correction to the PET data diminished the reliability of the measures, but allowed for the correct distinction between changes due to brain atrophy and changes in receptor availability. Furthermore, a PET study of patients with Alzheimer's disease with the serotonin transporter ligand [11C]DASB showed relatively preserved serotonergic projections, despite a marked decrease in 5-HT2A receptor binding. Possible confounders are considered and the relation to the prevailing beta-amyloid hypothesis is discussed.

  1. Evaluation of validity and reliability of a methodology for measuring human postural attitude and its relation to temporomandibular joint disorders

    PubMed Central

    Fernández, Ramón Fuentes; Carter, Pablo; Muñoz, Sergio; Silva, Héctor; Venegas, Gonzalo Hernán Oporto; Cantin, Mario; Ottone, Nicolás Ernesto

    2016-01-01

    INTRODUCTION Temporomandibular joint disorders (TMJDs) are caused by several factors such as anatomical, neuromuscular and psychological alterations. A relationship has been established between TMJDs and postural alterations, a type of anatomical alteration. An anterior position of the head requires hyperactivity of the posterior neck region and shoulder muscles to prevent the head from falling forward. This compensatory muscular function may cause fatigue, discomfort and trigger point activation. To our knowledge, a method for assessing human postural attitude in more than one plane has not been reported. Thus, the aim of this study was to design a methodology to measure the external human postural attitude in frontal and sagittal planes, with proper validity and reliability analyses. METHODS The variable postures of 78 subjects (36 men, 42 women; age 18–24 years) were evaluated. The postural attitudes of the subjects were measured in the frontal and sagittal planes, using an acromiopelvimeter, grid panel and Fox plane. RESULTS The method we designed for measuring postural attitudes had adequate reliability and validity, both qualitatively and quantitatively, based on Cohen’s Kappa coefficient (> 0.87) and Pearson’s correlation coefficient (r = 0.824, > 80%). CONCLUSION This method exhibits adequate metrical properties and can therefore be used in further research on the association of human body posture with skeletal types and TMJDs. PMID:26768173

  2. Evaluation of validity and reliability of a methodology for measuring human postural attitude and its relation to temporomandibular joint disorders.

    PubMed

    Fuentes Fernández, Ramón; Carter, Pablo; Muñoz, Sergio; Silva, Héctor; Oporto Venegas, Gonzalo Hernán; Cantin, Mario; Ottone, Nicolás Ernesto

    2016-04-01

    Temporomandibular joint disorders (TMJDs) are caused by several factors such as anatomical, neuromuscular and psychological alterations. A relationship has been established between TMJDs and postural alterations, a type of anatomical alteration. An anterior position of the head requires hyperactivity of the posterior neck region and shoulder muscles to prevent the head from falling forward. This compensatory muscular function may cause fatigue, discomfort and trigger point activation. To our knowledge, a method for assessing human postural attitude in more than one plane has not been reported. Thus, the aim of this study was to design a methodology to measure the external human postural attitude in frontal and sagittal planes, with proper validity and reliability analyses. The variable postures of 78 subjects (36 men, 42 women; age 18-24 years) were evaluated. The postural attitudes of the subjects were measured in the frontal and sagittal planes, using an acromiopelvimeter, grid panel and Fox plane. The method we designed for measuring postural attitudes had adequate reliability and validity, both qualitatively and quantitatively, based on Cohen's Kappa coefficient (> 0.87) and Pearson's correlation coefficient (r = 0.824, > 80%). This method exhibits adequate metrical properties and can therefore be used in further research on the association of human body posture with skeletal types and TMJDs. Copyright © Singapore Medical Association.

  3. Quantitative Tumor Segmentation for Evaluation of Extent of Glioblastoma Resection to Facilitate Multisite Clinical Trials

    PubMed Central

    Cordova, James S; Schreibmann, Eduard; Hadjipanayis, Costas G; Guo, Ying; Shu, Hui-Kuo G; Shim, Hyunsuk; Holder, Chad A

    2014-01-01

    Standard-of-care therapy for glioblastomas, the most common and aggressive primary adult brain neoplasm, is maximal safe resection, followed by radiation and chemotherapy. Because maximizing resection may be beneficial for these patients, improving tumor extent of resection (EOR) with methods such as intraoperative 5-aminolevulinic acid fluorescence-guided surgery (FGS) is currently under evaluation. However, it is difficult to reproducibly judge EOR in these studies due to the lack of reliable tumor segmentation methods, especially for postoperative magnetic resonance imaging (MRI) scans. Therefore, a reliable, easily distributable segmentation method is needed to permit valid comparison, especially across multiple sites. We report a segmentation method that combines versatile region-of-interest blob generation with automated clustering methods. We applied this to glioblastoma cases undergoing FGS and matched controls to illustrate the method's reliability and accuracy. Agreement and interrater variability between segmentations were assessed using the concordance correlation coefficient, and spatial accuracy was determined using the Dice similarity index and mean Euclidean distance. Fuzzy C-means clustering with three classes was the best performing method, generating volumes with high agreement with manual contouring and high interrater agreement preoperatively and postoperatively. The proposed segmentation method allows tumor volume measurements of contrast-enhanced T1-weighted images in the unbiased, reproducible fashion necessary for quantifying EOR in multicenter trials. PMID:24772206
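
    One of the agreement measures used here, the concordance correlation coefficient, can be sketched directly from its definition; this is a generic implementation of Lin's formula applied to paired tumor volumes, not the authors' pipeline.

    ```python
    import numpy as np

    def concordance_correlation(volumes_x, volumes_y):
        """Lin's concordance correlation coefficient between two raters' tumor volumes."""
        x = np.asarray(volumes_x, dtype=float)
        y = np.asarray(volumes_y, dtype=float)
        covariance = np.mean((x - x.mean()) * (y - y.mean()))
        return 2.0 * covariance / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    # Paired volumes (cm^3) from two raters, illustrative values only
    rater_a = [12.4, 30.1, 7.8, 22.5, 15.0]
    rater_b = [12.9, 29.4, 8.3, 21.7, 15.8]
    print(round(concordance_correlation(rater_a, rater_b), 3))
    ```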

  4. An Observed Structured Teaching Evaluation Demonstrates the Impact of a Resident-as-Teacher Curriculum on Teaching Competency.

    PubMed

    Zackoff, Matthew; Jerardi, Karen; Unaka, Ndidi; Sucharew, Heidi; Klein, Melissa

    2015-06-01

    Residents play a critical role in the education of peers and medical students, yet attainment of teaching skills is not routinely assessed. The primary aim of this study was to develop a novel, skill-based Observed Structured Teaching Evaluation (OSTE) and self-assessment survey to measure the impact of a resident-as-teacher curriculum on teaching competency. The secondary aim was to determine interrater reliability of the OSTE. A prospective study quantitatively assessed intern teaching competency via videotaped teaching encounters (videos) before and after a month-long hospital medicine rotation and self-assessment surveys over a 5-month period. The intervention group received the resident-as-teacher curriculum. Videos were evaluated by 2 blinded faculty via an OSTE covering 9 skills within 3 core components: preparation, teaching, and reflection. Pre- to post-HM rotation month differences were evaluated within and between groups using the Wilcoxon signed rank test and Wilcoxon rank-sum test, respectively. Twenty-two of 25 (88%) control and 27 of 28 (96%) intervention interns participated; 100% of participants completed the study. The intervention group's pre-post difference for the total OSTE score and the average self-assessed competence statistically improved; however, no significant difference was seen between groups. The difference in preparation scores was significant for the intervention compared with the control. The OSTE's interrater reliability demonstrated good agreement with weighted kappas of 0.86 for preparation, 0.71 for teaching, and 0.93 for reflection. Implementation of an objective, skill-based OSTE detected observable changes in interns' teaching competency after implementation of a brief resident-as-teacher curriculum. The OSTE's good interrater reliability may allow standardized assessment of skill attainment over time. Copyright © 2015 by the American Academy of Pediatrics.

  5. A New Algorithm Using Cross-Assignment for Label-Free Quantitation with LC/LTQ-FT MS

    PubMed Central

    Andreev, Victor P.; Li, Lingyun; Cao, Lei; Gu, Ye; Rejtar, Tomas; Wu, Shiaw-Lin; Karger, Barry L.

    2008-01-01

    A new algorithm is described for label-free quantitation of relative protein abundances across multiple complex proteomic samples. Q-MEND is based on the denoising and peak picking algorithm, MEND, previously developed in our laboratory. Q-MEND takes advantage of the high resolution and mass accuracy of the hybrid LTQ-FT MS mass spectrometer (or other high resolution mass spectrometers, such as a Q-TOF MS). The strategy, termed “cross-assignment”, is introduced to increase substantially the number of quantitated proteins. In this approach, all MS/MS identifications for the set of analyzed samples are combined into a master ID list, and then each LC/MS run is searched for the features that can be assigned to a specific identification from that master list. The reliability of quantitation is enhanced by quantitating separately all peptide charge states, along with a scoring procedure to filter out less reliable peptide abundance measurements. The effectiveness of Q-MEND is illustrated in the relative quantitative analysis of E. coli samples spiked with known amounts of non-E. coli protein digests. A mean quantitation accuracy of 7% and mean precision of 15% is demonstrated. Q-MEND can perform relative quantitation of a set of LC/MS datasets without manual intervention and can generate files compatible with the Guidelines for Proteomic Data Publication. PMID:17441747

  6. A new algorithm using cross-assignment for label-free quantitation with LC-LTQ-FT MS.

    PubMed

    Andreev, Victor P; Li, Lingyun; Cao, Lei; Gu, Ye; Rejtar, Tomas; Wu, Shiaw-Lin; Karger, Barry L

    2007-06-01

    A new algorithm is described for label-free quantitation of relative protein abundances across multiple complex proteomic samples. Q-MEND is based on the denoising and peak picking algorithm, MEND, previously developed in our laboratory. Q-MEND takes advantage of the high resolution and mass accuracy of the hybrid LTQ-FT MS mass spectrometer (or other high-resolution mass spectrometers, such as a Q-TOF MS). The strategy, termed "cross-assignment", is introduced to increase substantially the number of quantitated proteins. In this approach, all MS/MS identifications for the set of analyzed samples are combined into a master ID list, and then each LC-MS run is searched for the features that can be assigned to a specific identification from that master list. The reliability of quantitation is enhanced by quantitating separately all peptide charge states, along with a scoring procedure to filter out less reliable peptide abundance measurements. The effectiveness of Q-MEND is illustrated in the relative quantitative analysis of Escherichia coli samples spiked with known amounts of non-E. coli protein digests. A mean quantitation accuracy of 7% and mean precision of 15% is demonstrated. Q-MEND can perform relative quantitation of a set of LC-MS data sets without manual intervention and can generate files compatible with the Guidelines for Proteomic Data Publication.
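
    The "cross-assignment" strategy lends itself to a schematic sketch: features from each LC-MS run are matched to the combined master identification list by m/z and retention-time tolerance. This is an outline of the idea only; Q-MEND's actual matching, charge-state handling and reliability scoring are more involved, and all names and tolerances below are assumptions.

    ```python
    def cross_assign(run_features, master_ids, mz_tol_ppm=5.0, rt_tol_min=1.0):
        """Assign (mz, rt, abundance) features of one run to peptide IDs from the master list.

        `master_ids` maps a peptide ID to its reference (mz, rt); the most abundant
        feature within tolerance is kept for each ID."""
        assignments = {}
        for peptide_id, (mz_ref, rt_ref) in master_ids.items():
            for mz, rt, abundance in run_features:
                mz_match = abs(mz - mz_ref) / mz_ref * 1e6 <= mz_tol_ppm
                rt_match = abs(rt - rt_ref) <= rt_tol_min
                if mz_match and rt_match and abundance > assignments.get(peptide_id, 0.0):
                    assignments[peptide_id] = abundance
        return assignments

    master = {"PEPTIDE_A/2+": (523.774, 35.2), "PEPTIDE_B/2+": (610.302, 48.9)}
    run = [(523.776, 35.0, 1.8e6), (610.310, 60.0, 9.0e5), (700.500, 20.0, 3.0e5)]
    print(cross_assign(run, master))   # only PEPTIDE_A/2+ matches within tolerance
    ```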

  7. Evaluation of quantitative PCR measurement of bacterial colonization of epithelial cells.

    PubMed

    Schmidt, Marcin T; Olejnik-Schmidt, Agnieszka K; Myszka, Kamila; Borkowska, Monika; Grajek, Włodzimierz

    2010-01-01

    Microbial colonization is an important step in establishing pathogenic or probiotic relations with host cells and in biofilm formation on industrial or medical devices. The aim of this work was to verify the applicability of quantitative PCR (real-time PCR) for measuring bacterial colonization of epithelial cells. Salmonella enterica and the Caco-2 intestinal epithelial cell line were used as a model. To verify the sensitivity of the assay, competition between the pathogen cells and a probiotic microorganism was tested. The qPCR method was compared to the plate count and radiolabel approaches, which are well-established techniques in this area of research. The three methods returned similar results. The radiolabel method had the best quantification accuracy, followed by qPCR. The plate count results showed a coefficient of variation twice as high as that of qPCR. Quantitative PCR proved to be a reliable method for the enumeration of microbes in colonization assays. It has several advantages that make it very useful for analyzing mixed populations, where several different species or even strains can be monitored at the same time.

  8. Quantitative analysis of facial paralysis using local binary patterns in biomedical videos.

    PubMed

    He, Shu; Soraghan, John J; O'Reilly, Brian F; Xing, Dongshan

    2009-07-01

    Facial paralysis is the loss of voluntary muscle movement of one side of the face. A quantitative, objective, and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents a novel framework for objective measurement of facial paralysis. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on the local binary patterns (LBPs) on the temporal-spatial domain in each facial region. These features are temporally and spatially enhanced by the application of novel block processing schemes. A multiresolution extension of uniform LBP is proposed to efficiently combine the micropatterns and large-scale patterns into a feature vector. The symmetry of facial movements is measured by the resistor-average distance (RAD) between LBP features extracted from the two sides of the face. Support vector machine is applied to provide quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) scale. The proposed method is validated by experiments with 197 subject videos, which demonstrates its accuracy and efficiency.
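
    A hedged sketch of the core feature step: a uniform LBP histogram for one facial region and the resistor-average distance between corresponding regions on the two sides of the face. The parameters (P, R, bin count) are assumptions, and the paper's multiresolution extension and block processing schemes are not reproduced.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(region, P=8, R=1):
        """Normalized uniform-LBP histogram for a grayscale facial region."""
        codes = local_binary_pattern(region, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist + 1e-12                        # avoid empty bins for the divergence below

    def resistor_average_distance(p, q):
        """RAD between two histograms: harmonic combination of the two KL divergences."""
        kl_pq = np.sum(p * np.log(p / q))
        kl_qp = np.sum(q * np.log(q / p))
        return 1.0 / (1.0 / kl_pq + 1.0 / kl_qp)   # assumes the two histograms differ

    rng = np.random.default_rng(0)
    left = (rng.random((64, 64)) * 255).astype("uint8")
    right = (rng.random((64, 64)) * 255).astype("uint8")
    print(resistor_average_distance(lbp_histogram(left), lbp_histogram(right)))
    ```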

  9. Affordable, automatic quantitative fall risk assessment based on clinical balance scales and Kinect data.

    PubMed

    Colagiorgio, P; Romano, F; Sardi, F; Moraschini, M; Sozzi, A; Bejor, M; Ricevuti, G; Buizza, A; Ramat, S

    2014-01-01

    Correct fall risk assessment is becoming increasingly critical as the population ages. Despite the availability of approaches that allow a quantitative analysis of human movement control performance, clinical assessment and diagnosis of fall risk still rely mostly on non-quantitative examinations, such as clinical scales. This work documents our current effort to develop a novel method to assess balance control abilities through a system implementing an automatic evaluation of exercises drawn from balance assessment scales. Our aim is to overcome the classical limits of these scales, i.e. limited granularity and limited inter-/intra-examiner reliability, and to obtain objective scores and more detailed information for predicting fall risk. We used Microsoft Kinect to record subjects' movements while they performed challenging exercises drawn from clinical balance scales. We then computed a set of parameters quantifying the execution of the exercises and fed them to a supervised classifier to perform a classification based on the clinical score. We obtained good accuracy (~82%) and especially high sensitivity (~83%).
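
    The abstract does not name the supervised classifier, so the sketch below stands in with a random forest on a hypothetical feature matrix (for example sway, trunk-angle and transfer-timing features computed from Kinect joint trajectories), purely to illustrate the classification step against clinical-scale labels.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical data: one row per subject, columns are movement features
    # extracted from Kinect skeleton recordings; labels come from the clinical scale.
    rng = np.random.default_rng(1)
    features = rng.normal(size=(60, 8))
    labels = rng.integers(0, 2, size=60)        # 1 = at risk, 0 = not at risk (placeholder)

    classifier = RandomForestClassifier(n_estimators=200, random_state=0)
    accuracy = cross_val_score(classifier, features, labels, cv=5, scoring="accuracy").mean()
    print(round(accuracy, 2))
    ```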

  10. Use of a deuterated internal standard with pyrolysis-GC/MS dimeric marker analysis to quantify tire tread particles in the environment.

    PubMed

    Unice, Kenneth M; Kreider, Marisa L; Panko, Julie M

    2012-11-08

    Pyrolysis (pyr)-GC/MS analysis of characteristic thermal decomposition fragments has previously been used for qualitative fingerprinting of organic sources in environmental samples. A quantitative pyr-GC/MS method based on characteristic tire polymer pyrolysis products was developed for tread particle quantification in environmental matrices including soil, sediment, and air. The feasibility of quantitative pyr-GC/MS analysis of tread was confirmed in a method evaluation study using artificial soil spiked with known amounts of cryogenically generated tread. Tread concentration determined by blinded analyses was highly correlated (r² ≥ 0.88) with the known tread spike concentration. Two critical refinements to the initial pyrolysis protocol were identified: use of an internal standard and quantification by the dimeric markers vinylcyclohexene and dipentene, which have good specificity for rubber polymer with no other appreciable environmental sources. A novel use of deuterated internal standards of similar polymeric structure was developed to correct for the variable analyte recovery caused by sample size, matrix effects, and ion source variability. The resulting quantitative pyr-GC/MS protocol is reliable and transferable between laboratories.
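
    A generic internal-standard quantification sketch, illustrating how a deuterated standard corrects for variable recovery: the marker amount is estimated from its peak-area ratio to the internal standard and a calibration-derived response factor, then converted to tread mass per gram of matrix. All names, units and the response-factor form are assumptions, not the published protocol.

    ```python
    def tread_marker_concentration(marker_peak_area, internal_std_peak_area,
                                   internal_std_mass_ug, response_factor,
                                   sample_mass_g):
        """Marker concentration (ug per g of soil/sediment) via the internal-standard ratio.

        `response_factor` is the calibration slope of (marker area / IS area)
        versus (marker mass / IS mass)."""
        area_ratio = marker_peak_area / internal_std_peak_area
        marker_mass_ug = area_ratio / response_factor * internal_std_mass_ug
        return marker_mass_ug / sample_mass_g

    # Illustrative numbers only
    print(round(tread_marker_concentration(4.2e5, 3.0e5, 2.0, 1.1, 0.5), 2))
    ```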

  11. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    NASA Technical Reports Server (NTRS)

    Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

    1994-01-01

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.

  12. Combined qualitative and quantitative research designs.

    PubMed

    Seymour, Jane

    2012-12-01

    Mixed methods research designs have been recognized as important in addressing complexity and are recommended particularly in the development and evaluation of complex interventions. This article reports a review of studies in palliative care published between 2010 and March 2012 that combine qualitative and quantitative approaches. A synthesis of approaches to mixed methods research taken in 28 examples of published research studies of relevance to palliative and supportive care is provided, using a typology based on a classic categorization put forward in 1992. Mixed-method studies are becoming more frequently employed in palliative care research and resonate with the complexity of the palliative care endeavour. Undertaking mixed methods research requires a sophisticated understanding of the research process and recognition of some of the underlying complexities encountered when working with different traditions and perspectives on issues of: sampling, validity, reliability and rigour, different sources of data and different data collection and analysis techniques.

  13. [Multiplex real-time PCR method for rapid detection of Marburg virus and Ebola virus].

    PubMed

    Yang, Yu; Bai, Lin; Hu, Kong-Xin; Yang, Zhi-Hong; Hu, Jian-Ping; Wang, Jing

    2012-08-01

    Marburg virus and Ebola virus cause acute infections with high case fatality rates. A rapid, sensitive method was established to detect Marburg virus and Ebola virus by multiplex real-time fluorescence quantitative PCR. Primers and TaqMan probes were designed from highly conserved sequences of Marburg virus and Ebola virus identified through whole-genome sequence alignment, with the probes labeled with FAM and Texas Red, and the sensitivity of the multiplex real-time quantitative PCR assay was optimized by evaluating different concentrations of primers and probes. The resulting real-time PCR method has a sensitivity of 30.5 copies/microl for the Marburg virus positive plasmid and 28.6 copies/microl for the Ebola virus positive plasmid; Japanese encephalitis virus, Yellow fever virus, and Dengue virus were used to examine the specificity. The multiplex real-time PCR assay provides a sensitive, reliable and efficient method to detect Marburg virus and Ebola virus simultaneously.

  14. Space Transportation Operations: Assessment of Methodologies and Models

    NASA Technical Reports Server (NTRS)

    Joglekar, Prafulla

    2001-01-01

    The systems design process for future space transportation involves understanding multiple variables and their effect on lifecycle metrics. Variables such as technology readiness or potential environmental impact are qualitative, while variables such as reliability, operations costs or flight rates are quantitative. In deciding what new design concepts to fund, NASA needs a methodology that would assess the sum total of all relevant qualitative and quantitative lifecycle metrics resulting from each proposed concept. The objective of this research was to review the state of operations assessment methodologies and models used to evaluate proposed space transportation systems and to develop recommendations for improving them. It was found that, compared to the models available from other sources, the operations assessment methodology recently developed at Kennedy Space Center has the potential to produce a decision support tool that will serve as the industry standard. Towards that goal, a number of areas of improvement in the Kennedy Space Center's methodology are identified.

  15. Space Transportation Operations: Assessment of Methodologies and Models

    NASA Technical Reports Server (NTRS)

    Joglekar, Prafulla

    2002-01-01

    The systems design process for future space transportation involves understanding multiple variables and their effect on lifecycle metrics. Variables such as technology readiness or potential environmental impact are qualitative, while variables such as reliability, operations costs or flight rates are quantitative. In deciding what new design concepts to fund, NASA needs a methodology that would assess the sum total of all relevant qualitative and quantitative lifecycle metrics resulting from each proposed concept. The objective of this research was to review the state of operations assessment methodologies and models used to evaluate proposed space transportation systems and to develop recommendations for improving them. It was found that, compared to the models available from other sources, the operations assessment methodology recently developed at Kennedy Space Center has the potential to produce a decision support tool that will serve as the industry standard. Towards that goal, a number of areas of improvement in the Kennedy Space Center's methodology are identified.

  16. Quantitative analysis of spatial variability of geotechnical parameters

    NASA Astrophysics Data System (ADS)

    Fang, Xing

    2018-04-01

    Geotechnical parameters are the basic inputs of geotechnical engineering design, and they have strong regional characteristics. Their spatial variability has also been recognized and is gradually being introduced into the reliability analysis of geotechnical engineering. Based on the statistical theory of geostatistical spatial information, the spatial variability of geotechnical parameters is quantitatively analyzed, and the correlation coefficients between geotechnical parameters are calculated. A residential district surveyed by the Tianjin Survey Institute was selected as the research object; the area contains 68 boreholes and 9 mechanically stratified layers. The parameters considered are water content, natural unit weight, void ratio, liquid limit, plasticity index, liquidity index, compressibility coefficient, compressive modulus, internal friction angle, cohesion and the SP index. The correlation coefficients of the geotechnical parameters are calculated according to the principles of statistical correlation, and the relationships among the parameters are derived from these coefficients.
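
    The correlation analysis described above amounts to building a parameter correlation matrix across samples; a minimal sketch follows (random placeholder values and hypothetical column names, not the Tianjin data).

        # Sketch of a geotechnical parameter correlation matrix
        # (random placeholder values; column names are hypothetical).
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(3)
        n = 68                                        # one record per borehole, say
        df = pd.DataFrame({
            "water_content":    rng.normal(25, 3, n),
            "void_ratio":       rng.normal(0.8, 0.1, n),
            "plasticity_index": rng.normal(15, 2, n),
            "cohesion_kPa":     rng.normal(20, 5, n),
        })
        print(df.corr(method="pearson").round(2))     # pairwise correlation coefficients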

  17. Rapid and Inexpensive Screening of Genomic Copy Number Variations Using a Novel Quantitative Fluorescent PCR Method

    PubMed Central

    Han, Joan C.; Elsea, Sarah H.; Pena, Heloísa B.; Pena, Sérgio Danilo Junho

    2013-01-01

    Detection of human microdeletion and microduplication syndromes poses a significant burden on public healthcare systems in developing countries. With genome-wide diagnostic assays frequently inaccessible, targeted low-cost PCR-based approaches are preferred. However, their reproducibility depends on equally efficient amplification using a number of target and control primers. To address this, the recently described technique called Microdeletion/Microduplication Quantitative Fluorescent PCR (MQF-PCR) was shown to reliably detect four human syndromes by quantifying DNA amplification in an internally controlled PCR reaction. Here, we confirm its utility in the detection of eight human microdeletion syndromes, including the more common WAGR, Smith-Magenis, and Potocki-Lupski syndromes, with 100% sensitivity and 100% specificity. We present the selection, design, and performance evaluation of detection primers using a variety of approaches. We conclude that MQF-PCR is an easily adaptable method for the detection of human pathological chromosomal aberrations. PMID:24288428

  18. Pulse compression favourable aperiodic infrared imaging approach for non-destructive testing and evaluation of bio-materials

    NASA Astrophysics Data System (ADS)

    Mulaveesala, Ravibabu; Dua, Geetika; Arora, Vanita; Siddiqui, Juned A.; Muniyappa, Amarnath

    2017-05-01

    In recent years, aperiodic, transient, pulse-compression-favourable infrared imaging methodologies have been demonstrated to be reliable, quantitative, remote characterization and evaluation techniques for testing various biomaterials. The present work demonstrates one such technique, frequency modulated thermal wave imaging, for bone diagnostics, particularly for bone covered by tissue, skin and muscle over-layers. Finite element modeling and simulation studies have been carried out to assess the capability of the proposed frequency modulated thermal wave imaging technique to detect density variations in a multi-layered skin-fat-muscle-bone structure. Further, frequency- and time-domain post-processing approaches have been applied to the temporal temperature data in order to improve the detection capabilities of frequency modulated thermal wave imaging.
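
    The pulse-compression step referred to above is, in essence, matched filtering of the recorded thermal response against the frequency-modulated (chirp) excitation; the sketch below is a generic illustration with arbitrary sweep parameters and delay, not the study's simulation set-up.

        # Generic matched-filter (pulse compression) illustration for a chirp excitation
        # (arbitrary sweep parameters and delay; not the paper's FEM configuration).
        import numpy as np

        fs, duration = 10.0, 100.0                    # sampling rate (Hz), sweep length (s)
        t = np.arange(0, duration, 1 / fs)
        f0, f1 = 0.01, 0.1                            # linear sweep, 0.01 Hz to 0.1 Hz
        chirp = np.sin(2 * np.pi * (f0 + (f1 - f0) * t / (2 * duration)) * t)

        rng = np.random.default_rng(4)
        response = 0.5 * np.roll(chirp, 150) + 0.05 * rng.normal(size=t.size)  # delayed, noisy echo

        compressed = np.correlate(response, chirp, mode="full")   # matched filtering
        lag = compressed.argmax() - (t.size - 1)
        print(f"estimated delay: {lag / fs:.1f} s")               # recovers the 15 s shift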

  19. Preliminary evaluation of adhesion strength measurement devices for ceramic/titanium matrix composite bonds

    NASA Technical Reports Server (NTRS)

    Pohlchuck, Bobby; Zeller, Mary V.

    1992-01-01

    The adhesive bond between ceramic cement and a titanium matrix composite substrate to be used in the National Aerospace Plane program is evaluated. Two commercially available adhesion testers, the Sebastian Adherence Tester and the CSEM REVETEST Scratch Tester, are evaluated to determine their suitability for quantitatively measuring adhesion strength. Various thicknesses of cement are applied to several substrates, and bond strengths are determined with both testers. The Sebastian Adherence Tester has provided limited data due to interference from the sample mounting procedure, and has been shown to be incapable of distinguishing adhesion strength from the tensile and shear properties of the cement itself. The data from the scratch tester have been found difficult to interpret due to the porosity and hardness of the cement. Recommendations are proposed for a more reliable adhesion test method.

  20. Rigor or Reliability and Validity in Qualitative Research: Perspectives, Strategies, Reconceptualization, and Recommendations.

    PubMed

    Cypress, Brigitte S

    Issues are still raised, even now in the 21st century, by the persistent concern with achieving rigor in qualitative research. There is also a continuing debate about the analogous terms reliability and validity in naturalistic inquiries as opposed to quantitative investigations. This article presents the concept of rigor in qualitative research using a phenomenological study as an exemplar to further illustrate the process. Elaborating on epistemological and theoretical conceptualizations by Lincoln and Guba, strategies congruent with the qualitative perspective for ensuring validity to establish the credibility of the study are described. A synthesis of the historical development of validity criteria evident in the literature over the years is explored. Recommendations are made that the term rigor be used instead of trustworthiness, that the concepts of reliability and validity be reconceptualized and put to renewed use in qualitative research, that strategies for ensuring rigor be built into the qualitative research process rather than evaluated only after the inquiry, and that qualitative researchers and students alike be proactive and take responsibility for ensuring the rigor of a research study. The insights garnered here will move novice researchers and doctoral students to a better conceptual grasp of the complexity of reliability and validity and their ramifications for qualitative inquiry.

  1. Rating the raters in a mixed model: An approach to deciphering the rater reliability

    NASA Astrophysics Data System (ADS)

    Shang, Junfeng; Wang, Yougui

    2013-05-01

    Rating the raters has attracted extensive attention in recent years. Ratings are quite complex in that subjective assessment and a number of criteria are involved in a rating system. Whenever human judgment is part of the ratings, the inconsistency of ratings is a source of variance in the scores, and it is therefore natural to want to verify the trustworthiness of ratings. Accordingly, estimating rater reliability is of great interest. To facilitate the evaluation of rater reliability in a rating system, we propose a mixed model in which the scores of the ratees offered by a rater are described by fixed effects determined by the ability of the ratees and random effects produced by the disagreement of the raters. In this mixed model, we derive the posterior distribution of the rater random effects for their prediction. To make a quantitative decision in revealing unreliable raters, the predictive influence function (PIF) serves as a criterion that compares the posterior distributions of the random effects between the full data set and the rater-deleted data sets. The benchmark for this criterion is also discussed. The proposed methodology for deciphering rater reliability is investigated in multiple simulated data sets and two real data sets.
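
    The deletion-based idea behind the rater diagnostics can be illustrated with a much simpler surrogate (simulated data and an ad hoc flagging rule; this is not the paper's predictive influence function): simulate scores with rater random effects and flag raters whose estimated effect departs strongly from the consensus.

        # Illustrative surrogate for rater-reliability diagnostics in a mixed model
        # (simulated data; not the paper's predictive influence function).
        import numpy as np

        rng = np.random.default_rng(1)
        n_ratees, n_raters = 50, 8
        ability = rng.normal(0, 1, n_ratees)            # fixed effects (ratee ability)
        rater_effect = rng.normal(0, 0.3, n_raters)     # random effects (rater disagreement)
        rater_effect[3] = 1.5                           # one deliberately unreliable rater

        scores = (ability[:, None] + rater_effect[None, :]
                  + rng.normal(0, 0.5, (n_ratees, n_raters)))

        # Estimate each rater's effect as the mean deviation from the ratee means,
        # then flag raters whose estimate is far from the group consensus.
        deviations = scores - scores.mean(axis=1, keepdims=True)
        est_effect = deviations.mean(axis=0)
        z = (est_effect - est_effect.mean()) / est_effect.std()
        print("suspect raters:", np.where(np.abs(z) > 2)[0])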

  2. On Trust Evaluation in Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Nguyen, Dang Quan; Lamont, Louise; Mason, Peter C.

    Trust has been considered a social relationship between two individuals in human society. But as computer science and networking have succeeded in using computers to automate many tasks, the concept of trust can be generalized to cover the reliability and relationships of non-human interactions, such as information gathering and data routing. This paper investigates the evaluation of trust in the context of ad hoc networks. Nodes evaluate each other's behaviour based on observables. A node then decides whether to trust another node to have certain innate abilities. We show how accurate such an evaluation can be. We also provide the minimum number of observations required to obtain an accurate evaluation, a result that indicates that observation-based trust in ad hoc networks will remain a challenging problem. The impact of making networking decisions using trust evaluation on network connectivity is also examined. In this manner, quantitative decisions can be made concerning trust-based routing with knowledge of the potential impact on connectivity.
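
    One common way to formalise observation-based trust of this kind is a Beta-reputation estimate, in which the number of observations directly controls the width of the trust estimate; the sketch below is our illustration, not necessarily the model analysed in the paper.

        # Beta-reputation trust estimate from behavioural observations
        # (an illustrative model, not necessarily the one analysed in the paper).
        from math import sqrt

        def trust_estimate(n_good, n_bad):
            """Posterior mean and standard deviation of trust under a Beta(1,1) prior."""
            a, b = 1 + n_good, 1 + n_bad
            mean = a / (a + b)
            var = a * b / ((a + b) ** 2 * (a + b + 1))
            return mean, sqrt(var)

        for n in (5, 50, 500):                         # more observations -> tighter estimate
            m, s = trust_estimate(n_good=int(0.8 * n), n_bad=n - int(0.8 * n))
            print(f"{n:4d} observations: trust = {m:.2f} +/- {s:.2f}")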

  3. The challenge of identifying greenhouse gas-induced climatic change

    NASA Technical Reports Server (NTRS)

    Maccracken, Michael C.

    1992-01-01

    Meeting the challenge of identifying greenhouse gas-induced climatic change involves three steps. First, observations of critical variables must be assembled, evaluated, and analyzed to determine that there has been a statistically significant change. Second, reliable theoretical (model) calculations must be conducted to provide a definitive set of changes for which to search. Third, a quantitative and statistically significant association must be made between the projected and observed changes to exclude the possibility that the changes are due to natural variability or other factors. This paper provides a qualitative overview of scientific progress in successfully fulfilling these three steps.

  4. [Characteristics of quantitative values of regional factors of exposure in the studied areas].

    PubMed

    Rakhmanin, Iu A; Shashina, T A; Ungurianu, T N; Novikov, S M; Skvortsova, N S; Matsiuk, A V; Legostaeva, T B; Antipanova, N A

    2012-01-01

    The paper presents the results of a comparative evaluation, in seven areas of different federal districts of Russia, of Russian population exposure factors against the standard factors recommended by the US EPA. For the adult population the differences reach 3.5-fold, and for children (1-6 years) 4.2-fold. An example of the effect of regional versus standard factors on levels of exposure and risk is considered. Promising areas for further research on regional factors, aimed at improving the accuracy and reliability of forecast assessments of risks to public health, have been identified.

  5. Quantitative analysis of drug distribution by ambient mass spectrometry imaging method with signal extinction normalization strategy and inkjet-printing technology.

    PubMed

    Luo, Zhigang; He, Jingjing; He, Jiuming; Huang, Lan; Song, Xiaowei; Li, Xin; Abliz, Zeper

    2018-03-01

    Quantitative mass spectrometry imaging (MSI) is a robust approach that provides both quantitative and spatial information for drug candidate research. However, because of complicated signal suppression and interference, acquiring accurate quantitative information from MSI data remains a challenge, especially for whole-body tissue samples. Ambient MSI techniques using spray-based ionization appear to be ideal for pharmaceutical quantitative MSI analysis, but they are also more challenging, as they involve almost no sample preparation and are more susceptible to ion suppression/enhancement. Herein, based on our previously developed air flow-assisted desorption electrospray ionization (AFADESI)-MSI technology, an ambient quantitative MSI method was introduced by integrating inkjet-printing technology with normalization of the signal extinction coefficient (SEC) using the target compound itself. The method uses a single calibration curve to quantify multiple tissue types. Basic blue 7 and an antitumor drug candidate (S-(+)-deoxytylophorinidine, CAT) were chosen to initially validate the feasibility and reliability of the quantitative MSI method. Tissue sections (heart, kidney, and brain) from rats administered CAT were then analyzed. The quantitative MSI results were cross-validated against LC-MS/MS data from the same tissues. Their consistency suggests that the approach can rapidly obtain quantitative MSI data without introducing interference into the in-situ environment of the tissue sample, and that it has the potential to provide a high-throughput, economical and reliable approach for drug discovery and development. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Development of the Chinese version of the Hospital Autonomy Questionnaire: a cross-sectional study in Guangdong Province

    PubMed Central

    Liu, Zifeng; Yuan, Lianxiong; Huang, Yixiang; Zhang, Lingling; Luo, Futian

    2016-01-01

    Objective We aimed to develop a questionnaire for quantitative evaluation of the autonomy of public hospitals in China. Method An extensive literature review was conducted to select possible items for inclusion in the questionnaire, which was then reviewed by 5 experts. After a two-round Delphi method, we distributed the questionnaire to 404 secondary and tertiary hospitals in Guangdong Province, China, and 379 completed questionnaires were collected. The final questionnaire was then developed on the basis of the results of exploratory and confirmatory factor analysis. Results Analysis suggested that all internal consistency reliabilities exceeded the minimum reliability standard of 0.70 for the α coefficient. The overall scale coefficient was 0.87, and the 6 subscale coefficients were 0.92 (strategic management), 0.81 (budget and expenditure), 0.85 (financing), 0.75 (medical management), 0.86 (human resources) and 0.86 (accountability). Correlation coefficients between and among items and their hypothesised subscales were higher than those with other subscales. The value of average variance extracted (AVE) was higher than 0.5, the value of construct reliability (CR) was higher than 0.7, and the square roots of the AVE of each subscale were larger than the correlation of the specific subscale with the other subscales, supporting the convergent and discriminant validity of the Chinese version of the Hospital Autonomy Questionnaire (CVHAQ). The model fit indices were all acceptable: χ2/df=1.73, Goodness of Fit Index (GFI) = 0.93, Adjusted Goodness of Fit Index (AGFI) = 0.91, Non-Normed Fit Index (NNFI) = 0.96, Comparative Fit Index (CFI) = 0.97, Root Mean Square Error of Approximation (RMSEA) = 0.04, Standardised Root Mean Square Residual (SRMR) = 0.07. Conclusions This study demonstrated the reliability and validity of the CVHAQ and provides a quantitative method for the assessment of hospital autonomy. PMID:26911587
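
    For reference, the convergent-validity statistics reported above (AVE and construct reliability) follow the standard Fornell-Larcker formulas; the loadings in the sketch below are invented, not the CVHAQ estimates.

        # Average variance extracted (AVE) and construct reliability (CR) from
        # standardised factor loadings (invented loadings; not the CVHAQ results).
        def ave_and_cr(loadings):
            squared = [l ** 2 for l in loadings]
            ave = sum(squared) / len(loadings)
            cr = sum(loadings) ** 2 / (sum(loadings) ** 2 + sum(1 - s for s in squared))
            return ave, cr

        loadings = [0.78, 0.81, 0.74, 0.69, 0.83]       # one subscale's item loadings
        ave, cr = ave_and_cr(loadings)
        print(f"AVE = {ave:.2f}, CR = {cr:.2f}")        # expect AVE > 0.5 and CR > 0.7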

  7. Testing the reliability and efficiency of the pilot Mixed Methods Appraisal Tool (MMAT) for systematic mixed studies review.

    PubMed

    Pace, Romina; Pluye, Pierre; Bartlett, Gillian; Macaulay, Ann C; Salsberg, Jon; Jagosh, Justin; Seller, Robbyn

    2012-01-01

    Systematic literature reviews identify, select, appraise, and synthesize relevant literature on a particular topic. Typically, these reviews examine primary studies based on similar methods, e.g., experimental trials. In contrast, interest in a new form of review, known as mixed studies review (MSR), which includes qualitative, quantitative, and mixed methods studies, is growing. In MSRs, reviewers appraise studies that use different methods allowing them to obtain in-depth answers to complex research questions. However, appraising the quality of studies with different methods remains challenging. To facilitate systematic MSRs, a pilot Mixed Methods Appraisal Tool (MMAT) has been developed at McGill University (a checklist and a tutorial), which can be used to concurrently appraise the methodological quality of qualitative, quantitative, and mixed methods studies. The purpose of the present study is to test the reliability and efficiency of a pilot version of the MMAT. The Center for Participatory Research at McGill conducted a systematic MSR on the benefits of Participatory Research (PR). Thirty-two PR evaluation studies were appraised by two independent reviewers using the pilot MMAT. Among these, 11 (34%) involved nurses as researchers or research partners. Appraisal time was measured to assess efficiency. Inter-rater reliability was assessed by calculating a kappa statistic based on dichotomized responses for each criterion. An appraisal score was determined for each study, which allowed the calculation of an overall intra-class correlation. On average, it took 14 min to appraise a study (excluding the initial reading of articles). Agreement between reviewers was moderate to perfect with regards to MMAT criteria, and substantial with respect to the overall quality score of appraised studies. The MMAT is unique, thus the reliability of the pilot MMAT is promising, and encourages further development. Copyright © 2011 Elsevier Ltd. All rights reserved.
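
    For reference, inter-rater agreement of the kind reported above (kappa on dichotomised responses) can be computed as in the sketch below; the ratings shown are invented, not the MMAT study data.

        # Cohen's kappa for two reviewers' dichotomised responses on one criterion
        # (invented ratings for illustration; not the MMAT study data).
        def cohens_kappa(r1, r2):
            n = len(r1)
            p_obs = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
            p_exp = (sum(r1) / n) * (sum(r2) / n) \
                    + (1 - sum(r1) / n) * (1 - sum(r2) / n)          # chance agreement
            return (p_obs - p_exp) / (1 - p_exp)

        reviewer_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
        reviewer_2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
        print(round(cohens_kappa(reviewer_1, reviewer_2), 2))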

  8. 76 FR 63301 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-12

    ... INFORMATION: Title: Comparative Effectiveness Research Inventory. Abstract: The information collection... will not be used for quantitative information collections that are designed to yield reliably... mechanisms that are designed to yield quantitative results. The Agency received no comments in response to...

  9. The influence of biological and technical factors on quantitative analysis of amyloid PET: Points to consider and recommendations for controlling variability in longitudinal data.

    PubMed

    Schmidt, Mark E; Chiao, Ping; Klein, Gregory; Matthews, Dawn; Thurfjell, Lennart; Cole, Patricia E; Margolin, Richard; Landau, Susan; Foster, Norman L; Mason, N Scott; De Santi, Susan; Suhy, Joyce; Koeppe, Robert A; Jagust, William

    2015-09-01

    In vivo imaging of amyloid burden with positron emission tomography (PET) provides a means for studying the pathophysiology of Alzheimer's and related diseases. Measurement of subtle changes in amyloid burden requires quantitative analysis of image data. Reliable quantitative analysis of amyloid PET scans acquired at multiple sites and over time requires rigorous standardization of acquisition protocols, subject management, tracer administration, image quality control, and image processing and analysis methods. We review critical points in the acquisition and analysis of amyloid PET, identify ways in which technical factors can contribute to measurement variability, and suggest methods for mitigating these sources of noise. Improved quantitative accuracy could reduce the sample size necessary to detect intervention effects when amyloid PET is used as a treatment end point and allow more reliable interpretation of change in amyloid burden and its relationship to clinical course. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Study protocol of psychometric properties of the Spanish translation of a competence test in evidence based practice: the Fresno test.

    PubMed

    Argimon-Pallàs, Josep M; Flores-Mateo, Gemma; Jiménez-Villa, Josep; Pujol-Ribera, Enriqueta; Foz, Gonçal; Bundó-Vidiella, Magda; Juncosa, Sebastià; Fuentes-Bellido, Cruz M; Pérez-Rodríguez, Belén; Margalef-Pallarès, Francesc; Villafafila-Ferrero, Rosa; Forès-Garcia, Dolors; Roman-Martínez, Josep; Vilert-Garroga, Esther

    2009-02-24

    There are few high-quality instruments with objective outcome measures for evaluating the effectiveness of Evidence-Based Practice (EBP) curricula. The Fresno test is an instrument that evaluates most EBP steps with high reliability and validity in the original English version. The present study aims to translate the Fresno questionnaire into Spanish and subsequently validate it to ensure the equivalence of the Spanish version with the English original. The questionnaire will be translated using the back-translation technique and tested in Primary Care Teaching Units in Catalonia (PCTU). Participants will be: (a) tutors of Family Medicine residents (expert group); (b) Family Medicine residents in their second year of the Family Medicine training program (novice group); and (c) Family Medicine physicians (intermediate group). The questionnaire will be administered before and after an educational intervention consisting of four interactive half-day sessions designed to develop the knowledge and skills required for EBP. Responsiveness statistics used in the analysis will be the effect size, the standardised response mean and Guyatt's method. For internal consistency reliability, two measures will be used: corrected item-total correlations and Cronbach's alpha. Inter-rater reliability will be tested using the Kappa coefficient for qualitative items and the intra-class correlation coefficient for quantitative items and the overall score. Construct validity, item difficulty, item discrimination and feasibility will be determined. Validation of the Fresno questionnaire in different languages will enable wider use of the questionnaire, as well as allowing comparison between countries and the evaluation of different teaching models.
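
    Of the planned statistics, Cronbach's alpha is the most mechanical to reproduce; the sketch below uses invented item scores, not Fresno test data.

        # Cronbach's alpha from a respondents-by-items score matrix
        # (invented scores for illustration; not Fresno test data).
        import numpy as np

        def cronbach_alpha(scores):
            """scores: (n_respondents, n_items) array of item scores."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_var = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var / total_var)

        scores = np.array([[3, 4, 3, 5], [2, 2, 3, 3], [4, 5, 4, 5],
                           [1, 2, 2, 2], [3, 3, 4, 4]])
        print(round(cronbach_alpha(scores), 2))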

  11. Magnetic resonance imaging can accurately assess the long-term progression of knee structural changes in experimental dog osteoarthritis.

    PubMed

    Boileau, C; Martel-Pelletier, J; Abram, F; Raynauld, J-P; Troncy, E; D'Anjou, M-A; Moreau, M; Pelletier, J-P

    2008-07-01

    Osteoarthritis (OA) structural changes take place over decades in humans. MRI can provide precise and reliable information on the joint structure and changes over time. In this study, we investigated the reliability of quantitative MRI in assessing knee OA structural changes in the experimental anterior cruciate ligament (ACL) dog model of OA. OA was surgically induced by transection of the ACL of the right knee in five dogs. High resolution three dimensional MRI using a 1.5 T magnet was performed at baseline, 4, 8 and 26 weeks post surgery. Cartilage volume/thickness, cartilage defects, trochlear osteophyte formation and subchondral bone lesion (hypersignal) were assessed on MRI images. Animals were killed 26 weeks post surgery and macroscopic evaluation was performed. There was a progressive and significant increase over time in the loss of knee cartilage volume, the cartilage defect and subchondral bone hypersignal. The trochlear osteophyte size also progressed over time. The greatest cartilage loss at 26 weeks was found on the tibial plateaus and in the medial compartment. There was a highly significant correlation between total knee cartilage volume loss or defect and subchondral bone hypersignal, and also a good correlation between the macroscopic and the MRI findings. This study demonstrated that MRI is a useful technology to provide a non-invasive and reliable assessment of the joint structural changes during the development of OA in the ACL dog model. The combination of this OA model with MRI evaluation provides a promising tool for the evaluation of new disease-modifying osteoarthritis drugs (DMOADs).

  12. Using multiple PCR and CE with chemiluminescence detection for simultaneous qualitative and quantitative analysis of genetically modified organism.

    PubMed

    Guo, Longhua; Qiu, Bin; Chi, Yuwu; Chen, Guonan

    2008-09-01

    In this paper, an ultrasensitive CE-CL detection system coupled with a novel double-on-column coaxial flow detection interface was developed for the detection of PCR products. A reliable procedure based on this system had been demonstrated for qualitative and quantitative analysis of genetically modified organism-the detection of Roundup Ready Soy (RRS) samples was presented as an example. The promoter, terminator, function and two reference genes of RRS were amplified with multiplex PCR simultaneously. After that, the multiplex PCR products were labeled with acridinium ester at the 5'-terminal through an amino modification and then analyzed by the proposed CE-CL system. Reproducibility of analysis times and peak heights for the CE-CL analysis were determined to be better than 0.91 and 3.07% (RSD, n=15), respectively, for three consecutive days. It was shown that this method could accurately and qualitatively detect RRS standards and the simulative samples. The evaluation in terms of quantitative analysis of RRS provided by this new method was confirmed by comparing our assay results with those of the standard real-time quantitative PCR (RT-QPCR) using SYBR Green I dyes. The results showed a good coherence between the two methods. This approach demonstrated the possibility for accurate qualitative and quantitative detection of GM plants in a single run.

  13. Technical and clinical view on ambulatory assessment in Parkinson's disease.

    PubMed

    Hobert, M A; Maetzler, W; Aminian, K; Chiari, L

    2014-09-01

    With the progress of technologies in recent years, methods have become available that use wearable sensors and ambulatory systems to measure aspects of motor function, particularly axial motor function. As Parkinson's disease (PD) can be considered a model disorder for motor impairment, a significant number of studies have already been performed with these patients using such techniques. In general, motion sensors such as accelerometers and gyroscopes are used, in combination with lightweight electronics that do not interfere with normal human motion. A fundamental advantage in comparison with usual clinical assessment is that these sensors allow a more quantitative, objective, and reliable evaluation of symptoms; they also have significant advantages compared to in-lab technologies (e.g., optoelectronic motion capture), as they allow long-term monitoring under real-life conditions. In addition, based on recent findings, particularly from studies using functional imaging, we have learned that non-motor symptoms, specifically cognitive aspects, may be at least indirectly assessable. It is hypothesized that ambulatory quantitative assessment strategies will allow users, clinicians, and scientists in the future to gain more quantitative, unobtrusive, and everyday-relevant data from their clinical evaluations, and can also be designed as pervasive (everywhere) and intensive (anytime) tools for ambulatory assessment and even rehabilitation of motor and (partly) non-motor symptoms in PD. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. How important is aspirin adherence when evaluating effectiveness of low-dose aspirin?

    PubMed

    Navaratnam, Kate; Alfirevic, Zarko; Pirmohamed, Munir; Alfirevic, Ana

    2017-12-01

    Low-dose aspirin (LDA) is advocated for women at high risk of pre-eclampsia, providing a modest, 10%, reduction in risk. Cardiology meta-analyses demonstrate an 18% reduction in serious vascular events with LDA. Non-responsiveness to aspirin (sometimes termed aspirin resistance) and variable clinical effectiveness are often attributed to suboptimal adherence. The aim of this review was to identify the scope of adherence assessments in RCTs evaluating aspirin effectiveness in cardiology and obstetrics and to discuss the quality of information provided by current methods. We searched MEDLINE, EMBASE and the Cochrane Library, limited to humans and English language, for RCTs evaluating aspirin in cardiology (14/03/13-13/03/16) and pregnancy (1957-13/03/16). The search terms 'aspirin' and 'acetylsalicylic acid' appearing adjacent to 'myocardial infarction' or 'pregnancy', 'pregnant', 'obstetric' were used. 38% (25/68) of obstetric and 32% (20/62) of cardiology RCTs assessed aspirin adherence, and 24% (6/25) and 29% (6/21) of obstetric and cardiology RCTs, respectively, defined acceptable adherence. Semi-quantitative methods (pill counts, medication weighing) prevailed in obstetric RCTs (93%); qualitative methods (interviews, questionnaires) were more frequent in obstetrics (67%). Two obstetric RCTs quantified serum thromboxane B2 and salicylic acid, but no quantitative methods were used in cardiology. Aspirin has proven efficacy, but suboptimal adherence is widespread and difficult to quantify accurately. Little is currently known about aspirin adherence in pregnancy. RCTs evaluating aspirin effectiveness show over-reliance on qualitative adherence assessments that are vulnerable to inherent inaccuracies. Reliable adherence data are important to assess and optimise the clinical effectiveness of LDA. We propose that adherence should be formally assessed in future trials and that the development of quantitative assessments may prove valuable for trial protocols. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Constellation Ground Systems Launch Availability Analysis: Enhancing Highly Reliable Launch Systems Design

    NASA Technical Reports Server (NTRS)

    Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.

    2010-01-01

    Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, within a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability, and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used to calculate failure rates for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to assess compliance with requirements and to highlight design or performance shortcomings for further decision making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability, and maintainability analysis, and present findings and observation based on analysis leading to the Ground Operations Project Preliminary Design Review milestone.
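
    The subsystem-level roll-up described above can be illustrated with a simple steady-state series-availability model (notional MTBF/MTTR values, not the Ground Operations Project allocations or results).

        # Notional series-system availability roll-up from subsystem MTBF/MTTR
        # (illustrative values only; not Constellation allocations or estimates).
        subsystems = {
            "propellant_loading": (2000.0, 8.0),     # (MTBF h, MTTR h)
            "ground_power":       (5000.0, 4.0),
            "environmental_ctrl": (3000.0, 6.0),
        }

        launch_availability = 1.0
        for name, (mtbf, mttr) in subsystems.items():
            a = mtbf / (mtbf + mttr)                 # steady-state availability
            launch_availability *= a                 # series combination
            print(f"{name:20s} A = {a:.4f}")

        print(f"combined availability = {launch_availability:.4f}")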

  16. Body surface posture evaluation: construction, validation and protocol of the SPGAP system (Posture evaluation rotating platform system).

    PubMed

    Schwertner, Debora Soccal; Oliveira, Raul; Mazo, Giovana Zarpellon; Gioda, Fabiane Rosa; Kelber, Christian Roberto; Swarowsky, Alessandra

    2016-05-04

    Several posture evaluation devices have been used to detect deviations of the vertebral column. However, these instruments have been observed to present measurement errors related to the equipment, the environment or the measurement protocol. This study aimed to build and validate the Posture Evaluation Rotating Platform System (SPGAP, Brazilian abbreviation), analyze its reliability and describe a measurement protocol for its use. The posture evaluation system comprises a Posture Evaluation Rotating Platform, a video camera, a calibration support and measurement software. Two pilot studies were carried out with 102 elderly individuals (average age 69 years, SD = ±7.3) to establish a protocol for SPGAP, controlling the measurement errors related to the environment, the equipment and the person under evaluation. Content validation was completed with input from judges with expertise in posture measurement. The coefficient of variation method was used to validate the instrument's measurement of an object with known dimensions. Finally, reliability was established using repeated measurements of the known object. Expert content judges gave the system excellent ratings for content validity (mean 9.4 out of 10; SD 1.13). The measurement of an object with known dimensions indicated excellent validity (all measurement errors <1%) and test-retest reliability. A total of 26 images were needed to stabilize the system. Participants in the pilot studies indicated that they felt comfortable throughout the assessment. Using only one image can yield measurements that underestimate or overestimate reality. For the images of the object with known dimensions, the width and height measurements showed, respectively, coefficients of variation of 0.88 and 2.33, standard deviations of 0.22 and 0.35, and minimum-maximum values of 24.83-25.2 and 14.56-15.75. In the analysis of different (similar) images of the same individual, greater discrepancies were observed. The cervical index, for example, presented minimum and maximum values of 15.38 and 37.5, a coefficient of variation of 0.29 and a standard deviation of 6.78. The SPGAP was shown to be a valid and reliable instrument for the quantitative analysis of body posture, with clinical applicability, since it managed to reduce several measurement errors, among them parallax distortion.

  17. Dose limited reliability of quantitative annular dark field scanning transmission electron microscopy for nano-particle atom-counting.

    PubMed

    De Backer, A; Martinez, G T; MacArthur, K E; Jones, L; Béché, A; Nellist, P D; Van Aert, S

    2015-04-01

    Quantitative annular dark field scanning transmission electron microscopy (ADF STEM) has become a powerful technique to characterise nano-particles on an atomic scale. Because of their limited size and beam sensitivity, the atomic structure of such particles may become extremely challenging to determine. Therefore keeping the incoming electron dose to a minimum is important. However, this may reduce the reliability of quantitative ADF STEM which will here be demonstrated for nano-particle atom-counting. Based on experimental ADF STEM images of a real industrial catalyst, we discuss the limits for counting the number of atoms in a projected atomic column with single atom sensitivity. We diagnose these limits by combining a thorough statistical method and detailed image simulations. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Advancing the Fork detector for quantitative spent nuclear fuel verification

    DOE PAGES

    Vaccaro, S.; Gauld, I. C.; Hu, J.; ...

    2018-01-31

    The Fork detector is widely used by the safeguards inspectorate of the European Atomic Energy Community (EURATOM) and the International Atomic Energy Agency (IAEA) to verify spent nuclear fuel. Fork measurements are routinely performed for safeguards prior to dry storage cask loading. Additionally, spent fuel verification will be required at the facilities where encapsulation is performed for acceptance in the final repositories planned in Sweden and Finland. The use of the Fork detector as a quantitative instrument has not been prevalent due to the complexity of correlating the measured neutron and gamma ray signals with fuel inventories and operator declarations. A spent fuel data analysis module based on the ORIGEN burnup code was recently implemented to provide automated real-time analysis of Fork detector data. This module allows quantitative predictions of expected neutron count rates and gamma units as measured by the Fork detectors using safeguards declarations and available reactor operating data. This study describes field testing of the Fork data analysis module using data acquired from 339 assemblies measured during routine dry cask loading inspection campaigns in Europe. Assemblies include both uranium oxide and mixed-oxide fuel assemblies. More recent measurements of 50 spent fuel assemblies at the Swedish Central Interim Storage Facility for Spent Nuclear Fuel are also analyzed. An evaluation of uncertainties in the Fork measurement data is performed to quantify the ability of the data analysis module to verify operator declarations and to develop quantitative go/no-go criteria for safeguards verification measurements during cask loading or encapsulation operations. The goal of this approach is to provide safeguards inspectors with reliable real-time data analysis tools to rapidly identify discrepancies in operator declarations and to detect potential partial defects in spent fuel assemblies with improved reliability and minimal false positive alarms. Finally, the results are summarized, and sources and magnitudes of uncertainties are identified, and the impact of analysis uncertainties on the ability to confirm operator declarations is quantified.

  19. Advancing the Fork detector for quantitative spent nuclear fuel verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaccaro, S.; Gauld, I. C.; Hu, J.

    The Fork detector is widely used by the safeguards inspectorate of the European Atomic Energy Community (EURATOM) and the International Atomic Energy Agency (IAEA) to verify spent nuclear fuel. Fork measurements are routinely performed for safeguards prior to dry storage cask loading. Additionally, spent fuel verification will be required at the facilities where encapsulation is performed for acceptance in the final repositories planned in Sweden and Finland. The use of the Fork detector as a quantitative instrument has not been prevalent due to the complexity of correlating the measured neutron and gamma ray signals with fuel inventories and operator declarations. A spent fuel data analysis module based on the ORIGEN burnup code was recently implemented to provide automated real-time analysis of Fork detector data. This module allows quantitative predictions of expected neutron count rates and gamma units as measured by the Fork detectors using safeguards declarations and available reactor operating data. This study describes field testing of the Fork data analysis module using data acquired from 339 assemblies measured during routine dry cask loading inspection campaigns in Europe. Assemblies include both uranium oxide and mixed-oxide fuel assemblies. More recent measurements of 50 spent fuel assemblies at the Swedish Central Interim Storage Facility for Spent Nuclear Fuel are also analyzed. An evaluation of uncertainties in the Fork measurement data is performed to quantify the ability of the data analysis module to verify operator declarations and to develop quantitative go/no-go criteria for safeguards verification measurements during cask loading or encapsulation operations. The goal of this approach is to provide safeguards inspectors with reliable real-time data analysis tools to rapidly identify discrepancies in operator declarations and to detect potential partial defects in spent fuel assemblies with improved reliability and minimal false positive alarms. Finally, the results are summarized, and sources and magnitudes of uncertainties are identified, and the impact of analysis uncertainties on the ability to confirm operator declarations is quantified.

  20. Advancing the Fork detector for quantitative spent nuclear fuel verification

    NASA Astrophysics Data System (ADS)

    Vaccaro, S.; Gauld, I. C.; Hu, J.; De Baere, P.; Peterson, J.; Schwalbach, P.; Smejkal, A.; Tomanin, A.; Sjöland, A.; Tobin, S.; Wiarda, D.

    2018-04-01

    The Fork detector is widely used by the safeguards inspectorate of the European Atomic Energy Community (EURATOM) and the International Atomic Energy Agency (IAEA) to verify spent nuclear fuel. Fork measurements are routinely performed for safeguards prior to dry storage cask loading. Additionally, spent fuel verification will be required at the facilities where encapsulation is performed for acceptance in the final repositories planned in Sweden and Finland. The use of the Fork detector as a quantitative instrument has not been prevalent due to the complexity of correlating the measured neutron and gamma ray signals with fuel inventories and operator declarations. A spent fuel data analysis module based on the ORIGEN burnup code was recently implemented to provide automated real-time analysis of Fork detector data. This module allows quantitative predictions of expected neutron count rates and gamma units as measured by the Fork detectors using safeguards declarations and available reactor operating data. This paper describes field testing of the Fork data analysis module using data acquired from 339 assemblies measured during routine dry cask loading inspection campaigns in Europe. Assemblies include both uranium oxide and mixed-oxide fuel assemblies. More recent measurements of 50 spent fuel assemblies at the Swedish Central Interim Storage Facility for Spent Nuclear Fuel are also analyzed. An evaluation of uncertainties in the Fork measurement data is performed to quantify the ability of the data analysis module to verify operator declarations and to develop quantitative go/no-go criteria for safeguards verification measurements during cask loading or encapsulation operations. The goal of this approach is to provide safeguards inspectors with reliable real-time data analysis tools to rapidly identify discrepancies in operator declarations and to detect potential partial defects in spent fuel assemblies with improved reliability and minimal false positive alarms. The results are summarized, and sources and magnitudes of uncertainties are identified, and the impact of analysis uncertainties on the ability to confirm operator declarations is quantified.
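
    A go/no-go comparison of the kind described can be reduced, in the simplest case, to a ratio test between measured and predicted count rates with an uncertainty-based tolerance; the sketch below uses notional numbers and a notional tolerance, not the analysis module's actual criteria.

        # Notional go/no-go check of measured vs. predicted Fork neutron count rates
        # (illustrative tolerance and values; not the analysis module's criteria).
        def verify_assembly(measured_rate, predicted_rate, rel_uncertainty=0.10, k=3.0):
            """Flag a discrepancy if the measured/predicted ratio deviates from 1
            by more than k times the combined relative uncertainty."""
            ratio = measured_rate / predicted_rate
            return "go" if abs(ratio - 1.0) <= k * rel_uncertainty else "no-go"

        print(verify_assembly(measured_rate=1.05e4, predicted_rate=1.00e4))  # -> go
        print(verify_assembly(measured_rate=6.0e3,  predicted_rate=1.00e4))  # -> no-go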

  1. HPAEC-PAD quantification of Haemophilus influenzae type b polysaccharide in upstream and downstream samples.

    PubMed

    van der Put, Robert M F; de Haan, Alex; van den IJssel, Jan G M; Hamidi, Ahd; Beurret, Michel

    2015-11-27

    Due to the rapidly increasing introduction of Haemophilus influenzae type b (Hib) and other conjugate vaccines worldwide during the last decade, reliable and robust analytical methods are needed for the quantitative monitoring of intermediate samples generated during fermentation (upstream processing, USP) and purification (downstream processing, DSP) of polysaccharide vaccine components. This study describes the quantitative characterization of in-process control (IPC) samples generated during the fermentation and purification of the capsular polysaccharide (CPS), polyribosyl-ribitol-phosphate (PRP), derived from Hib. Reliable quantitative methods are necessary for all stages of production; otherwise accurate process monitoring and validation is not possible. Prior to the availability of high performance anion exchange chromatography methods, this polysaccharide was predominantly quantified either with immunochemical methods, or with the colorimetric orcinol method, which shows interference from fermentation medium components and reagents used during purification. Next to an improved high performance anion exchange chromatography-pulsed amperometric detection (HPAEC-PAD) method, using a modified gradient elution, both the orcinol assay and high performance size exclusion chromatography (HPSEC) analyses were evaluated. For DSP samples, it was found that the correlation between the results obtained by HPAEC-PAD specific quantification of the PRP monomeric repeat unit released by alkaline hydrolysis, and those from the orcinol method was high (R(2)=0.8762), and that it was lower between HPAEC-PAD and HPSEC results. Additionally, HPSEC analysis of USP samples yielded surprisingly comparable results to those obtained by HPAEC-PAD. In the early part of the fermentation, medium components interfered with the different types of analysis, but quantitative HPSEC data could still be obtained, although lacking the specificity of the HPAEC-PAD method. Thus, the HPAEC-PAD method has the advantage of giving a specific response compared to the orcinol assay and HPSEC, and does not show interference from various components that can be present in intermediate and purified PRP samples. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Comparative study between quantitative digital image analysis and fluorescence in situ hybridization of breast cancer equivocal human epidermal growth factor receptors 2 score 2(+) cases.

    PubMed

    Ayad, Essam; Mansy, Mina; Elwi, Dalal; Salem, Mostafa; Salama, Mohamed; Kayser, Klaus

    2015-01-01

    Optimization of the workflow for breast cancer samples with equivocal human epidermal growth factor receptor 2 (HER2)/neu score 2(+) results in routine practice remains a central focus of ongoing efforts to assess HER2 status. According to the College of American Pathologists/American Society of Clinical Oncology guidelines, equivocal HER2/neu score 2(+) cases are subject to further testing, usually by fluorescence in situ hybridization (FISH). It remains an open question whether quantitative digital image analysis of HER2 immunohistochemistry (IHC) stained slides can assist in further refining the HER2 score 2(+). The aim was to assess the utility of quantitative digital analysis of IHC stained slides and compare its performance to FISH in cases of breast cancer with equivocal HER2 score 2(+). Fifteen specimens (previously diagnosed as breast cancer and evaluated as HER2/neu score 2(+)) represented the study population. New sections were cut for re-evaluation by HER2 immunohistochemistry and FISH examination. All cases were digitally scanned with iScan (produced by BioImagene [now Roche-Ventana]). The IHC signals of HER2 were measured using an automated image analyzing system (MECES, www.Diagnomx.eu/meces). Finally, a comparative study was done between the results of FISH and the quantitative analysis of the virtual slides. Three out of the 15 cases with equivocal HER2 score 2(+) turned out to be positive (3(+)) by quantitative digital analysis, and the 12 cases found to be negative were also negative by FISH. Two of the three positive cases proved to be positive with FISH, and only one was negative. Quantitative digital analysis is highly sensitive and relatively specific compared to FISH in detecting HER2/neu overexpression. Therefore, it represents a potentially reliable substitute for FISH in breast cancer cases that require further refinement of equivocal IHC results.

  3. Quantitative PCR for Detection and Enumeration of Genetic Markers of Bovine Fecal Pollution

    EPA Science Inventory

    Accurate assessment of health risks associated with bovine (cattle) fecal pollution requires a reliable host-specific genetic marker and a rapid quantification method. We report the development of quantitative PCR assays for the detection of two recently described cow feces-spec...

  4. A Validity and Reliability Study of the Attitudes toward Sustainable Development Scale

    ERIC Educational Resources Information Center

    Biasutti, Michele; Frate, Sara

    2017-01-01

    This article describes the development and validation of the Attitudes toward Sustainable Development scale, a quantitative 20-item scale that measures Italian university students' attitudes toward sustainable development. A total of 484 undergraduate students completed the questionnaire. The validity and reliability of the scale was statistically…

  5. Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Wolff, David B.

    2009-01-01

    Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates, and hence quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences in concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite a low overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained by radar error alone. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and the radar rain estimation error variance. The results provide a relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various time scales, and are helpful for better understanding the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate space-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement mission and other satellites.
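
    The error variance separation step rests on the assumption that the radar estimation error and the gauge area-point sampling error are independent, so the variance of the radar-gauge difference decomposes additively; a minimal sketch with synthetic numbers (not the TRMM GV data) follows.

        # Sketch of error variance separation for radar-gauge rain-rate differences
        # (synthetic values; assumes independent radar and area-point errors).
        import numpy as np

        rng = np.random.default_rng(2)
        true_rain = rng.gamma(2.0, 2.0, 5000)                    # area-average rain rate (mm/h)
        radar = true_rain + rng.normal(0, 1.0, true_rain.size)   # radar estimation error
        gauge = true_rain + rng.normal(0, 1.5, true_rain.size)   # gauge area-point error

        var_diff = np.var(radar - gauge)             # total variance of the differences
        var_point = 1.5 ** 2                         # area-point error variance (assumed known)
        var_radar = var_diff - var_point             # separated radar error variance
        print(f"radar error variance ~ {var_radar:.2f} (true value 1.00)")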

  6. Development and psychometric evaluation of a women shift workers' reproductive health questionnaire: study protocol for a sequential exploratory mixed-method study.

    PubMed

    Nikpour, Maryam; Tirgar, Aram; Ebadi, Abbas; Ghaffari, Fatemeh; Firouzbakht, Mojgan; Hajiahmadi, Mahmod

    2018-02-06

    Although shift work is a definite threat to female reproductive health, there is currently no standardized instrument for measuring reproductive health among female shift workers. This study aims to develop and evaluate the psychometric properties of a Women Shift Workers' Reproductive Health Questionnaire (WSW-RHQ). This is a sequential exploratory mixed-method study with a qualitative and a quantitative phase. In the qualitative phase, semi-structured interviews will be held with female shift workers living in Mazandaran Province, Iran; in addition, a literature review will be performed by searching electronic databases. Sampling will be carried out in different workplaces, with maximum variation in the female shift workers' age, job, education and economic situation. Interview data will be analyzed using conventional content analysis, and the primary item pool for the questionnaire will then be developed. In the quantitative phase, we will evaluate the psychometric properties of the questionnaire, i.e., its face, content and construct validity, as well as its reliability via internal consistency and stability. Finally, a scoring system will be developed for the questionnaire. The development of the WSW-RHQ will facilitate the promotion and implementation of reproductive health interventions and the assessment of their effectiveness. Other scholars can cross-culturally adapt and use the questionnaire according to their immediate contexts.

  7. A peptide-retrieval strategy enables significant improvement of quantitative performance without compromising confidence of identification.

    PubMed

    Tu, Chengjian; Shen, Shichen; Sheng, Quanhu; Shyr, Yu; Qu, Jun

    2017-01-30

    Reliable quantification of low-abundance proteins in complex proteomes is challenging, largely owing to the limited number of spectra/peptides identified. In this study we developed a straightforward method to improve the quantitative accuracy and precision of proteins by strategically retrieving the less confident peptides that were previously filtered out using the standard target-decoy search strategy. The filtered-out MS/MS spectra matched to confidently identified proteins were recovered, and the peptide-spectrum-match FDR was recalculated and controlled at a confident level of FDR≤1%, while the protein FDR was maintained at ~1%. We evaluated the performance of this strategy in both spectral count- and ion current-based methods. Increases of >60% in the total quantified spectra/peptides were achieved for a spike-in sample set and a public dataset from CPTAC, respectively. Incorporating the peptide retrieval strategy significantly improved the quantitative accuracy and precision, especially for low-abundance proteins (e.g. one-hit proteins). Moreover, the capacity to confidently discover significantly altered proteins was also enhanced substantially, as demonstrated with two spike-in datasets. In summary, improved quantitative performance was achieved by this peptide recovery strategy without compromising confidence of protein identification, and it can be readily implemented in a broad range of quantitative proteomics techniques, including label-free and labeling approaches. We hypothesized that more quantifiable spectra and peptides for a protein, even including less confident peptides, could help reduce variation and improve protein quantification. Hence the peptide retrieval strategy was developed and evaluated in two spike-in sample sets with different LC-MS/MS variations using both MS1- and MS2-based quantitative approaches. The list of confidently identified proteins obtained with the standard target-decoy search strategy was kept fixed, and additional, less confident spectra/peptides matched to these confident proteins were retrieved, while the total peptide-spectrum-match false discovery rate (PSM FDR) after retrieval was still controlled at a confident level of FDR≤1%. As expected, the penalty for occasionally incorporating incorrect peptide identifications is negligible in comparison with the improvements in quantitative performance. This simple strategy yielded more quantifiable peptides, a lower missing-value rate, and better quantitative accuracy and precision for the same protein identifications. The strategy is theoretically applicable to any quantitative approach in proteomics and thereby provides more quantitative information, especially on low-abundance proteins. Published by Elsevier B.V.
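
    The recalculated PSM FDR follows the usual target-decoy estimate; the sketch below (hypothetical score lists, not the study's datasets) shows re-thresholding retrieved spectra at FDR ≤ 1%.

        # Sketch of re-estimating PSM FDR after retrieving filtered-out spectra
        # (hypothetical scores; the published pipeline's scoring details may differ).
        def fdr_threshold(psms, max_fdr=0.01):
            """psms: list of (score, is_decoy); return the lowest score cutoff with FDR <= max_fdr."""
            psms = sorted(psms, key=lambda p: p[0], reverse=True)
            targets = decoys = 0
            cutoff = None
            for score, is_decoy in psms:
                decoys += is_decoy
                targets += not is_decoy
                if decoys / max(targets, 1) <= max_fdr:   # decoy-based FDR estimate
                    cutoff = score                        # largest accepted set so far
            return cutoff

        retrieved = [(41.2, False), (39.8, False), (37.5, True), (35.1, False), (30.4, False)]
        print(fdr_threshold(retrieved))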

  8. A Comprehensive Histological Assessment of Osteoarthritis Lesions in Mice

    PubMed Central

    McNulty, Margaret A.; Loeser, Richard F.; Davey, Cynthia; Callahan, Michael F.; Ferguson, Cristin M.; Carlson, Cathy S.

    2011-01-01

    Objective: Accurate histological assessment of osteoarthritis (OA) is critical in studies evaluating the effects of interventions on disease severity. The purpose of the present study was to develop a histological grading scheme that comprehensively and quantitatively assesses changes in multiple tissues that are associated with OA of the stifle joint in mice. Design: Two representative midcoronal sections from 158 stifle joints, including naturally occurring and surgically induced OA, were stained with H&E and Safranin-O stains. All slides were evaluated to characterize the changes present. A grading scheme that includes both measurements and semiquantitative scores was developed, and principal components analysis (PCA) was applied to the resulting data from the medial tibial plateaus. A subset of 30 tibial plateaus representing a wide range of severity was then evaluated by 4 observers. Reliability of the results was evaluated using intraclass correlation coefficients (ICCs) and area under the receiver operating characteristic (ROC) curve. Results: Five factors were retained by PCA, accounting for 74% of the total variance. Interobserver and intraobserver reproducibilities for evaluations of articular cartilage and subchondral bone were acceptable. The articular cartilage integrity and chondrocyte viability factor scores were able to distinguish severe OA from normal, minimal, mild, and moderate disease. Conclusion: This newly developed grading scheme and resulting factors characterize a range of joint changes in mouse stifle joints that are associated with OA. Overall, the newly developed scheme is reliable and reproducible, characterizes changes in multiple tissues, and provides comprehensive information regarding a specific site in the stifle joint. PMID:26069594
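    The abstract reports intraclass correlation coefficients without naming the variant; the sketch below assumes the two-way random-effects, absolute-agreement, single-measurement form ICC(2,1) of Shrout & Fleiss, with simulated scores standing in for the grading data.

    ```python
    import numpy as np

    def icc_2_1(scores):
        """ICC(2,1): two-way random effects, absolute agreement, single measurement.
        `scores` is an (n_subjects, k_raters) array."""
        X = np.asarray(scores, dtype=float)
        n, k = X.shape
        grand = X.mean()
        ss_r = k * ((X.mean(axis=1) - grand) ** 2).sum()   # between-subject
        ss_c = n * ((X.mean(axis=0) - grand) ** 2).sum()   # between-rater
        ss_e = ((X - grand) ** 2).sum() - ss_r - ss_c      # residual
        ms_r, ms_c, ms_e = ss_r / (n - 1), ss_c / (k - 1), ss_e / ((n - 1) * (k - 1))
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    # Example: 30 specimens scored by 4 observers (random numbers as stand-ins)
    rng = np.random.default_rng(0)
    truth = rng.uniform(0, 6, size=(30, 1))
    ratings = truth + rng.normal(0, 0.5, size=(30, 4))
    print(round(icc_2_1(ratings), 2))
    ```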

  9. Automatic three-dimensional quantitative analysis for evaluation of facial movement.

    PubMed

    Hontanilla, B; Aubá, C

    2008-01-01

    The aim of this study is to present a new 3D capture system for facial movements called FACIAL CLIMA. It is an automatic optical motion system that involves placing special reflecting dots on the subject's face and video-recording the subject with three infrared-light cameras while he or she performs several facial movements such as smiling, mouth puckering, eye closure, and forehead elevation. Images from the cameras are automatically processed with a software program that generates customised information such as 3D data on velocities and areas. The study was performed in 20 healthy volunteers. The accuracy of the measurement process and the intrarater and interrater reliabilities were evaluated. Comparison of a known distance and angle with those obtained by FACIAL CLIMA shows that the system is accurate to within 0.13 mm and 0.41 degrees. In conclusion, the accuracy of the FACIAL CLIMA system for the evaluation of facial movements is demonstrated, along with its high intrarater and interrater reliability. It has advantages over other systems developed for the evaluation of facial movements, such as a short calibration time, a short measuring time, and ease of use, and it provides not only distances but also velocities and areas. The FACIAL CLIMA system can therefore be considered an adequate tool to assess the outcome of facial paralysis reanimation surgery, allowing patients with facial paralysis to be compared between surgical centres so that the effectiveness of facial reanimation operations can be evaluated.

  10. [Clinical evaluation of a novel HBsAg quantitative assay].

    PubMed

    Takagi, Kazumi; Tanaka, Yasuhito; Naganuma, Hatsue; Hiramatsu, Kumiko; Iida, Takayasu; Takasaka, Yoshimitsu; Mizokami, Masashi

    2007-07-01

    The clinical implication of the hepatitis B surface antigen (HBsAg) concentration in HBV-infected individuals remains unclear. The aim of this study was to evaluate a novel fully automated Chemiluminescence Enzyme Immunoassay (Sysmex HBsAg quantitative assay) by comparative measurements of reference serum samples versus two independent commercial assays (Lumipulse f or Architect HBsAg QT). Furthermore, its clinical usefulness was assessed for monitoring serum HBsAg levels during antiviral therapy. A dilution test using 5 reference serum samples showed a linear correlation curve in the range from 0.03 to 2,360 IU/ml. HBsAg was measured in a total of 400 serum samples, and 99.8% had consistent results between Sysmex and Lumipulse f. Additionally, a positive linear correlation was observed between Sysmex and Architect. To compare Architect and Sysmex, both methods were applied to quantify HBsAg in serum samples with different HBV genotypes/subgenotypes, as well as in serum containing HBV vaccine escape mutants (126S, 145R). Correlation between the methods was observed for the escape mutants and for the genotypes common in Japan (A, B, C). During lamivudine therapy, an increase in HBsAg and HBV DNA concentrations preceded the aminotransferase (ALT) elevation associated with the emergence of drug-resistant HBV variants (breakthrough hepatitis). In conclusion, the reliability of the Sysmex HBsAg quantitative assay was confirmed for all HBV genetic variants common in Japan. Monitoring of serum HBsAg concentrations, in addition to HBV DNA quantification, is helpful in evaluating the response to lamivudine treatment and diagnosing breakthrough hepatitis.

  11. HuMOVE: a low-invasive wearable monitoring platform in sexual medicine.

    PubMed

    Ciuti, Gastone; Nardi, Matteo; Valdastri, Pietro; Menciassi, Arianna; Basile Fasolo, Ciro; Dario, Paolo

    2014-10-01

    To investigate an accelerometer-based wearable system, named the Human Movement (HuMOVE) platform, designed to enable quantitative and continuous measurement of sexual performance with minimal invasiveness and inconvenience for users. Design, implementation, and development of HuMOVE, a wearable platform equipped with an accelerometer sensor for monitoring inertial parameters for sexual performance assessment and diagnosis, were performed. The system enables quantitative measurement of movement parameters during sexual intercourse, meeting the requirements of wearability, data storage, sampling rate, and interfacing methods, which are fundamental for the analysis of human sexual intercourse performance. HuMOVE was validated through characterization on a controlled experimental test bench and evaluated in a human model under simulated sexual intercourse conditions. HuMOVE proved to be a robust quantitative monitoring platform and a reliable candidate for sexual performance evaluation and diagnosis. Characterization on the controlled experimental test bench demonstrated an accurate correlation between the HuMOVE system and data from a reference displacement sensor. Experimental tests in the human model under simulated intercourse conditions confirmed the accuracy of the sexual performance evaluation platform and the effectiveness of the selected and derived parameters. The outcomes also met the project expectations in terms of usability and comfort, as evidenced by questionnaires that highlighted the low invasiveness and acceptance of the device. To the best of our knowledge, the HuMOVE platform is the first device for human sexual performance analysis compatible with sexual intercourse; the system has the potential to be a helpful tool for physicians to accurately classify sexual disorders, such as premature or delayed ejaculation. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Quantitative Analysis of Science and Chemistry Textbooks for Indicators of Reform: A complementary perspective

    NASA Astrophysics Data System (ADS)

    Kahveci, Ajda

    2010-07-01

    In this study, multiple thematically based and quantitative analysis procedures were utilized to explore the effectiveness of Turkish chemistry and science textbooks in terms of their reflection of reform. The themes of gender equity, questioning level, science vocabulary load, and readability level provided the conceptual framework for the analyses. An unobtrusive research method, content analysis, was used by coding the manifest content and counting the frequency of words, photographs, drawings, and questions by cognitive level. The context was an undergraduate chemistry teacher preparation program at a large public university in a metropolitan area in northwestern Turkey. Forty preservice chemistry teachers were guided to analyze 10 middle school science and 10 high school chemistry textbooks. Overall, the textbooks included unfair gender representations, a considerably higher number of input- and processing-level than output-level questions, and a high load of science terminology. The textbooks failed to provide sufficient empirical evidence to be considered gender-equitable and inquiry-based. The quantitative approach employed for evaluation contrasts with a more interpretive approach and has the potential to depict textbook profiles in a more reliable way, complementing the commonly employed qualitative procedures. The implications suggest that further work in this line is needed on calibrating the analysis procedures with science textbooks used in different international settings. The procedures could be modified and improved to meet specific evaluation needs. In the Turkish context, a next step for research may be the analysis of the science textbooks being rewritten for the reform-based curricula, in order to make cross-comparisons and evaluate possible progression.

  13. An Evaluation Method of Equipment Reliability Configuration Management

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Feng, Weijia; Zhang, Wei; Li, Yuan

    2018-01-01

    At present, many equipment development companies are aware of the great significance of reliability in equipment development. However, due to the lack of an effective management evaluation method, it is very difficult for an equipment development company to manage its own reliability work. An evaluation method for equipment reliability configuration management determines the reliability management capability of the equipment development company. Reliability is not only designed in, but must also be achieved through management. This paper evaluates reliability management capability using a reliability configuration capability maturity model (RCM-CMM) evaluation method.

  14. Ability of walking without a walking device in patients with spinal cord injury as determined using data from functional tests

    PubMed Central

    Poncumhak, Puttipong; Saengsuwan, Jiamjit; Amatachaya, Sugalya

    2014-01-01

    Background/Objectives More than half of independent ambulatory patients with spinal cord injury (SCI) need a walking device to promote their level of independence. However, long-lasting use of a walking device may have negative impacts for patients. Using a standard objective test relating to the requirement for a walking device may offer a quantitative criterion with which to effectively monitor patients' levels of independence. Therefore, this study investigated (1) the ability of three functional tests, namely the five times sit-to-stand test (FTSST), the timed up and go test (TUGT), and the 10-meter walk test (10MWT), to determine the ability to walk without a walking device, and (2) the inter-tester reliability of the tests in assessing functional ability in patients with SCI. Methods Sixty independent ambulatory patients with SCI, who walked with and without a walking device (30 subjects/group), were assessed cross-sectionally for their functional ability using the three tests. The first 20 subjects also participated in the inter-tester reliability test. Results A time to complete the FTSST of <14 seconds, the TUGT of <18 seconds, and the 10MWT of <6 seconds had good-to-excellent capability to determine the ability to walk without a walking device in subjects with SCI. These tests also showed excellent inter-tester reliability. Conclusions Clinical evaluation of walking is usually performed using qualitative observation, which makes the results difficult to compare among testers and test intervals. The findings of this study offer a quantitative target criterion, i.e. a clear level of ability at which patients with SCI could possibly walk without a walking device, which would benefit the monitoring process for these patients. PMID:24621030

  15. The reliability analysis of a separated, dual fail operational redundant strapdown IMU. [inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Motyka, P.

    1983-01-01

    A methodology for quantitatively analyzing the reliability of redundant avionics systems, in general, and the dual, separated Redundant Strapdown Inertial Measurement Unit (RSDIMU), in particular, is presented. The RSDIMU is described and a candidate failure detection and isolation system presented. A Markov reliability model is employed. The operational states of the system are defined and the single-step state transition diagrams discussed. Graphical results, showing the impact of major system parameters on the reliability of the RSDIMU system, are presented and discussed.
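    As a minimal sketch of the kind of Markov reliability calculation described above: the states, failure rates, and mission time below are invented for illustration and are not taken from the RSDIMU model or its failure detection and isolation logic.

    ```python
    import numpy as np

    LAMBDA = 1e-4          # assumed per-hour failure rate of one sensor channel
    DT = 1.0               # time step in hours
    p_fail = LAMBDA * DT   # single-step failure probability (small-rate approximation)

    # States: 0 = all channels good, 1 = one failure detected/isolated,
    #         2 = two failures (fail-operational), 3 = system failure (absorbing)
    P = np.array([
        [1 - 4 * p_fail, 4 * p_fail,     0.0,            0.0],
        [0.0,            1 - 3 * p_fail, 3 * p_fail,     0.0],
        [0.0,            0.0,            1 - 2 * p_fail, 2 * p_fail],
        [0.0,            0.0,            0.0,            1.0],
    ])

    state = np.array([1.0, 0.0, 0.0, 0.0])      # start with all channels operating
    for _ in range(int(10.0 / DT)):             # 10-hour mission
        state = state @ P
    print("mission reliability ~", 1.0 - state[-1])   # 1 - P(system-failure state)
    ```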

  16. Quantitative Decision Making.

    ERIC Educational Resources Information Center

    Baldwin, Grover H.

    The use of quantitative decision making tools provides the decision maker with a range of alternatives among which to decide, permits acceptance and use of the optimal solution, and decreases risk. Training line administrators in the use of these tools can help school business officials obtain reliable information upon which to base district…

  17. 76 FR 12140 - Agency Information Collection Activities: Proposed Collection; Comment Request; Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-04

    ... provides useful insights on perceptions and opinions, but are not statistical surveys that yield quantitative results that can be generalized to the population of study. This feedback will provide insights... used for quantitative information collections that are designed to yield reliably actionable results...

  18. Nuclear electric propulsion operational reliability and crew safety study: NEP systems/modeling report

    NASA Technical Reports Server (NTRS)

    Karns, James

    1993-01-01

    The objective of this study was to establish the initial quantitative reliability bounds for nuclear electric propulsion systems in a manned Mars mission required to ensure crew safety and mission success. Finding the reliability bounds involves balancing top-down (mission driven) requirements and bottom-up (technology driven) capabilities. In seeking this balance we hope to accomplish the following: (1) provide design insights into the achievability of the baseline design in terms of reliability requirements, given the existing technology base; (2) suggest alternative design approaches which might enhance reliability and crew safety; and (3) indicate what technology areas require significant research and development to achieve the reliability objectives.

  19. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2012-01-01

    Quantitative assessments of system reliability and equivalent system mass (ESM) were made for different life support architectures based primarily on International Space Station technologies. The analysis was applied to a one-year deep-space mission. System reliability was increased by adding redundancy and spares, which added to the ESM. Results were thus obtained allowing a comparison of the ESM for each architecture at equivalent levels of reliability. Although the analysis contains numerous simplifications and uncertainties, the results suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support ESM and could influence the optimal degree of life support closure. Approaches for reducing reliability impacts were investigated and are discussed.
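    A hedged sketch of the redundancy-versus-mass trade discussed above, assuming an exponentially failing unit whose mission failures follow a Poisson count and a placeholder unit mass; none of the numbers come from the study.

    ```python
    import math

    def reliability_with_spares(failure_rate, mission_hours, spares):
        """P(no more than `spares` failures) for an exponentially failing unit,
        i.e. a Poisson count of failures over the mission."""
        mu = failure_rate * mission_hours
        return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(spares + 1))

    UNIT_MASS_KG = 50.0      # assumed mass of one replacement unit (placeholder)
    for spares in range(4):  # one-year mission, assumed 2e-4 failures per hour
        r = reliability_with_spares(failure_rate=2e-4, mission_hours=8760, spares=spares)
        print(f"{spares} spares: R = {r:.4f}, added mass = {spares * UNIT_MASS_KG:.0f} kg")
    ```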

  20. IMRT vs. 3D Noncoplanar Treatment Plans for Maxillary Sinus Tumors: A New Tool for Quantitative Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levin, Daphne; Menhel, Janna; Alezra, Dror

    2008-01-01

    We compared 9-field, equispaced intensity modulated radiation therapy (IMRT), 4- to 5-field, directionally optimized IMRT, and 3-dimensional (3D) noncoplanar planning approaches for tumors of the maxillary sinus. Ten patients were planned retrospectively to compare the different treatment techniques. Prescription doses were 60 to 70 Gy. Critical structures contoured included the optic nerves and chiasm, lacrimal glands, lenses, and retinas. As an aid for plan assessment, we introduced a new tool: the Critical Organ Scoring Index (COSI), which allows quantitative evaluation of the tradeoffs between target coverage and critical organ sparing. This index was compared with other, commonly used conformity indices. For a more reliable assessment of both tumor coverage and dose to critical organs in the different planning techniques, we introduced a 2D, graphical representation of COSI vs. conformity index (CI). Dose-volume histograms and mean, maximum, and minimum organ doses were also compared. IMRT plans delivered lower doses to ipsilateral structures, but were unable to spare them. 3D plans delivered less dose to contralateral structures and were more homogeneous as well. Both IMRT approaches gave similar results. In cases where the choice of optimal plan was difficult, the novel 2D COSI-CI representation gave an accurate picture of the tradeoffs between target coverage and organ sparing, even in cases where other conformity indices failed. Due to their unique anatomy, maxillary sinus tumors may benefit more from a noncoplanar approach than from IMRT. The new graphical representation proposed is a quick, visual, reliable tool, which may facilitate the physician's choice of the best treatment plan for a given patient.

  1. Evaluation and Selection of Candidate Reference Genes for Normalization of Quantitative RT-PCR in Withania somnifera (L.) Dunal

    PubMed Central

    Singh, Varinder; Kaul, Sunil C.; Wadhwa, Renu; Pati, Pratap Kumar

    2015-01-01

    Quantitative real-time PCR (qRT-PCR) is now used globally for accurate analysis of transcript levels in plants. For reliable quantification of transcripts, identification of the best reference genes is a prerequisite in qRT-PCR analysis. Recently, Withania somnifera has attracted a great deal of attention due to its immense therapeutic potential, and biotechnological intervention for the improvement of this plant is being seriously pursued. Against this background, it is important to have comprehensive studies on finding suitable reference genes for this highly valued medicinal plant. In the present study, 11 candidate genes were evaluated for their expression stability under biotic (fungal disease) and abiotic (wounding, salt, drought, heat and cold) stresses, in different plant tissues, and in response to various plant growth regulators (methyl jasmonate, salicylic acid, abscisic acid). The data, as analyzed by various software packages (geNorm, NormFinder, BestKeeper and the ΔCt method), suggested that cyclophilin (CYP) is the most stable gene under wounding, heat, and methyl jasmonate treatments, across different tissues, and across all stress conditions. T-SAND was found to be the best reference gene for salt- and salicylic acid (SA)-treated samples, while 26S ribosomal RNA (26S), ubiquitin (UBQ) and beta-tubulin (TUB) were the most stably expressed genes under drought, biotic and cold treatment, respectively. For abscisic acid (ABA)-treated samples, 18S rRNA was the most stably expressed gene. Finally, the relative expression levels of three genes involved in the withanolide biosynthetic pathway were determined to validate the selection of reliable reference genes. The present work will contribute significantly to gene analysis studies in W. somnifera and help improve the quality of gene expression data in this plant as well as in other related plant species. PMID:25769035
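    To make the ranking step concrete, below is a sketch of the geNorm expression-stability measure M (the average standard deviation of the pairwise log2 expression ratios with all other candidates); the relative expression values are simulated and do not come from the W. somnifera data.

    ```python
    import numpy as np

    def genorm_m(expr):
        """`expr`: (n_samples, n_genes) array of relative expression quantities.
        Returns the geNorm M value per gene (lower = more stable)."""
        log_expr = np.log2(expr)
        n_genes = expr.shape[1]
        m_values = []
        for j in range(n_genes):
            pairwise_sd = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                           for k in range(n_genes) if k != j]
            m_values.append(np.mean(pairwise_sd))
        return np.array(m_values)

    # 12 samples x 3 candidate genes with increasing simulated noise
    rng = np.random.default_rng(1)
    expr = 2.0 ** rng.normal(0.0, [0.2, 0.4, 0.8], size=(12, 3))
    print(genorm_m(expr).round(2))   # the first gene should come out most stable
    ```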

  2. Comparison of array comparative genomic hybridization and quantitative real-time PCR-based aneuploidy screening of blastocyst biopsies.

    PubMed

    Capalbo, Antonio; Treff, Nathan R; Cimadomo, Danilo; Tao, Xin; Upham, Kathleen; Ubaldi, Filippo Maria; Rienzi, Laura; Scott, Richard T

    2015-07-01

    Comprehensive chromosome screening (CCS) methods are being used extensively to select chromosomally normal embryos in human assisted reproduction. Some concerns related to the stage of analysis and to which aneuploidy screening method to use still remain. In this study, the reliability of blastocyst-stage aneuploidy screening and the diagnostic performance of the two most widely used CCS methods (quantitative real-time PCR (qPCR) and array comparative genomic hybridization (aCGH)) were assessed. aCGH-aneuploid blastocysts were rebiopsied, blinded, and evaluated by qPCR. Discordant cases were subsequently rebiopsied, blinded, and evaluated by single-nucleotide polymorphism (SNP) array-based CCS. Although 81.7% of embryos showed the same diagnosis when comparing aCGH and qPCR-based CCS, 18.3% (22/120) of embryos gave a discordant result for at least one chromosome. SNP array reanalysis showed a discordance in ten blastocysts for aCGH, mostly due to false positives, and in four cases for qPCR. The discordant aneuploidy call rate per chromosome was significantly higher for aCGH (5.7%) than for qPCR (0.6%; P<0.01). To corroborate these findings, 39 embryos were simultaneously biopsied for aCGH and qPCR during blastocyst-stage aneuploidy screening cycles; 35 matched, including all 21 euploid embryos. Blinded SNP analysis on rebiopsies of the four discordant embryos matched qPCR. These findings demonstrate the high reliability of diagnoses performed at the blastocyst stage with the use of different CCS methods. However, the application of aCGH can be expected to result in a higher aneuploidy rate than other contemporary methods of CCS.

  3. Reliability of the rapid bedside whole-blood quantitative cardiac troponin T assay in the diagnosis of myocardial injury in patients with acute coronary syndrome.

    PubMed

    Saadeddin, Salam; Habbab, Mohammed; Siddieg, Hisham; Fayomi, Mahmoud; Dafterdar, Rofaida

    2004-03-01

    A rapid bedside whole-blood quantitative cTnT assay has recently been developed. We evaluated the reliability of this test for the diagnosis of myocardial injury in patients with acute coronary syndrome (ACS). Whole-blood cTnT levels were measured in 96 patients with ACS using the Roche Cardiac Reader(R) rapid bedside assay device, and the results were compared with serum cTnT levels in the same patients measured by the Roche Elecsys(R) Immunoanalyzer. There were 50 patients with clinical evidence of myocardial injury and 56 without. From the qualitative point of view (reporting negative or positive tests), the results of the rapid bedside tests were identical to those obtained by the serum immunoanalyzer. From the quantitative point of view, the rapid bedside tests could not measure exact values below 0.1 ng/ml (reported as negative) or above 2.0 ng/ml (reported as >2.0). The measurements made by the rapid bedside tests within the range of 0.1 to 2.0 ng/ml correlated well with those of the serum immunoanalyzer (Cardiac Reader(R) cTnT = 0.61 x Elecsys(R) cTnT + 0.12; r = 0.88), but their mean values were significantly lower (1.20±0.71 vs. 1.41±1.03, p=0.0007). The rapid bedside cTnT assay correlates well with immunoanalyzer measurements between the values of 0.1 and 2.0 ng/ml. However, it tends to give significantly lower values and fails to give exact values below 0.1 and above 2.0 ng/ml, which may affect its performance in monitoring and managing patients with ACS and limit its use in predicting outcome.

  4. A brief update on physical and optical disector applications and sectioning-staining methods in neuroscience.

    PubMed

    Yurt, Kıymet Kübra; Kivrak, Elfide Gizem; Altun, Gamze; Mohamed, Hamza; Ali, Fathelrahman; Gasmalla, Hosam Eldeen; Kaplan, Suleyman

    2018-02-26

    A quantitative description of a three-dimensional (3D) object based on two-dimensional images can be made using stereological methods. These methods involve unbiased approaches and provide reliable results with quantitative data. The quantitative morphology of the nervous system has been thoroughly researched in this context. In particular, various novel methods, such as design-based stereological approaches, have been applied in neuromorphological studies. The main foundations of these methods are systematic random sampling and a 3D approach to structures such as tissues and organs. One key point in these methods is that the selected samples should represent the entire structure. Quantification of neurons, i.e. particles, is important for revealing the degree of neurodegeneration and regeneration in an organ or system. One of the most crucial morphometric parameters in biological studies is thus the "number". The disector counting method introduced by Sterio in 1984 is an efficient and reliable solution for particle number estimation. In order to obtain precise results from stereological analysis, the items to be counted should be clearly visible in the tissue; if an item cannot be seen, it cannot be analyzed even with unbiased stereological techniques. Staining and sectioning processes therefore play a critical role in stereological analysis. The purpose of this review is to evaluate current neuroscientific studies using optical and physical disector counting methods and to discuss their definitions and methodological characteristics. Although the efficiency of the optical disector method in light microscopic studies has been demonstrated in recent years, the physical disector method is more easily performed in electron microscopic studies. We also offer readers summaries of some common basic staining and sectioning methods that can be used with stereological techniques. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Evaluation of differential disaccharide excretion in urine for non-invasive investigation of altered intestinal disaccharidase activity caused by alpha-glucosidase inhibition, primary hypolactasia, and coeliac disease.

    PubMed Central

    Bjarnason, I; Batt, R; Catt, S; Macpherson, A; Maxton, D; Menzies, I S

    1996-01-01

    BACKGROUND/AIM: The reliability of a quantitative method for the non-invasive assessment of intestinal disaccharide hydrolysis was assessed. METHODS: Differential excretion of intact disaccharide, expressed as ratios of lactulose to appropriate hydrolysable disaccharides in urine collected following combined ingestion, has been investigated in healthy volunteers with drug induced alpha-glucosidase inhibition, in subjects with primary hypolactasia, and patients with coeliac disease. RESULTS: Oral administration of the alpha-glucosidase inhibitor 'Acarbose' (BAY g 5421, 200 mg) together with sucrose and lactulose increased the urinary sucrose/lactulose excretion ratios (% dose/10 h) fivefold. The effect was quantitatively reproducible, a higher dose of 'Acarbose' (500 mg) increasing the excretion ratio to about 1.0 indicating complete inhibition of intestinal sucrase activity. The suitability of the method for measuring differences in dose/response and duration of action was assessed by comparing three different alpha-glucosidase inhibitors (BAY g 5421, BAY m 1099, and BAY o 1248) and found to be satisfactory. Subjects with primary adult hypolactasia had urine lactose/lactulose excretion ratios raised to values indicating reduced rather than complete absence of lactase activity whereas sucrose/lactulose ratios were not significantly affected. 'Whole' intestinal disaccharidase activity assessed by this method demonstrated impairment of lactase, sucrase, and isomaltase in eight, one, and seven, respectively, of 20 patients with coeliac disease. By contrast in vitro assay of jejunal biopsy tissue indicated pan-disaccharidase deficiency in all but five of these patients. This shows the importance of distinguishing between 'local' and 'whole' intestinal performance. CONCLUSIONS: Differential urinary excretion of ingested disaccharides provides a reliable, quantitative, and non-invasive technique for assessing profiles of intestinal disaccharidase activity. PMID:8949640

  6. Workplace-based assessment of communication skills: A pilot project addressing feasibility, acceptance and reliability

    PubMed Central

    Weyers, Simone; Jemi, Iman; Karger, André; Raski, Bianca; Rotthoff, Thomas; Pentzek, Michael; Mortsiefer, Achim

    2016-01-01

    Background: Imparting communication skills has been given great importance in medical curricula. In addition to standardized assessments, students should communicate with real patients in actual clinical situations during workplace-based assessments and receive structured feedback on their performance. The aim of this project was to pilot a formative testing method for workplace-based assessment. Our investigation centered in particular on whether or not physicians view the method as feasible and how high acceptance is among students. In addition, we assessed the reliability of the method. Method: As part of the project, 16 students held two consultations each with chronically ill patients at the medical practice where they were completing GP training. These consultations were video-recorded. The trained mentoring physician rated the student’s performance and provided feedback immediately following the consultations using the Berlin Global Rating scale (BGR). Two impartial, trained raters also evaluated the videos using BGR. For qualitative and quantitative analysis, information on how physicians and students viewed feasibility and their levels of acceptance was collected in written form in a partially standardized manner. To test for reliability, the test-retest reliability was calculated for both of the overall evaluations given by each rater. The inter-rater reliability was determined for the three evaluations of each individual consultation. Results: The formative assessment method was rated positively by both physicians and students. It is relatively easy to integrate into daily routines. Its significant value lies in the personal, structured and recurring feedback. The two overall scores for each patient consultation given by the two impartial raters correlate moderately. The degree of uniformity among the three raters in respect to the individual consultations is low. Discussion: Within the scope of this pilot project, only a small sample of physicians and students could be surveyed to a limited extent. There are indications that the assessment can be improved by integrating more information on medical context and student self-assessments. Despite the current limitations regarding test criteria, it is clear that workplace-based assessment of communication skills in the clinical setting is a valuable addition to the communication curricula of medical schools. PMID:27990466

  8. Magnetic Flux Leakage Sensing and Artificial Neural Network Pattern Recognition-Based Automated Damage Detection and Quantification for Wire Rope Non-Destructive Evaluation.

    PubMed

    Kim, Ju-Won; Park, Seunghee

    2018-01-02

    In this study, a magnetic flux leakage (MFL) method, known to be a suitable non-destructive evaluation (NDE) method for continuum ferromagnetic structures, was used to detect local damage when inspecting steel wire ropes. To demonstrate the proposed damage detection method through experiments, a multi-channel MFL sensor head was fabricated using a Hall sensor array and magnetic yokes adapted to the wire rope. To prepare the damaged wire-rope specimens, several different amounts of artificial damage were inflicted on the wire ropes. The MFL sensor head was used to scan the damaged specimens and measure the magnetic flux signals. After obtaining the signals, a series of signal processing steps, including an enveloping process based on the Hilbert transform (HT), was performed to better recognize the MFL signals by reducing unexpected noise. The enveloped signals were then analyzed for objective damage detection by comparing them with a threshold established from the generalized extreme value (GEV) distribution. The detected MFL signals that exceeded the threshold were analyzed quantitatively by extracting magnetic features from the MFL signals. To improve the quantitative analysis, damage indexes based on the relationship between the enveloped MFL signal and the threshold value were also utilized, along with a general damage index for the MFL method. The detected MFL signals for each damage type were quantified using the proposed damage indexes and the general damage indexes for the MFL method. Finally, an artificial neural network (ANN)-based multi-stage pattern recognition method using the extracted multi-scale damage indexes was implemented to automatically estimate the severity of the damage. The accuracy and reliability of the MFL-based automated wire rope NDE method were evaluated by comparing the repeatedly estimated damage sizes with the actual damage sizes.
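    A rough sketch of the enveloping and thresholding steps described above; the synthetic signal, the block size for the GEV fit, the 99th-percentile threshold, and the damage index are assumptions made for illustration, not the parameters used in the study.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from scipy.stats import genextreme

    rng = np.random.default_rng(2)
    signal = rng.normal(0, 0.05, 2000)
    signal[1200:1215] += 0.6                      # stand-in for a local MFL anomaly

    envelope = np.abs(hilbert(signal))            # Hilbert-transform envelope

    # Fit a GEV distribution to block maxima of damage-free data and set a threshold
    baseline = envelope[:1000].reshape(-1, 50).max(axis=1)
    shape, loc, scale = genextreme.fit(baseline)
    threshold = genextreme.ppf(0.99, shape, loc=loc, scale=scale)

    # One possible damage index: excess envelope energy relative to the threshold
    damage_index = envelope[envelope > threshold].sum() / threshold
    print("samples over threshold:", int((envelope > threshold).sum()),
          "damage index:", round(float(damage_index), 2))
    ```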

  9. Assessment of quantitative structure-activity relationship of toxicity prediction models for Korean chemical substance control legislation

    PubMed Central

    Kim, Kwang-Yon; Shin, Seong Eun; No, Kyoung Tai

    2015-01-01

    Objectives: For successful adoption of legislation controlling the registration and assessment of chemical substances, it is important to obtain sufficient toxicological experimental evidence and other related information. It is also essential to obtain a sufficient number of predicted risk and toxicity results. In particular, methods for predicting the toxicities of chemical substances during the acquisition of required data ultimately become an economical way of dealing with new substances in the future. Although the need for such methods is gradually increasing, the required information about their reliability and applicability range has not been systematically provided. Methods: There are various representative environmental and human toxicity models based on quantitative structure-activity relationships (QSAR). Here, we secured 10 representative QSAR-based prediction models, together with the information needed to make predictions about substances that are expected to be regulated. We used the models to predict and confirm the usability of the information expected to be collected and submitted according to the legislation. After collecting and evaluating each predictive model and the relevant data, we prepared methods for quantifying their scientific validity and reliability, which are essential conditions for using predictive models. Results: We calculated predicted values for the models. Furthermore, we deduced and compared the adequacy of the models using the Alternative Non-Testing Method Assessed for Registration, Evaluation, Authorization, and Restriction of Chemicals Substances scoring system, and deduced the applicability domains for each model. Additionally, we calculated and compared the inclusion rates of substances expected to be regulated, to confirm applicability. Conclusions: We evaluated and compared the data, adequacy, and applicability of the selected QSAR-based toxicity prediction models, and included them in a database. Based on these data, we aim to construct a system that can be used with predicted toxicity results. Furthermore, by presenting the suitability of individual predicted results, we aim to provide a foundation that can be used in actual assessments and regulations. PMID:26206368

  10. Identification and evaluation of reference genes for qRT-PCR normalization in Ganoderma lucidum.

    PubMed

    Xu, Jiang; Xu, ZhiChao; Zhu, YingJie; Luo, HongMei; Qian, Jun; Ji, AiJia; Hu, YuanLei; Sun, Wei; Wang, Bo; Song, JingYuan; Sun, Chao; Chen, ShiLin

    2014-01-01

    Quantitative real-time reverse transcription PCR (qRT-PCR) is a rapid, sensitive, and reliable technique for gene expression studies. The accuracy and reliability of qRT-PCR results depend on the stability of the reference genes used for gene normalization. Therefore, a systematic process of reference gene evaluation is needed. Ganoderma lucidum is a famous medicinal mushroom in East Asia. In the current study, 10 potential reference genes were selected from the G. lucidum genomic data. The sequences of these genes were manually curated, and primers were designed following strict criteria. The experiment was conducted using qRT-PCR, and the stability of each candidate gene was assessed using four commonly used statistical programs: geNorm, NormFinder, BestKeeper, and RefFinder. According to our results, PP2A was expressed at the most stable levels under different fermentation conditions, and RPL4 was the most stably expressed gene in different tissues. RPL4, PP2A, and β-tubulin are the most commonly recommended reference genes for normalizing gene expression in the entire sample set. The current study provides a foundation for the further use of qRT-PCR in G. lucidum gene analysis.

  11. Evaluation of the stability of reference genes in bone mesenchymal stem cells from patients with avascular necrosis of the femoral head.

    PubMed

    Wang, X N; Yang, Q W; Du, Z W; Yu, T; Qin, Y G; Song, Y; Xu, M; Wang, J C

    2016-05-25

    This study aimed to evaluate 12 genes (18S, GAPDH, B2M, ACTB, ALAS1, GUSB, HPRT1, PBGD, PPIA, PUM1, RPL29, and TBP) for their reliability and stability as reference sequences for real-time quantitative PCR (RT-qPCR) in bone marrow-derived mesenchymal stem cells (BMSCs) isolated from patients with avascular necrosis of the femoral head (ANFH). BMSCs were isolated from 20 ANFH patients divided into four groups according to etiology, and four donors with femoral neck fractures. Total RNA was isolated from BMSCs and reverse transcribed into complementary DNA, which served as a template for RT-qPCR. Three commonly used programs were then used to analyze the results. Reference gene expression varied within each group, between specific groups, and among all five groups. Based on comparisons of all five groups, two of the programs used suggested that HPRT1 was the most stable reference gene, while 18S and ACTB were the most variable. Among the 12 candidate reference genes, HPRT1 exhibited the greatest reliability, followed by PPIA. Thus, these sequences could be used as references for the normalization of RT-qPCR results.

  12. Implications of Transitioning from De Facto to Engineered Water Reuse for Power Plant Cooling.

    PubMed

    Barker, Zachary A; Stillwell, Ashlynn S

    2016-05-17

    Thermoelectric power plants demand large quantities of cooling water, and can use alternative sources like treated wastewater (reclaimed water); however, such alternatives generate many uncertainties. De facto water reuse, or the incidental presence of wastewater effluent in a water source, is common at power plants, representing baseline conditions. In many cases, power plants would retrofit open-loop systems to cooling towers to use reclaimed water. To evaluate the feasibility of reclaimed water use, we compared hydrologic and economic conditions at power plants under three scenarios: quantified de facto reuse, de facto reuse with cooling tower retrofits, and modeled engineered reuse conditions. We created a genetic algorithm to estimate costs and model optimal conditions. To assess power plant performance, we evaluated reliability metrics for thermal variances and generation capacity loss as a function of water temperature. Applying our analysis to the greater Chicago area, we observed high de facto reuse for some power plants and substantial costs for retrofitting to use reclaimed water. Conversely, the gains in reliability and performance through engineered reuse with cooling towers outweighed the energy investment in reclaimed water pumping. Our analysis yields quantitative results of reclaimed water feasibility and can inform sustainable management of water and energy.

  13. Binding free energy predictions of farnesoid X receptor (FXR) agonists using a linear interaction energy (LIE) approach with reliability estimation: application to the D3R Grand Challenge 2

    NASA Astrophysics Data System (ADS)

    Rifai, Eko Aditya; van Dijk, Marc; Vermeulen, Nico P. E.; Geerke, Daan P.

    2018-01-01

    Computational protein binding affinity prediction can play an important role in drug research but performing efficient and accurate binding free energy calculations is still challenging. In the context of phase 2 of the Drug Design Data Resource (D3R) Grand Challenge 2 we used our automated eTOX ALLIES approach to apply the (iterative) linear interaction energy (LIE) method and we evaluated its performance in predicting binding affinities for farnesoid X receptor (FXR) agonists. Efficiency was obtained by our pre-calibrated LIE models and molecular dynamics (MD) simulations at the nanosecond scale, while predictive accuracy was obtained for a small subset of compounds. Using our recently introduced reliability estimation metrics, we could classify predictions with higher confidence by featuring an applicability domain (AD) analysis in combination with protein-ligand interaction profiling. The outcomes of and agreement between our AD and interaction-profile analyses to distinguish and rationalize the performance of our predictions highlighted the relevance of sufficiently exploring protein-ligand interactions during training and it demonstrated the possibility to quantitatively and efficiently evaluate if this is achieved by using simulation data only.
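    To make the approach concrete, here is a minimal sketch of a LIE estimate, assuming ensemble-averaged van der Waals and electrostatic ligand-surrounding interaction energies from MD of the bound and free states; the alpha/beta/gamma coefficients and the example averages are placeholders rather than the pre-calibrated eTOX ALLIES values.

    ```python
    def lie_binding_free_energy(vdw_bound, vdw_free, elec_bound, elec_free,
                                alpha=0.18, beta=0.33, gamma=0.0):
        """dG_bind ~ alpha*d<V_vdW> + beta*d<V_elec> + gamma, where the averages come
        from MD of the ligand bound to the protein and free in solvent."""
        d_vdw = vdw_bound - vdw_free
        d_elec = elec_bound - elec_free
        return alpha * d_vdw + beta * d_elec + gamma

    # e.g. made-up ensemble averages in kcal/mol from two short MD runs
    print(round(lie_binding_free_energy(-45.2, -38.7, -12.4, -9.1), 2))
    ```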

  14. Nondestructive Evaluation of Adhesive Bonds via Ultrasonic Phase Measurements

    NASA Technical Reports Server (NTRS)

    Haldren, Harold A.; Perey, Daniel F.; Yost, William T.; Cramer, K. Elliott; Gupta, Mool C.

    2016-01-01

    The use of advanced composites utilizing adhesively bonded structures offers advantages in weight and cost for both the aerospace and automotive industries. Conventional nondestructive evaluation (NDE) has proved unable to reliably detect weak bonds or bond deterioration under service-life conditions. A new nondestructive technique for quantitatively measuring adhesive bond strength is demonstrated. In this paper, an ultrasonic technique employing constant-frequency pulsed phase-locked loop (CFPPLL) circuitry to monitor the phase response of a bonded structure under changing thermal stress is discussed. Theoretical research suggests that the thermal response of a bonded interface correlates well with the quality of the adhesive bond. In particular, the effective stiffness of the adhesive-adherent interface may be extracted from the thermal phase response of the structure. The sensitivity of the CFPPLL instrument allows detection of bond pathologies that have previously been difficult to detect. Theoretical results with this ultrasonic technique on single epoxy lap joint (SLJ) specimens are presented and discussed. This technique has the potential to advance the use of adhesive bonds - and by association, advanced composite structures - by providing a reliable method to measure adhesive bond strength, thus permitting more complex, lightweight, and safe designs.

  15. Estimation and Identifiability of Model Parameters in Human Nociceptive Processing Using Yes-No Detection Responses to Electrocutaneous Stimulation.

    PubMed

    Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A

    2016-01-01

    Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured in quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize the activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model with regard to this balance. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals both structural and practical non-identifiability. Our model-based approach with integration of psychophysical measurements can be useful for a reliable assessment of the states of the nociceptive system.
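    A small sketch of the model-comparison step, assuming the standard BIC form k·ln(n) − 2·ln(L); the log-likelihoods, parameter counts, and trial number below are invented for illustration and are not the study's values.

    ```python
    import math

    def bic(log_likelihood, n_params, n_observations):
        """Bayesian Information Criterion: lower is better."""
        return n_params * math.log(n_observations) - 2.0 * log_likelihood

    # e.g. a 6-parameter computational model vs. a 2-parameter logistic model
    # fitted to the same set of yes-no detection responses (made-up numbers)
    n_trials = 300
    print("computational model:", round(bic(-150.2, 6, n_trials), 1))
    print("logistic model:     ", round(bic(-168.9, 2, n_trials), 1))
    ```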

  16. Metrological reliability of optical coherence tomography in biomedical applications

    NASA Astrophysics Data System (ADS)

    Goloni, C. M.; Temporão, G. P.; Monteiro, E. C.

    2013-09-01

    Optical coherence tomography (OCT) has proven to be an efficient diagnostic technique for imaging in vivo tissues, an optical biopsy with important prospects as a diagnostic tool for the quantitative characterization of tissue structures. Despite its established clinical use, there is no international standard addressing the specific requirements for basic safety and essential performance of OCT devices for biomedical imaging. The present work studies the parameters necessary for the conformity assessment of optoelectronic equipment used in biomedical applications, such as lasers, Intense Pulsed Light (IPL), and OCT, aiming to identify the potential requirements to be considered in a possible future particular standard for OCT equipment. In addition to some of the particular requirements of the laser and IPL standards, which are also applicable to the metrological reliability analysis of OCT equipment, specific parameters for the evaluation of OCT have been identified, considering its biomedical application. For each identified parameter, either the provision of its information in the accompanying documents and/or its measurement has been recommended. Among the parameters for which measurement was recommended, including uncertainty evaluation, the following are highlighted: optical radiation output, axial and transverse resolution, pulse duration and interval, and beam divergence.

  17. An investigation of the clinical use of the house-tree-person projective drawings in the psychological evaluation of child sexual abuse.

    PubMed

    Palmer, L; Farrar, A R; Valle, M; Ghahary, N; Panella, M; DeGraw, D

    2000-05-01

    Identification and evaluation of child sexual abuse is an integral task for clinicians. To aid these processes, it is necessary to have reliable and valid psychological measures. This is an investigation of the clinical validity and use of the House-Tree-Person (HTP) projective drawing, a widely used diagnostic tool, in the assessment of child sexual abuse. HTP drawings were collected archivally from a sample of sexually abused children (n = 47) and a nonabused comparison sample (n = 82). The two samples were grossly matched for gender, ethnicity, age, and socioeconomic status. The protocols were scored using a quantitative scoring system. The data were analyzed using a discriminant function analysis. Group membership could not be predicted based on a total HTP score.

  18. A photometric high-throughput method for identification of electrochemically active bacteria using a WO3 nanocluster probe.

    PubMed

    Yuan, Shi-Jie; He, Hui; Sheng, Guo-Ping; Chen, Jie-Jie; Tong, Zhong-Hua; Cheng, Yuan-Yuan; Li, Wen-Wei; Lin, Zhi-Qi; Zhang, Feng; Yu, Han-Qing

    2013-01-01

    Electrochemically active bacteria (EAB) are ubiquitous in the environment and have important applications in the fields of biogeochemistry, environmental science, microbiology, and bioenergy. However, rapid and sensitive methods for EAB identification and for the evaluation of their extracellular electron transfer ability are still lacking. Herein we report a novel photometric method for the visual detection of EAB that uses an electrochromic material, WO3 nanoclusters, as the probe. The method allowed rapid identification of EAB within 5 min and a quantitative evaluation of their extracellular electron transfer abilities. In addition, it was also successfully applied to the isolation of EAB from environmental samples. Owing to its rapidity, high reliability, easy operation, and low cost, this method has high potential for practical implementation in EAB detection and investigation.

  19. Quantitative Accelerated Life Testing of MEMS Accelerometers

    PubMed Central

    Bâzu, Marius; Gălăţeanu, Lucian; Ilian, Virgil Emil; Loicq, Jerome; Habraken, Serge; Collette, Jean-Paul

    2007-01-01

    Quantitative Accelerated Life Testing (QALT) is a solution for assessing the reliability of Micro Electro Mechanical Systems (MEMS). A procedure for QALT is shown in this paper and an attempt to assess the reliability level for a batch of MEMS accelerometers is reported. The testing plan is application-driven and contains combined tests: thermal (high temperature) and mechanical stress. Two variants of mechanical stress are used: vibration (at a fixed frequency) and tilting. Original equipment for testing under tilting and high temperature is used. Tilting is appropriate as an application-driven stress, because the tilt movement is a natural environment for devices used in automotive and aerospace applications. Tilting is also used by MEMS accelerometers in anti-theft systems. The test results demonstrated the excellent reliability of the studied devices, the failure rate in the “worst case” being smaller than 10⁻⁷ h⁻¹. PMID:28903265
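    The abstract does not state how test time at elevated temperature was mapped back to use conditions; a common QALT choice is an Arrhenius acceleration factor, sketched below with an assumed activation energy, use temperature, and stress temperature.

    ```python
    import math

    K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV/K

    def arrhenius_af(ea_ev, t_use_c, t_stress_c):
        """Acceleration factor between a high-temperature test and use conditions."""
        t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
        return math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

    af = arrhenius_af(ea_ev=0.7, t_use_c=40.0, t_stress_c=125.0)
    equivalent_use_hours = 2000.0 * af     # 2000 h of testing at the stress temperature
    print(f"AF = {af:.0f}, equivalent field time ~ {equivalent_use_hours:.2e} h")
    ```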

  20. Qualitative Evaluation Methods in Ethics Education: A Systematic Review and Analysis of Best Practices.

    PubMed

    Watts, Logan L; Todd, E Michelle; Mulhearn, Tyler J; Medeiros, Kelsey E; Mumford, Michael D; Connelly, Shane

    2017-01-01

    Although qualitative research offers some unique advantages over quantitative research, qualitative methods are rarely employed in the evaluation of ethics education programs and are often criticized for a lack of rigor. This systematic review investigated the use of qualitative methods in studies of ethics education. Following a review of the literature in which 24 studies were identified, each study was coded based on 16 best practices characteristics in qualitative research. General thematic analysis and grounded theory were found to be the dominant approaches used. Researchers are effectively executing a number of best practices, such as using direct data sources, structured data collection instruments, non-leading questioning, and expert raters. However, other best practices were rarely present in the courses reviewed, such as collecting data using multiple sources, methods, raters, and timepoints, evaluating reliability, and employing triangulation analyses to assess convergence. Recommendations are presented for improving future qualitative research studies in ethics education.

  1. What do we gain with Probabilistic Flood Loss Models?

    NASA Astrophysics Data System (ADS)

    Schroeter, K.; Kreibich, H.; Vogel, K.; Merz, B.; Lüdtke, S.

    2015-12-01

    The reliability of flood loss models is a prerequisite for their practical usefulness. Oftentimes, traditional uni-variate damage models, such as depth-damage curves, fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising ways to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, and traditional stage-damage functions cast in a probabilistic framework. For model evaluation we use empirical damage data from computer-aided telephone interviews compiled after the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models and the remaining records are used to evaluate their predictive performance. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and reliability, represented by the proportion of observations that fall within the 5%- to 95%-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
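    As an illustration of the split-sample evaluation described above, the sketch below fits a bagged tree ensemble to synthetic damage data and measures bias, mean absolute error, and the coverage of a 5%-95% interval taken from the per-tree predictions; the data, features, and the way the predictive interval is formed are assumptions, not the study's setup.

    ```python
    import numpy as np
    from sklearn.ensemble import BaggingRegressor          # decision trees by default
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    X = rng.uniform(0, 1, size=(800, 4))                    # e.g. depth, duration, ...
    y = np.clip(0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.1, 800), 0, 1)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = BaggingRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Predictive interval from the spread of the individual tree predictions
    per_tree = np.stack([tree.predict(X_test) for tree in model.estimators_])
    lo, hi = np.quantile(per_tree, 0.05, axis=0), np.quantile(per_tree, 0.95, axis=0)

    pred = model.predict(X_test)
    bias = float(np.mean(pred - y_test))                    # systematic deviation
    mae = float(np.mean(np.abs(pred - y_test)))             # precision
    coverage = float(np.mean((y_test >= lo) & (y_test <= hi)))   # reliability
    print(f"bias={bias:.3f}  MAE={mae:.3f}  coverage={coverage:.2f}")
    ```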

  2. Confronting uncertainty in flood damage predictions

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno

    2015-04-01

    Reliable flood damage models are a prerequisite for the practical usefulness of model results. Traditional uni-variate damage models, such as depth-damage curves, often fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising candidates to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data compiled from computer-aided telephone interviews conducted after the floods of 2002, 2005 and 2006 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models and the remaining records are used to evaluate their predictive performance. Further, we stratify the sample by catchment, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviation (mean bias), precision (mean absolute error) and reliability, the latter represented by the proportion of observations that fall within the 5%-95% quantile predictive interval. The reliability of the probabilistic predictions decreases only slightly in the validation runs and achieves very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.

  3. Measuring the reliability and validity of the Greek edition of the Diabetes Quality of Life Brief Clinical Inventory.

    PubMed

    Rekleiti, Maria; Souliotis, Kyriakos; Sarafis, Pavlos; Kyriazis, Ioannis; Tsironi, Maria

    2018-06-01

    The present study examines the validity and reliability of the Greek edition of the DQOL-BCI. The DQOL-BCI includes 15 items rated on a 5-point Likert-type scale, plus two general items. The translation process was conducted in conformity with the EuroQol group guidelines. A non-random sample of 65 patients diagnosed with type I or type II diabetes was selected. The data-collection instrument was the translated version of the DQOL-BCI, supplemented with the interviewees' demographic characteristics. The content validity of the DQOL-BCI was re-examined by a panel of five experts, both qualitatively and quantitatively. The questionnaire was completed via personal interview. The final sample consisted of 58 people (35 men and 23 women, 59.9 ± 10.9 years). The translation was found appropriate for the particularities of the Greek language and culture. The largest deviation in values was observed for QOL1 (1.71) compared with QOL6 (2.98), and the difference between the standard deviations is close to 0.6. The statistical results showed satisfactory content validity and high construct validity, while the high Cronbach's alpha (0.95) indicates high reliability and internal consistency. The Greek version of the DQOL-BCI has acceptable psychometric properties and demonstrates high internal reliability and satisfactory construct validity, which allows its use as an important tool for evaluating the quality of life of diabetic patients in relation to their health. Copyright © 2018. Published by Elsevier B.V.
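
    A minimal sketch of how an internal-consistency coefficient such as the Cronbach's alpha reported above (0.95) is computed from item-level responses. The simulated 58 x 15 response matrix is an illustrative assumption, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)

# illustrative 5-point Likert responses: 58 respondents x 15 items
rng = np.random.default_rng(1)
latent = rng.normal(size=(58, 1))                       # shared trait drives all items
responses = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(58, 15))), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```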

  4. A Case Study on Improving Intensive Care Unit (ICU) Services Reliability: By Using Process Failure Mode and Effects Analysis (PFMEA)

    PubMed Central

    Yousefinezhadi, Taraneh; Jannesar Nobari, Farnaz Attar; Goodari, Faranak Behzadi; Arab, Mohammad

    2016-01-01

    Introduction: In any complex human system, human error is inevitable and cannot be eliminated by blaming wrongdoers. With the aim of improving the reliability of hospital Intensive Care Units (ICUs), this research identifies and analyzes ICU process failure modes from the standpoint of a systematic approach to errors. Methods: In this descriptive study, data were gathered qualitatively through observations, document reviews, and Focus Group Discussions (FGDs) with the process owners in two selected ICUs in Tehran in 2014. Data analysis was quantitative, based on each failure's Risk Priority Number (RPN) according to the Failure Modes and Effects Analysis (FMEA) method; in addition, some failure causes were analyzed using the qualitative Eindhoven Classification Model (ECM). Results: Through the FMEA methodology, 378 potential failure modes from 180 ICU activities in hospital A and 184 potential failures from 99 ICU activities in hospital B were identified and evaluated. With 90% reliability (RPN ≥ 100), a total of 18 failures in hospital A and 42 in hospital B were identified as non-acceptable risks, and their causes were then analyzed with the ECM. Conclusions: Applying modified PFMEA to improve the process reliability of two selected ICUs in two different kinds of hospitals shows that this method empowers staff to identify, evaluate, prioritize and analyze all potential failure modes, and makes them eager to identify causes, recommend corrective actions and participate in process improvement without feeling blamed by top management. Moreover, by combining FMEA and ECM, team members can readily identify failure causes from a healthcare perspective. PMID:27157162
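
    A minimal sketch of the RPN arithmetic behind the screening described above: each failure mode is scored for severity, occurrence, and detectability, their product gives the Risk Priority Number, and failures with RPN >= 100 are flagged as non-acceptable risks. The failure modes and ratings below are invented for illustration, not those identified in the study.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int     # 1-10
    occurrence: int   # 1-10
    detection: int    # 1-10 (10 = hardest to detect)

    @property
    def rpn(self) -> int:
        # Risk Priority Number = severity x occurrence x detection
        return self.severity * self.occurrence * self.detection

# hypothetical ICU process failure modes
modes = [
    FailureMode("wrong ventilator setting", severity=9, occurrence=3, detection=4),
    FailureMode("delayed lab result transfer", severity=6, occurrence=5, detection=3),
    FailureMode("medication dose transcription error", severity=8, occurrence=4, detection=5),
]

THRESHOLD = 100  # cut-off used in the study for non-acceptable risks
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    status = "non-acceptable" if m.rpn >= THRESHOLD else "acceptable"
    print(f"{m.name:40s} RPN={m.rpn:3d}  {status}")
```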

  5. Selection of reliable reference genes for gene expression studies in Trichoderma afroharzianum LTR-2 under oxalic acid stress.

    PubMed

    Lyu, Yuping; Wu, Xiaoqing; Ren, He; Zhou, Fangyuan; Zhou, Hongzi; Zhang, Xinjian; Yang, Hetong

    2017-10-01

    An appropriate reference gene is required to get reliable results from gene expression analysis by quantitative real-time reverse transcription PCR (qRT-PCR). In order to identify stable and reliable reference genes in Trichoderma afroharzianum under oxalic acid (OA) stress, six commonly used housekeeping genes, i.e., elongation factor 1, ubiquitin, ubiquitin-conjugating enzyme, glyceraldehyde-3-phosphate dehydrogenase, α-tubulin, and actin, from the effective biocontrol isolate T. afroharzianum strain LTR-2 were tested for their expression during growth in liquid culture amended with OA. Four in silico programs (comparative ΔCt, NormFinder, geNorm and BestKeeper) were used to evaluate the expression stabilities of the six candidate reference genes. The elongation factor 1 gene EF-1 was identified as the most stably expressed reference gene, and was used as the normalizer to quantify the expression level of the oxalate decarboxylase coding gene OXDC in T. afroharzianum strain LTR-2 under OA stress. The result showed that the expression of OXDC was significantly up-regulated as expected. This study provides an effective method to quantify expression changes of target genes in T. afroharzianum under OA stress. Copyright © 2017 Elsevier B.V. All rights reserved.
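
    A minimal sketch of the standard 2^-ΔΔCt calculation that underlies this kind of normalization, using the reference gene (here EF-1) to normalize the target (here OXDC) and a control culture as the calibrator. The Ct values are invented for illustration; the study's actual analysis pipeline may differ.

```python
def relative_expression(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method (Livak & Schmittgen)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # treated vs control
    return 2.0 ** (-dd_ct)

# hypothetical mean Ct values for OXDC and EF-1, with and without oxalic acid
fold_change = relative_expression(
    ct_target_treated=22.1, ct_ref_treated=18.0,   # OA-amended culture
    ct_target_control=25.4, ct_ref_control=18.2,   # control culture
)
print(f"OXDC fold change under OA stress ~ {fold_change:.1f}x")
```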

  6. Analysis instrument test on mathematical power the material geometry of space flat side for grade 8

    NASA Astrophysics Data System (ADS)

    Kusmaryono, Imam; Suyitno, Hardi; Dwijanto, Karomah, Nur

    2017-08-01

    The main aim of this research is to determine the quality of test items on flat-sided solid geometry for assessing students' mathematical power. A quantitative descriptive method was used. The subjects were 20 grade-8 students. The object of the research is the quality of the test items with respect to mathematical power: validity, reliability, level of difficulty and discriminating power. The mathematical power instruments tested include a written test and a questionnaire on mathematical power disposition. Data were obtained in the field in the form of test results on flat-sided solid geometry and questionnaire responses. The results show that the reliability of the test items is influenced by many factors, including the number of items, the homogeneity of the test questions, the time required, the uniformity of test-taking conditions, the homogeneity of the group, the variability of the problems, and the motivation of the individual test taker. Overall, the evaluation indicates that the test instrument can be used as a tool to measure students' mathematical power.
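
    A minimal sketch of classical item analysis for dichotomously scored items: difficulty as the proportion of correct answers, discriminating power as the corrected item-total correlation, and reliability via KR-20. The simulated 20 x 8 response matrix is an illustrative assumption, not the study's data.

```python
import numpy as np

def item_analysis(scores):
    """scores: (n_students, n_items) matrix of 0/1 item scores."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    difficulty = scores.mean(axis=0)                    # proportion answering correctly
    total = scores.sum(axis=1)
    discrimination = np.array([                         # corrected item-total correlation
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1] for j in range(k)
    ])
    p = difficulty                                      # KR-20 for dichotomous items
    kr20 = k / (k - 1) * (1 - (p * (1 - p)).sum() / total.var(ddof=1))
    return difficulty, discrimination, kr20

rng = np.random.default_rng(2)
ability = rng.normal(size=(20, 1))
items = (ability + rng.normal(scale=1.0, size=(20, 8)) > 0).astype(int)
diff, disc, rel = item_analysis(items)
print("difficulty:     ", np.round(diff, 2))
print("discrimination: ", np.round(disc, 2))
print(f"KR-20 reliability: {rel:.2f}")
```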

  7. Factor Analytic Validation of the Ford, Wolvin, and Chung Listening Competence Scale

    ERIC Educational Resources Information Center

    Mickelson, William T.; Welch, S. A.

    2012-01-01

    This research begins to independently and quantitatively validate the Ford, Wolvin, and Chung (2000) Listening Competency Scale. Reliability and Confirmatory Factor analyses were conducted on two independent samples. The reliability estimates were found to be below those reported by Ford, Wolvin, and Chung (2000) and below acceptable levels for…

  8. New horizons in mouse immunoinformatics: reliable in silico prediction of mouse class I histocompatibility major complex peptide binding affinity.

    PubMed

    Hattotuwagama, Channa K; Guan, Pingping; Doytchinova, Irini A; Flower, Darren R

    2004-11-21

    Quantitative structure-activity relationship (QSAR) analysis is a main cornerstone of modern informatic disciplines. Predictive computational models, based on QSAR technology, of peptide-major histocompatibility complex (MHC) binding affinity have now become a vital component of modern day computational immunovaccinology. Historically, such approaches have been built around semi-qualitative, classification methods, but these are now giving way to quantitative regression methods. The additive method, an established immunoinformatics technique for the quantitative prediction of peptide-protein affinity, was used here to identify the sequence dependence of peptide binding specificity for three mouse class I MHC alleles: H2-D(b), H2-K(b) and H2-K(k). As we show, in terms of reliability the resulting models represent a significant advance on existing methods. They can be used for the accurate prediction of T-cell epitopes and are freely available online ( http://www.jenner.ac.uk/MHCPred).

  9. Quantitation by Portable Gas Chromatography: Mass Spectrometry of VOCs Associated with Vapor Intrusion

    PubMed Central

    Fair, Justin D.; Bailey, William F.; Felty, Robert A.; Gifford, Amy E.; Shultes, Benjamin; Volles, Leslie H.

    2010-01-01

    Development of a robust, reliable technique that permits rapid quantitation of volatile organic chemicals is an important first step toward remediation at sites affected by vapor intrusion. This paper describes the development of an analytical method that allows rapid and precise identification and quantitation of halogenated and non-halogenated contaminants commonly found at the ppbv level at sites where vapor intrusion is a concern. PMID:20885969

  10. Planning Robot-Control Parameters With Qualitative Reasoning

    NASA Technical Reports Server (NTRS)

    Peters, Stephen F.

    1993-01-01

    Qualitative-reasoning planning algorithm helps to determine quantitative parameters controlling motion of robot. Algorithm regarded as performing search in multidimensional space of control parameters from starting point to goal region in which desired result of robotic manipulation achieved. Makes use of directed graph representing qualitative physical equations describing task, and interacts, at each sampling period, with history of quantitative control parameters and sensory data, to narrow search for reliable values of quantitative control parameters.

  11. Optimization of Statistical Methods Impact on Quantitative Proteomics Data.

    PubMed

    Pursiheimo, Anna; Vehmas, Anni P; Afzal, Saira; Suomi, Tomi; Chand, Thaman; Strauss, Leena; Poutanen, Matti; Rokka, Anne; Corthals, Garry L; Elo, Laura L

    2015-10-02

    As tools for quantitative label-free mass spectrometry (MS) rapidly develop, a consensus about the best practices is not apparent. In the work described here we compared popular statistical methods for detecting differential protein expression from quantitative MS data, using both controlled experiments with known quantitative differences for specific proteins used as standards and "real" experiments where differences in protein abundance are not known a priori. Our results suggest that data-driven reproducibility-optimization can consistently produce reliable differential expression rankings for label-free proteome tools and is straightforward in its application.

  12. The Focinator v2-0 - Graphical Interface, Four Channels, Colocalization Analysis and Cell Phase Identification.

    PubMed

    Oeck, Sebastian; Malewicz, Nathalie M; Hurst, Sebastian; Al-Refae, Klaudia; Krysztofiak, Adam; Jendrossek, Verena

    2017-07-01

    The quantitative analysis of foci plays an important role in various cell biological methods. In the fields of radiation biology and experimental oncology, the effect of ionizing radiation, chemotherapy or molecularly targeted drugs on DNA damage induction and repair is frequently assessed by analyzing protein clusters or phosphorylated proteins recruited to so-called repair foci at DNA damage sites, involving for example γ-H2A.X, 53BP1 or RAD51. We recently developed "The Focinator" as a reliable and fast tool for automated quantitative and qualitative analysis of nuclei and DNA damage foci. The refined software is now even more user-friendly due to a graphical interface and further features, including an R-script-based mode for automated image opening, file naming, progress monitoring and error reporting. Consequently, the evaluation no longer requires the presence of the operator after initial parameter definition. Moreover, the Focinator v2-0 can now perform multi-channel analysis of four channels and evaluate protein-protein colocalization by comparing up to three foci channels. This enables, for example, the quantification of foci in cells of a specific cell cycle phase.

  13. Development and Evaluation of a Parallel Reaction Monitoring Strategy for Large-Scale Targeted Metabolomics Quantification.

    PubMed

    Zhou, Juntuo; Liu, Huiying; Liu, Yang; Liu, Jia; Zhao, Xuyang; Yin, Yuxin

    2016-04-19

    Recent advances in mass spectrometers, which have yielded higher resolution and faster scanning speeds, have expanded their application in metabolomics of diverse diseases. Using a quadrupole-Orbitrap LC-MS system, we developed an efficient large-scale quantitative method targeting 237 metabolites involved in various metabolic pathways using scheduled parallel reaction monitoring (PRM). We assessed the dynamic range, linearity, reproducibility, and system suitability of the PRM assay by measuring concentration curves, biological samples, and clinical serum samples. The quantification performance of the PRM and MS1-based assays on the Q Exactive was compared, as was that of the MRM assay on the QTRAP 6500. The PRM assay monitoring 237 polar metabolites showed greater reproducibility and quantitative accuracy than MS1-based quantification, and also greater flexibility in postacquisition assay refinement than the MRM assay on the QTRAP 6500. We present a workflow for convenient PRM data processing using Skyline software, which is free of charge. In this study we have established a reliable PRM methodology on a quadrupole-Orbitrap platform for large-scale targeted metabolomics, which provides a new choice for basic and clinical metabolomics studies.

  14. Inertial Sensor-Based Motion Analysis of Lower Limbs for Rehabilitation Treatments

    PubMed Central

    Sun, Tongyang; Duan, Lihong; Wang, Yulong

    2017-01-01

    Diagnosis of the hemiplegic rehabilitation state by therapists can be biased by their subjective experience, which may deteriorate the rehabilitation effect. In order to improve this situation, a quantitative evaluation is proposed. Though many motion analysis systems are available, they are too complicated for practical application by therapists. In this paper, a method for detecting the motion of the human lower limbs, including all degrees of freedom (DOFs), via inertial sensors is proposed, which permits analysis of the patient's motion ability. The method is applicable to arbitrary walking directions and tracks of the persons under study, and its results are unbiased compared with therapists' qualitative estimates. Using a simplified mathematical model of the human body, the rotation angles of each lower limb joint are calculated from the signals acquired by the inertial sensors. Finally, the rotation angle versus joint displacement curves are constructed, and estimated values of joint motion angle and motion ability are obtained. Experimental verification of the proposed motion detection and analysis method was performed, which proved that it can efficiently detect differences between the motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state. PMID:29065575

  15. Quantitative multi-modal NDT data analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heideklang, René; Shokouhi, Parisa

    2014-02-18

    A single NDT technique is often not adequate to provide assessments about the integrity of test objects with the required coverage or accuracy. In such situations, it is often resorted to multi-modal testing, where complementary and overlapping information from different NDT techniques are combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task which involves several diverse fields of research, including signal and image processing, statistics and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular, whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of Eddy Current, GMR and Thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity.

  16. A Statistics-based Platform for Quantitative N-terminome Analysis and Identification of Protease Cleavage Products*

    PubMed Central

    auf dem Keller, Ulrich; Prudova, Anna; Gioia, Magda; Butler, Georgina S.; Overall, Christopher M.

    2010-01-01

    Terminal amine isotopic labeling of substrates (TAILS), our recently introduced platform for quantitative N-terminome analysis, enables wide dynamic range identification of original mature protein N-termini and protease cleavage products. Modifying TAILS by use of isobaric tag for relative and absolute quantification (iTRAQ)-like labels for quantification together with a robust statistical classifier derived from experimental protease cleavage data, we report reliable and statistically valid identification of proteolytic events in complex biological systems in MS2 mode. The statistical classifier is supported by a novel parameter evaluating ion intensity-dependent quantification confidences of single peptide quantifications, the quantification confidence factor (QCF). Furthermore, the isoform assignment score (IAS) is introduced, a new scoring system for the evaluation of single peptide-to-protein assignments based on high confidence protein identifications in the same sample prior to negative selection enrichment of N-terminal peptides. By these approaches, we identified and validated, in addition to known substrates, low abundance novel bioactive MMP-2 targets including the plasminogen receptor S100A10 (p11) and the proinflammatory cytokine proEMAP/p43 that were previously undescribed. PMID:20305283

  17. Quantitative evaluation of the matrix effect in bioanalytical methods based on LC-MS: A comparison of two approaches.

    PubMed

    Rudzki, Piotr J; Gniazdowska, Elżbieta; Buś-Kwaśnik, Katarzyna

    2018-06-05

    Liquid chromatography coupled to mass spectrometry (LC-MS) is a powerful tool for studying pharmacokinetics and toxicokinetics. Reliable bioanalysis requires characterization of the matrix effect, i.e., the influence of endogenous or exogenous compounds on the analyte signal intensity. We have compared two methods for quantitating the matrix effect. The CVs(%) of internal-standard-normalized matrix factors recommended by the European Medicines Agency were evaluated against internal-standard-normalized relative matrix effects derived from Matuszewski et al. (2003). Both methods use post-extraction spiked samples, but matrix factors also require neat solutions. We tested both approaches using analytes of diverse chemical structures. The study did not reveal relevant differences in the results obtained with the two calculation methods. After normalization with the internal standard, the CV(%) of the matrix factor was on average 0.5% higher than the corresponding relative matrix effect. The method adopted by the European Medicines Agency seems to be slightly more conservative in the analyzed datasets. Nine analytes of different structures enabled a general overview of the problem; still, further studies are encouraged to confirm our observations. Copyright © 2018 Elsevier B.V. All rights reserved.
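
    A minimal sketch contrasting the two calculations compared above: the CV% of internal-standard-normalized matrix factors (which needs neat-solution responses) versus the variability of the internal-standard-normalized response across matrix lots (a relative matrix effect in the spirit of Matuszewski et al.). All peak areas below are invented assumptions, and the exact definitions used in the paper may differ in detail.

```python
import numpy as np

def cv_percent(x):
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# hypothetical peak areas for one analyte and its internal standard (IS),
# in post-extraction spiked samples from six matrix lots and in neat solution
analyte_matrix = np.array([10450, 9980, 10210, 10890, 9750, 10120], dtype=float)
is_matrix      = np.array([52100, 50300, 51000, 54200, 49100, 50800], dtype=float)
analyte_neat, is_neat = 10300.0, 51500.0

# EMA-style: IS-normalized matrix factor per lot, then its CV%
is_normalized_mf = (analyte_matrix / analyte_neat) / (is_matrix / is_neat)
print(f"CV% of IS-normalized matrix factor:   {cv_percent(is_normalized_mf):.1f}%")

# Matuszewski-style (as applied here): variability of the IS-normalized
# response across lots, computed without any neat-solution measurement
response_ratio = analyte_matrix / is_matrix
print(f"CV% of IS-normalized matrix response: {cv_percent(response_ratio):.1f}%")
```

    Note that with a single neat-solution measurement the two CVs coincide exactly, because the neat terms cancel as a constant factor; in practice, differences like the 0.5% reported above stem from replicate-level variation in the neat-solution responses and from details of how each quantity is aggregated.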

  18. Identification and evaluation of reference genes for qRT-PCR studies in Lentinula edodes

    PubMed Central

    Qin, Peng; He, Maolan; Yu, Xiumei; Zhao, Ke; Zhang, Xiaoping; Ma, Menggen; Chen, Qiang; Chen, Xiaoqiong; Zeng, Xianfu; Gu, Yunfu

    2018-01-01

    Lentinula edodes (shiitake mushroom) is a common edible mushroom with a number of potential therapeutic and nutritional applications. It contains various medically important molecules, such as polysaccharides, terpenoids, sterols, and lipids. Quantitative real-time polymerase chain reaction (qRT-PCR) is a powerful tool to analyze the mechanisms underlying the biosynthetic pathways of these substances. qRT-PCR is used for accurate analyses of transcript levels owing to its rapidity, sensitivity, and reliability. However, its accuracy and reliability for the quantification of transcripts rely on the expression stability of the reference genes used for data normalization. To ensure the reliability of gene expression analyses using qRT-PCR in L. edodes molecular biology research, it is necessary to systematically evaluate reference genes. In the current study, ten potential reference genes were selected from L. edodes genomic data and their expression levels were measured by qRT-PCR using various samples. The expression stability of each candidate gene was analyzed by three commonly used software packages: geNorm, NormFinder, and BestKeeper. Based on the results, Rpl4 was the most stable reference gene across all experimental conditions, and Atu was the most stable gene among strains. 18S was found to be the best reference gene for different developmental stages, and Rpl4 was the most stably expressed gene under various nutrient conditions. The present work will contribute to qRT-PCR studies in L. edodes. PMID:29293626

  19. Identification and evaluation of reference genes for qRT-PCR studies in Lentinula edodes.

    PubMed

    Xiang, Quanju; Li, Jin; Qin, Peng; He, Maolan; Yu, Xiumei; Zhao, Ke; Zhang, Xiaoping; Ma, Menggen; Chen, Qiang; Chen, Xiaoqiong; Zeng, Xianfu; Gu, Yunfu

    2018-01-01

    Lentinula edodes (shiitake mushroom) is a common edible mushroom with a number of potential therapeutic and nutritional applications. It contains various medically important molecules, such as polysaccharides, terpenoids, sterols, and lipids. Quantitative real-time polymerase chain reaction (qRT-PCR) is a powerful tool to analyze the mechanisms underlying the biosynthetic pathways of these substances. qRT-PCR is used for accurate analyses of transcript levels owing to its rapidity, sensitivity, and reliability. However, its accuracy and reliability for the quantification of transcripts rely on the expression stability of the reference genes used for data normalization. To ensure the reliability of gene expression analyses using qRT-PCR in L. edodes molecular biology research, it is necessary to systematically evaluate reference genes. In the current study, ten potential reference genes were selected from L. edodes genomic data and their expression levels were measured by qRT-PCR using various samples. The expression stability of each candidate gene was analyzed by three commonly used software packages: geNorm, NormFinder, and BestKeeper. Based on the results, Rpl4 was the most stable reference gene across all experimental conditions, and Atu was the most stable gene among strains. 18S was found to be the best reference gene for different developmental stages, and Rpl4 was the most stably expressed gene under various nutrient conditions. The present work will contribute to qRT-PCR studies in L. edodes.

  20. Experimental Null Method to Guide the Development of Technical Procedures and to Control False-Positive Discovery in Quantitative Proteomics.

    PubMed

    Shen, Xiaomeng; Hu, Qiang; Li, Jun; Wang, Jianmin; Qu, Jun

    2015-10-02

    Comprehensive and accurate evaluation of data quality and false-positive biomarker discovery is critical to direct the method development/optimization for quantitative proteomics, which nonetheless remains challenging largely due to the high complexity and unique features of proteomic data. Here we describe an experimental null (EN) method to address this need. Because the method experimentally measures the null distribution (either technical or biological replicates) using the same proteomic samples, the same procedures and the same batch as the case-vs-control experiment, it correctly reflects the collective effects of technical variability (e.g., variation/bias in sample preparation, LC-MS analysis, and data processing) and project-specific features (e.g., characteristics of the proteome and biological variation) on the performance of quantitative analysis. As a proof of concept, we employed the EN method to assess the quantitative accuracy and precision and the ability to quantify subtle ratio changes between groups using different experimental and data-processing approaches and in various cellular and tissue proteomes. It was found that choices of quantitative features, sample size, experimental design, data-processing strategies, and quality of chromatographic separation can profoundly affect quantitative precision and accuracy of label-free quantification. The EN method was also demonstrated as a practical tool to determine the optimal experimental parameters and rational ratio cutoff for reliable protein quantification in specific proteomic experiments, for example, to identify the necessary number of technical/biological replicates per group that affords sufficient power for discovery. Furthermore, we assessed the ability of the EN method to estimate levels of false-positives in the discovery of altered proteins, using two concocted sample sets mimicking proteomic profiling using technical and biological replicates, respectively, where the true-positives/negatives are known and span a wide concentration range. It was observed that the EN method correctly reflects the null distribution in a proteomic system and accurately measures the false altered-protein discovery rate (FADR). In summary, the EN method provides a straightforward, practical, and accurate alternative to statistics-based approaches for the development and evaluation of proteomic experiments and can be universally adapted to various types of quantitative techniques.
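
    A minimal sketch of the core idea: run the same significance/ratio screening on a null comparison built from replicates of the same sample and on the case-vs-control comparison, then use the null hit count to estimate the false altered-protein discovery rate. The data, thresholds, and the simple FADR ratio below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_proteins = 2000

def count_hits(group_a, group_b, p_cut=0.05, ratio_cut=1.5):
    """Count proteins passing a per-protein t-test and fold-change cut-off."""
    _, p = stats.ttest_ind(group_a, group_b, axis=1)
    fold = group_a.mean(axis=1) / group_b.mean(axis=1)
    return int(np.sum((p < p_cut) & (np.abs(np.log2(fold)) > np.log2(ratio_cut))))

# experimental null: two pseudo-groups of technical replicates of the same sample
null_a = rng.lognormal(mean=10, sigma=0.15, size=(n_proteins, 4))
null_b = rng.lognormal(mean=10, sigma=0.15, size=(n_proteins, 4))
null_hits = count_hits(null_a, null_b)

# case-vs-control comparison processed with the same procedures
case = rng.lognormal(mean=10, sigma=0.15, size=(n_proteins, 4))
case[:100] *= 2.0                                   # 100 truly altered proteins
control = rng.lognormal(mean=10, sigma=0.15, size=(n_proteins, 4))
case_hits = count_hits(case, control)

fadr = null_hits / max(case_hits, 1)                # crude FADR estimate
print(f"null hits: {null_hits}, case hits: {case_hits}, estimated FADR: {fadr:.1%}")
```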

  1. Development of a patient safety climate survey for Chinese hospitals: cross-national adaptation and psychometric evaluation.

    PubMed

    Zhu, Junya; Li, Liping; Zhao, Hailei; Han, Guangshu; Wu, Albert W; Weingart, Saul N

    2014-10-01

    Existing patient safety climate instruments, most of which have been developed in the USA, may not accurately reflect the conditions in the healthcare systems of other countries. To develop and evaluate a patient safety climate instrument for healthcare workers in Chinese hospitals. Based on a review of existing instruments, expert panel review, focus groups and cognitive interviews, we developed items relevant to patient safety climate in Chinese hospitals. The draft instrument was distributed to 1700 hospital workers from 54 units in six hospitals in five Chinese cities between July and October 2011, and 1464 completed surveys were received. We performed exploratory and confirmatory factor analyses and estimated internal consistency reliability, within-unit agreement, between-unit variation, unit-mean reliability, correlation between multi-item composites, and association between the composites and two single items of perceived safety. The final instrument included 34 items organised into nine composites: institutional commitment to safety, unit management support for safety, organisational learning, safety system, adequacy of safety arrangements, error reporting, communication and peer support, teamwork and staffing. All composites had acceptable unit-mean reliabilities (≥0.74) and within-unit agreement (Rwg ≥0.71), and exhibited significant between-unit variation with intraclass correlation coefficients ranging from 9% to 21%. Internal consistency reliabilities ranged from 0.59 to 0.88 and were ≥0.70 for eight of the nine composites. Correlations between composites ranged from 0.27 to 0.73. All composites were positively and significantly associated with the two perceived safety items. The Chinese Hospital Survey on Patient Safety Climate demonstrates adequate dimensionality, reliability and validity. The integration of qualitative and quantitative methods is essential to produce an instrument that is culturally appropriate for Chinese hospitals. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  2. Quantitative Evaluation of Aged AISI 316L Stainless Steel Sensitization to Intergranular Corrosion: Comparison Between Microstructural Electrochemical and Analytical Methods

    NASA Astrophysics Data System (ADS)

    Sidhom, H.; Amadou, T.; Sahlaoui, H.; Braham, C.

    2007-06-01

    The evaluation of the degree of sensitization (DOS) to intergranular corrosion (IGC) of a commercial AISI 316L austenitic stainless steel aged at temperatures ranging from 550 °C to 800 °C during 100 to 80,000 hours was carried out using three different assessment methods. (1) The microstructural method coupled with the Strauss standard test (ASTM A262). This method establishes the kinetics of the precipitation phenomenon under different aging conditions, by transmission electronic microscope (TEM) examination of thin foils and electron diffraction. The subsequent chromium-depleted zones are characterized by X-ray microanalysis using scanning transmission electronic microscope (STEM). The superimposition of microstructural time-temperature-precipitation (TTP) and ASTM A262 time-temperature-sensitization (TTS) diagrams provides the relationship between aged microstructure and IGC. Moreover, by considering the chromium-depleted zone characteristics, sensitization and desensitization criteria could be established. (2) The electrochemical method involving the double loop-electrochemical potentiokinetic reactivation (DL-EPR) test. The operating conditions of this test were initially optimized using the experimental design method on the bases of the reliability, the selectivity, and the reproducibility of test responses for both annealed and sensitized steels. The TTS diagram of the AISI 316L stainless steel was established using this method. This diagram offers a quantitative assessment of the DOS and a possibility to appreciate the time-temperature equivalence of the IGC sensitization and desensitization. (3) The analytical method based on the chromium diffusion models. Using the IGC sensitization and desensitization criteria established by the microstructural method, numerical solving of the chromium diffusion equations leads to a calculated AISI 316L TTS diagram. Comparison of these three methods gives a clear advantage to the nondestructive DL-EPR test when it is used with its optimized operating conditions. This quantitative method is simple to perform; it is fast, reliable, economical, and presents the best ability to detect the lowest DOS to IGC. For these reasons, this method can be considered as a serious candidate for IGC checking of stainless steel components of industrial plants.

  3. Impaired limb position sense after stroke: a quantitative test for clinical use.

    PubMed

    Carey, L M; Oke, L E; Matyas, T A

    1996-12-01

    A quantitative measure of wrist position sense was developed to advance clinical measurement of proprioceptive limb sensibility after stroke. Test-retest reliability, normative standards, and ability to discriminate impaired and unimpaired performance were investigated. Retest reliability was assessed over three sessions, and a matched-pairs study compared stroke and unimpaired subjects. Both wrists were tested, in counterbalanced order. Patients were tested in hospital-based rehabilitation units. Reliability was investigated on a consecutive sample of 35 adult stroke patients with a range of proprioceptive discrimination abilities and no evidence of neglect. A consecutive sample of 50 stroke patients and convenience sample of 50 healthy volunteers, matched for age, sex, and hand dominance, were tested in the normative-discriminative study. Age and sex were representative of the adult stroke population. The test required matching of imposed wrist positions using a pointer aligned with the axis of movement and a protractor scale. The test was reliable (r = .88 and .92) and observed changes of 8 degrees can be interpreted, with 95% confidence, as genuine. Scores of healthy volunteers ranged from 3.1 degrees to 10.9 degrees average error. The criterion of impairment was conservatively defined as 11 degrees (+/-4.8 degrees) average error. Impaired and unimpaired performance were well differentiated. Clinicians can confidently and quantitatively sample one aspect of proprioceptive sensibility in stroke patients using the wrist position sense test. Development of tests on other joints using the present approach is supported by our findings.

  4. The Test of Masticating and Swallowing Solids (TOMASS): Reliability, Validity and International Normative Data

    ERIC Educational Resources Information Center

    Huckabee, Maggie-Lee; McIntosh, Theresa; Fuller, Laura; Curry, Morgan; Thomas, Paige; Walshe, Margaret; McCague, Ellen; Battel, Irene; Nogueira, Dalia; Frank, Ulrike; van den Engel-Hoek, Lenie; Sella-Weiss, Oshrat

    2018-01-01

    Background: Clinical swallowing assessment is largely limited to qualitative assessment of behavioural observations. There are limited quantitative data that can be compared with a healthy population for identification of impairment. The Test of Masticating and Swallowing Solids (TOMASS) was developed as a quantitative assessment of solid bolus…

  5. A Methodological Self-Study of Quantitizing: Negotiating Meaning and Revealing Multiplicity

    ERIC Educational Resources Information Center

    Seltzer-Kelly, Deborah; Westwood, Sean J.; Pena-Guzman, David M.

    2012-01-01

    This inquiry developed during the process of "quantitizing" qualitative data the authors had gathered for a mixed methods curriculum efficacy study. Rather than providing the intended rigor to their data coding process, their use of an intercoder reliability metric prompted their investigation of the multiplicity and messiness that, as they…

  6. Comprehensive Comparison of Self-Administered Questionnaires for Measuring Quantitative Autistic Traits in Adults

    ERIC Educational Resources Information Center

    Nishiyama, Takeshi; Suzuki, Masako; Adachi, Katsunori; Sumi, Satoshi; Okada, Kensuke; Kishino, Hirohisa; Sakai, Saeko; Kamio, Yoko; Kojima, Masayo; Suzuki, Sadao; Kanne, Stephen M.

    2014-01-01

    We comprehensively compared all available questionnaires for measuring quantitative autistic traits (QATs) in terms of reliability and construct validity in 3,147 non-clinical and 60 clinical subjects with normal intelligence. We examined four full-length forms, the Subthreshold Autism Trait Questionnaire (SATQ), the Broader Autism Phenotype…

  7. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion systems mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources are listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  8. Technical and financial evaluation of assays for progesterone in canine practice in the UK.

    PubMed

    Moxon, R; Copley, D; England, G C W

    2010-10-02

    The concentration of progesterone was measured in 60 plasma samples from bitches at various stages of the oestrous cycle, using commercially available quantitative and semi-quantitative ELISA test kits, as well as by two commercial laboratories undertaking radioimmunoassay (RIA). The RIA, which was assumed to be the 'gold standard' in terms of reliability and accuracy, was the most expensive method when analysing more than one sample per week, and had the longest delay in obtaining results, but had minimal requirements for practice staff time. When compared with the RIA, the quantitative ELISA had a strong positive correlation (r=0.97, P<0.05) and a sensitivity and specificity of 70.6 per cent and 100.0 per cent, respectively, and positive and negative predictive values of 100.0 per cent and 71.0 per cent, respectively, with an overall accuracy of 90.0 per cent. This method was the least expensive when analysing five or more samples per week, but had longer turnaround times than that of the semi-quantitative ELISA and required more staff time. When compared with the RIA, the semi-quantitative ELISA had a sensitivity and specificity of 100.0 per cent and 95.5 per cent, respectively, and positive and negative predictive values of 73.9 per cent and 77.8 per cent, respectively, with an overall accuracy of 89.2 per cent. This method was more expensive than the quantitative ELISA when analysing five or more samples per week, but had the shortest turnaround time and low requirements in terms of staff time.

  9. Clinical use of quantitative cardiac perfusion PET: rationale, modalities and possible indications. Position paper of the Cardiovascular Committee of the European Association of Nuclear Medicine (EANM).

    PubMed

    Sciagrà, Roberto; Passeri, Alessandro; Bucerius, Jan; Verberne, Hein J; Slart, Riemer H J A; Lindner, Oliver; Gimelli, Alessia; Hyafil, Fabien; Agostini, Denis; Übleis, Christopher; Hacker, Marcus

    2016-07-01

    Until recently, PET was regarded as a luxurious way of performing myocardial perfusion scintigraphy, with excellent image quality and diagnostic capabilities that hardly justified the additional cost and procedural effort. Quantitative perfusion PET was considered a major improvement over standard qualitative imaging, because it allows the measurement of parameters not otherwise available, but for many years its use was confined to academic and research settings. In recent years, however, several factors have contributed to the renewal of interest in quantitative perfusion PET, which has become a much more readily accessible technique due to progress in hardware and the availability of dedicated and user-friendly platforms and programs. In spite of this evolution and of the growing evidence that quantitative perfusion PET can play a role in the clinical setting, there are not yet clear indications for its clinical use. Therefore, the Cardiovascular Committee of the European Association of Nuclear Medicine, starting from the experience of its members, decided to examine the current literature on quantitative perfusion PET to (1) evaluate the rationale for its clinical use, (2) identify the main methodological requirements, (3) identify the remaining technical difficulties, (4) define the most reliable interpretation criteria, and finally (5) tentatively delineate currently acceptable and possibly appropriate clinical indications. The present position paper must be considered as a starting point aiming to promote a wider use of quantitative perfusion PET and to encourage the conception and execution of the studies needed to definitely establish its role in clinical practice.

  10. Reliability of a visual scoring system with fluorescent tracers to assess dermal pesticide exposure.

    PubMed

    Aragon, Aurora; Blanco, Luis; Lopez, Lylliam; Liden, Carola; Nise, Gun; Wesseling, Catharina

    2004-10-01

    We modified Fenske's semi-quantitative 'visual scoring system' for fluorescent tracer deposited on the skin of pesticide applicators and evaluated its reproducibility in the Nicaraguan setting. The body surface of 33 farmers, divided into 31 segments, was videotaped in the field after spraying with a pesticide solution containing a fluorescent tracer. A portable UV lamp was used for illumination in a foldaway dark room. The videos of five farmers were randomly selected. The scoring was based on a matrix with the extension of fluorescent patterns (scale 0-5) on the ordinate and intensity (scale 0-5) on the abscissa, with the product of these two ranks as the final score for each body segment (0-25). After 4 h of training, five medical students rated 155 video images and evaluated their quality. Cronbach alpha coefficients and two-way random effects intraclass correlation coefficients (ICC) with absolute agreement were computed to assess inter-rater reliability. Consistency was high (Cronbach alpha = 0.96), but the scores differed substantially between raters. The overall ICC was satisfactory [0.75; 95% confidence interval (CI) = 0.62-0.83], but it was lower for intensity (0.54; 95% CI = 0.40-0.66) and higher for extension (0.80; 95% CI = 0.71-0.86). ICCs were lowest for images with low scores that were rated as low quality, and highest for images with high scores and high quality. The inter-rater reliability coefficients indicate repeatability of the scoring system. However, field conditions for recording fluorescence should be improved to achieve higher-quality images, and training should emphasize a better mechanism for reading body areas with low contamination.
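
    A minimal sketch of a two-way random-effects, absolute-agreement, single-rater intraclass correlation coefficient (ICC(2,1)), the type of inter-rater coefficient used above, computed from a subjects-by-raters score matrix. The score matrix below is simulated and the confidence-interval computation is omitted.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    Y = np.asarray(scores, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2)    # subjects
    ss_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2)    # raters
    ss_err = np.sum((Y - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# hypothetical scores (0-25) given by 5 raters to 10 body-segment images
rng = np.random.default_rng(4)
true_score = rng.integers(0, 26, size=(10, 1)).astype(float)
ratings = np.clip(true_score + rng.normal(scale=2.0, size=(10, 5)), 0, 25)
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```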

  11. Investigation of Content and Face Validity and Reliability of Sociocultural Attitude towards Appearance Questionnaire-3 (SATAQ-3) among Female Adolescents

    PubMed Central

    Mousazadeh, Somayeh; Rakhshan, Mahnaz; Mohammadi, Fateme

    2017-01-01

    Objective: This study aimed to determine the psychometric properties of the sociocultural attitude towards appearance questionnaire in female adolescents. Method: This was a methodological study. The English version of the questionnaire was translated into Persian using the forward-backward method. Then the face validity, content validity and reliability were checked. To ensure face validity, the questionnaire was given to 25 female adolescents, a psychologist and three nurses, who were asked to evaluate the items with respect to problems, ambiguity, relevance, proper terms and grammar, and understandability. For content validity, 15 experts in psychology and nursing who met the inclusion criteria were recruited and asked to assess content validity qualitatively. To determine quantitative content validity, the content validity index and content validity ratio were calculated. Finally, the internal consistency of the items was assessed using Cronbach's alpha. Results: According to the expert judgments, the content validity ratio was 0.81 and the content validity index was 0.91. In addition, the reliability of the questionnaire was confirmed with Cronbach's alpha = 0.91, and the physical and developmental areas showed the highest reliability indices. Conclusion: The aforementioned questionnaire could be used in research to assess female adolescents' self-concept. This can be a stepping-stone towards identification of problems and improvement of adolescents' body image. PMID:28496497
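
    A minimal sketch of the two quantitative indices reported above: Lawshe's content validity ratio (CVR) per item and the item-level content validity index (I-CVI), averaged over items to a scale-level value. The panel size matches the 15 experts mentioned above, but the ratings themselves are invented.

```python
import numpy as np

def content_validity_ratio(n_essential, n_experts):
    """Lawshe's CVR = (n_e - N/2) / (N/2)."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def item_cvi(relevance_ratings):
    """I-CVI: share of experts rating the item 3 or 4 on a 4-point relevance scale."""
    return float(np.mean(np.asarray(relevance_ratings) >= 3))

# hypothetical panel of 15 experts rating 3 items
essential_counts = [14, 13, 15]          # experts judging each item "essential"
relevance = [                            # 4-point relevance ratings, one row per item
    [4, 4, 3, 4, 4, 3, 4, 4, 4, 3, 4, 4, 4, 4, 2],
    [4, 3, 4, 4, 3, 4, 4, 4, 3, 4, 4, 3, 4, 4, 4],
    [4, 4, 4, 4, 4, 4, 3, 4, 4, 4, 4, 4, 4, 4, 4],
]

cvr_items = [content_validity_ratio(n, 15) for n in essential_counts]
cvi_items = [item_cvi(row) for row in relevance]
print("CVR per item:", np.round(cvr_items, 2), "-> scale CVR:", round(float(np.mean(cvr_items)), 2))
print("I-CVI per item:", np.round(cvi_items, 2), "-> S-CVI/Ave:", round(float(np.mean(cvi_items)), 2))
```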

  12. Estimation of reliability of predictions and model applicability domain evaluation in the analysis of acute toxicity (LD50).

    PubMed

    Sazonovas, A; Japertas, P; Didziapetris, R

    2010-01-01

    This study presents a new type of acute toxicity (LD(50)) prediction that enables automated assessment of the reliability of predictions (which is synonymous with the assessment of the Model Applicability Domain as defined by the Organization for Economic Cooperation and Development). Analysis involved nearly 75,000 compounds from six animal systems (acute rat toxicity after oral and intraperitoneal administration; acute mouse toxicity after oral, intraperitoneal, intravenous, and subcutaneous administration). Fragmental Partial Least Squares (PLS) with 100 bootstraps yielded baseline predictions that were automatically corrected for non-linear effects in local chemical spaces--a combination called Global, Adjusted Locally According to Similarity (GALAS) modelling methodology. Each prediction obtained in this manner is provided with a reliability index value that depends on both compound's similarity to the training set (that accounts for similar trends in LD(50) variations within multiple bootstraps) and consistency of experimental results with regard to the baseline model in the local chemical environment. The actual performance of the Reliability Index (RI) was proven by its good (and uniform) correlations with Root Mean Square Error (RMSE) in all validation sets, thus providing quantitative assessment of the Model Applicability Domain. The obtained models can be used for compound screening in the early stages of drug development and prioritization for experimental in vitro testing or later in vivo animal acute toxicity studies.

  13. Rapid Quadrupole-Time-of-Flight Mass Spectrometry Method Quantifies Oxygen-Rich Lignin Compound in Complex Mixtures

    NASA Astrophysics Data System (ADS)

    Boes, Kelsey S.; Roberts, Michael S.; Vinueza, Nelson R.

    2018-03-01

    Complex mixture analysis is a costly and time-consuming task facing researchers with foci as varied as food science and fuel analysis. When faced with the task of quantifying oxygen-rich bio-oil molecules in a complex diesel mixture, we asked whether complex mixtures could be qualitatively and quantitatively analyzed on a single mass spectrometer with mid-range resolving power without the use of lengthy separations. To answer this question, we developed and evaluated a quantitation method that eliminated chromatography steps and expanded the use of quadrupole-time-of-flight mass spectrometry from primarily qualitative to quantitative as well. To account for mixture complexity, the method employed an ionization dopant, targeted tandem mass spectrometry, and an internal standard. This combination of three techniques achieved reliable quantitation of oxygen-rich eugenol in diesel from 300 to 2500 ng/mL with sufficient linearity (R2 = 0.97 ± 0.01) and excellent accuracy (percent error = 0% ± 5). To understand the limitations of the method, it was compared to quantitation attained on a triple quadrupole mass spectrometer, the gold standard for quantitation. The triple quadrupole quantified eugenol from 50 to 2500 ng/mL with stronger linearity (R2 = 0.996 ± 0.003) than the quadrupole-time-of-flight and comparable accuracy (percent error = 4% ± 5). This demonstrates that a quadrupole-time-of-flight can be used for not only qualitative analysis but also targeted quantitation of oxygen-rich lignin molecules in complex mixtures without extensive sample preparation. The rapid and cost-effective method presented here offers new possibilities for bio-oil research, including: (1) allowing for bio-oil studies that demand repetitive analysis as process parameters are changed and (2) making this research accessible to more laboratories.
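
    A minimal sketch of the internal-standard calibration idea underlying this kind of targeted quantitation: fit the analyte-to-internal-standard response ratio against known concentrations, then read an unknown off the fitted line. The concentrations, response ratios, and the unknown below are invented, not the reported data.

```python
import numpy as np

# hypothetical calibration: eugenol spiked into diesel at known levels (ng/mL),
# each measured as a tandem-MS peak area relative to the internal standard
conc = np.array([300, 600, 1000, 1500, 2000, 2500], dtype=float)
response_ratio = np.array([0.062, 0.121, 0.204, 0.309, 0.401, 0.512])

# least-squares calibration line: response_ratio = slope * conc + intercept
slope, intercept = np.polyfit(conc, response_ratio, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((response_ratio - pred) ** 2) / np.sum((response_ratio - response_ratio.mean()) ** 2)
print(f"slope = {slope:.3e}, intercept = {intercept:.3e}, R^2 = {r2:.4f}")

# quantify an unknown sample from its measured response ratio
unknown_ratio = 0.250
unknown_conc = (unknown_ratio - intercept) / slope
print(f"estimated concentration of unknown: {unknown_conc:.0f} ng/mL")
```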

  14. Rapid Quadrupole-Time-of-Flight Mass Spectrometry Method Quantifies Oxygen-Rich Lignin Compound in Complex Mixtures

    NASA Astrophysics Data System (ADS)

    Boes, Kelsey S.; Roberts, Michael S.; Vinueza, Nelson R.

    2017-12-01

    Complex mixture analysis is a costly and time-consuming task facing researchers with foci as varied as food science and fuel analysis. When faced with the task of quantifying oxygen-rich bio-oil molecules in a complex diesel mixture, we asked whether complex mixtures could be qualitatively and quantitatively analyzed on a single mass spectrometer with mid-range resolving power without the use of lengthy separations. To answer this question, we developed and evaluated a quantitation method that eliminated chromatography steps and expanded the use of quadrupole-time-of-flight mass spectrometry from primarily qualitative to quantitative as well. To account for mixture complexity, the method employed an ionization dopant, targeted tandem mass spectrometry, and an internal standard. This combination of three techniques achieved reliable quantitation of oxygen-rich eugenol in diesel from 300 to 2500 ng/mL with sufficient linearity (R2 = 0.97 ± 0.01) and excellent accuracy (percent error = 0% ± 5). To understand the limitations of the method, it was compared to quantitation attained on a triple quadrupole mass spectrometer, the gold standard for quantitation. The triple quadrupole quantified eugenol from 50 to 2500 ng/mL with stronger linearity (R2 = 0.996 ± 0.003) than the quadrupole-time-of-flight and comparable accuracy (percent error = 4% ± 5). This demonstrates that a quadrupole-time-of-flight can be used for not only qualitative analysis but also targeted quantitation of oxygen-rich lignin molecules in complex mixtures without extensive sample preparation. The rapid and cost-effective method presented here offers new possibilities for bio-oil research, including: (1) allowing for bio-oil studies that demand repetitive analysis as process parameters are changed and (2) making this research accessible to more laboratories.

  15. Rapid Quadrupole-Time-of-Flight Mass Spectrometry Method Quantifies Oxygen-Rich Lignin Compound in Complex Mixtures.

    PubMed

    Boes, Kelsey S; Roberts, Michael S; Vinueza, Nelson R

    2018-03-01

    Complex mixture analysis is a costly and time-consuming task facing researchers with foci as varied as food science and fuel analysis. When faced with the task of quantifying oxygen-rich bio-oil molecules in a complex diesel mixture, we asked whether complex mixtures could be qualitatively and quantitatively analyzed on a single mass spectrometer with mid-range resolving power without the use of lengthy separations. To answer this question, we developed and evaluated a quantitation method that eliminated chromatography steps and expanded the use of quadrupole-time-of-flight mass spectrometry from primarily qualitative to quantitative as well. To account for mixture complexity, the method employed an ionization dopant, targeted tandem mass spectrometry, and an internal standard. This combination of three techniques achieved reliable quantitation of oxygen-rich eugenol in diesel from 300 to 2500 ng/mL with sufficient linearity (R2 = 0.97 ± 0.01) and excellent accuracy (percent error = 0% ± 5). To understand the limitations of the method, it was compared to quantitation attained on a triple quadrupole mass spectrometer, the gold standard for quantitation. The triple quadrupole quantified eugenol from 50 to 2500 ng/mL with stronger linearity (R2 = 0.996 ± 0.003) than the quadrupole-time-of-flight and comparable accuracy (percent error = 4% ± 5). This demonstrates that a quadrupole-time-of-flight can be used for not only qualitative analysis but also targeted quantitation of oxygen-rich lignin molecules in complex mixtures without extensive sample preparation. The rapid and cost-effective method presented here offers new possibilities for bio-oil research, including: (1) allowing for bio-oil studies that demand repetitive analysis as process parameters are changed and (2) making this research accessible to more laboratories.

  16. The Impact of Quantitative Data Provided by a Multi-spectral Digital Skin Lesion Analysis Device on Dermatologists'Decisions to Biopsy Pigmented Lesions.

    PubMed

    Farberg, Aaron S; Winkelmann, Richard R; Tucker, Natalie; White, Richard; Rigel, Darrell S

    2017-09-01

    BACKGROUND: Early diagnosis of melanoma is critical to survival. New technologies, such as a multi-spectral digital skin lesion analysis (MSDSLA) device [MelaFind, STRATA Skin Sciences, Horsham, Pennsylvania], may be useful to enhance clinician evaluation of concerning pigmented skin lesions. Previous studies evaluated the effect of only the binary output. OBJECTIVE: The objective of this study was to determine how decisions dermatologists make regarding pigmented lesion biopsies are impacted by providing both the underlying classifier score (CS) and the associated probability risk provided by multi-spectral digital skin lesion analysis. This outcome was also compared against the improvement reported with the provision of only the binary output. METHODS: Dermatologists attending an educational conference evaluated 50 pigmented lesions (25 melanomas and 25 benign lesions). Participants were asked if they would biopsy the lesion based on clinical images, and were asked this question again after being shown multi-spectral digital skin lesion analysis data that included the probability graphs and classifier score. RESULTS: Data were analyzed from a total of 160 United States board-certified dermatologists. Biopsy sensitivity for melanoma improved from 76 percent following clinical evaluation to 92 percent after quantitative multi-spectral digital skin lesion analysis information was provided (p<0.0001). Specificity improved from 52 percent to 79 percent (p<0.0001). The positive predictive value increased from 61 percent to 81 percent (p<0.01) when the quantitative data were provided. Negative predictive value also increased (68% vs. 91%, p<0.01), and overall biopsy accuracy was greater with multi-spectral digital skin lesion analysis (64% vs. 86%, p<0.001). Interrater reliability improved (intraclass correlation 0.466 before, 0.559 after). CONCLUSION: Incorporating the classifier score and probability data into physician evaluation of pigmented lesions led to both increased sensitivity and specificity, thereby resulting in more accurate biopsy decisions.
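
    A minimal sketch showing how the diagnostic metrics quoted above follow from 2x2 counts of biopsy decisions on a known lesion set. The single-reader counts below are invented and chosen only to roughly approximate the reported post-MSDSLA percentages; they are not the study data, which aggregate 160 readers.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic accuracy measures."""
    sensitivity = tp / (tp + fn)        # melanomas correctly sent to biopsy
    specificity = tn / (tn + fp)        # benign lesions correctly spared
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, npv, accuracy

# illustrative counts for one reader judging 25 melanomas and 25 benign lesions
tp, fn = 23, 2                          # 23 of 25 melanomas biopsied
tn, fp = 20, 5                          # 20 of 25 benign lesions not biopsied
sens, spec, ppv, npv, acc = diagnostic_metrics(tp, fp, tn, fn)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} PPV={ppv:.0%} NPV={npv:.0%} accuracy={acc:.0%}")
```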

  17. Measurement and Evaluation of Quantitative Performance of PET/CT Images before a Multicenter Clinical Trial.

    PubMed

    Zhu, Yanjia; Geng, Caizheng; Huang, Jia; Liu, Juzhen; Wu, Ning; Xin, Jun; Xu, Hao; Yu, Lijuan; Geng, Jianhua

    2018-06-13

    To ensure the reliability of the planned multi-center clinical trial, we assessed the consistency and comparability of the quantitative parameters of the eight PET/CT units that will be used in this trial. PET/CT images were scanned using a PET NEMA image quality phantom (Biodex) on the eight units of Discovery PET/CT 690 from GE Healthcare. The scanning parameters were the same as the ones to be used in the planned trial. The 18F-NaF concentration in the background was 5.3 kBq/ml; the concentration in the spheres of diameter 37 mm, 22 mm, 17 mm and 10 mm was 8:1 relative to the background, and the spheres of diameter 28 mm and 13 mm contained 0 kBq/ml. The consistency of the hot sphere recovery coefficient (HRC), cold sphere recovery coefficient (CRC), hot sphere contrast (Q_H) and cold sphere contrast (Q_C) among these eight PET/CT systems was analyzed. The variation of the main quantitative parameters of the eight PET/CT systems was within 10%, which is acceptable for the clinical trial.
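
    The consistency criterion above amounts to a recovery coefficient per sphere and a between-scanner coefficient of variation compared against the 10% limit. The sketch below illustrates that calculation; the HRC values are invented for illustration, not the trial's measurements.

    ```python
    import numpy as np

    # Hedged sketch: hot-sphere recovery coefficient and between-scanner CV.
    def recovery_coefficient(measured_kbq_ml, true_kbq_ml):
        return measured_kbq_ml / true_kbq_ml

    # Hypothetical HRC values for the 37 mm sphere on eight scanners
    hrc = np.array([0.93, 0.95, 0.90, 0.97, 0.94, 0.92, 0.96, 0.91])
    cv_percent = 100 * hrc.std(ddof=1) / hrc.mean()
    print(f"between-scanner CV = {cv_percent:.1f}%")   # acceptable if < 10%
    ```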

  18. Quantitative analysis of fatty-acid-based biofuels produced by wild-type and genetically engineered cyanobacteria by gas chromatography-mass spectrometry.

    PubMed

    Guan, Wenna; Zhao, Hui; Lu, Xuefeng; Wang, Cong; Yang, Menglong; Bai, Fali

    2011-11-11

    Simple and rapid quantitative determination of fatty-acid-based biofuels is of great importance for the study of genetic engineering progress for biofuels production by microalgae. Ideal biofuels produced from biological systems should be chemically similar to petroleum, like fatty-acid-based molecules including free fatty acids, fatty acid methyl esters, fatty acid ethyl esters, fatty alcohols and fatty alkanes. This study established a gas chromatography-mass spectrometry (GC-MS) method for simultaneous quantification of seven free fatty acids, nine fatty acid methyl esters, five fatty acid ethyl esters, five fatty alcohols and three fatty alkanes produced by wild-type Synechocystis PCC 6803 and its genetically engineered strain. Data obtained from GC-MS analyses were quantified using internal standard peak area comparisons. The linearity, limit of detection (LOD) and precision (RSD) of the method were evaluated. The results demonstrated that fatty-acid-based biofuels can be directly determined by GC-MS without derivatization. Therefore, rapid and reliable quantitative analysis of fatty-acid-based biofuels produced by wild-type and genetically engineered cyanobacteria can be achieved using the GC-MS method established in this work. Copyright © 2011 Elsevier B.V. All rights reserved.
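
    Internal-standard peak-area quantitation of the kind described above typically normalizes the analyte peak area to the internal-standard area and reads the concentration off a linear calibration. A hedged sketch follows; the calibration points and sample ratio are illustrative, not the study's data.

    ```python
    import numpy as np

    # Hedged sketch: internal-standard calibration and read-back.
    cal_conc  = np.array([10, 25, 50, 100, 200.0])         # ug/mL standards
    cal_ratio = np.array([0.21, 0.52, 1.05, 2.08, 4.15])   # area(analyte)/area(IS)
    slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

    sample_ratio = 1.60                                    # measured in a sample
    sample_conc = (sample_ratio - intercept) / slope
    print(f"estimated concentration: {sample_conc:.1f} ug/mL")
    ```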

  19. Comparative evaluation of two Rickettsia typhi-specific quantitative real-time PCRs for research and diagnostic purposes.

    PubMed

    Papp, Stefanie; Rauch, Jessica; Kuehl, Svenja; Richardt, Ulricke; Keller, Christian; Osterloh, Anke

    2017-02-01

    Rickettsioses are caused by intracellular bacteria of the family Rickettsiaceae. Rickettsia (R.) typhi is the causative agent of endemic typhus. The disease occurs worldwide and is one of the most prevalent rickettsioses. Rickettsial diseases, however, are generally underdiagnosed, mainly owing to the lack of sensitive and specific methods. In addition, methods for quantitative detection of the bacteria for research purposes are rare. We established two qPCRs for the detection of R. typhi by amplification of the outer membrane protein B (ompB) and parvulin-type PPIase (prsA) genes. Both qPCRs are specific and exclusively recognize R. typhi but no other rickettsiae, including the closest relative, R. prowazekii. The prsA-based qPCR proved to be much more sensitive than the amplification of ompB and provided highly reproducible results in the detection of R. typhi in organs of infected mice. Furthermore, run as a nested PCR, the prsA qPCR was applicable for the detection of R. typhi in human blood samples. Collectively, the prsA-based qPCR represents a reliable method for the quantitative detection of R. typhi for research purposes and is a promising candidate for differential diagnosis.

  20. Use of a Deuterated Internal Standard with Pyrolysis-GC/MS Dimeric Marker Analysis to Quantify Tire Tread Particles in the Environment

    PubMed Central

    Unice, Kenneth M.; Kreider, Marisa L.; Panko, Julie M.

    2012-01-01

    Pyrolysis(pyr)-GC/MS analysis of characteristic thermal decomposition fragments has been previously used for qualitative fingerprinting of organic sources in environmental samples. A quantitative pyr-GC/MS method based on characteristic tire polymer pyrolysis products was developed for tread particle quantification in environmental matrices including soil, sediment, and air. The feasibility of quantitative pyr-GC/MS analysis of tread was confirmed in a method evaluation study using artificial soil spiked with known amounts of cryogenically generated tread. Tread concentration determined by blinded analyses was highly correlated (r2 ≥ 0.88) with the known tread spike concentration. Two critical refinements to the initial pyrolysis protocol were identified: the use of an internal standard, and quantification by the dimeric markers vinylcyclohexene and dipentene, which have good specificity for rubber polymer with no other appreciable environmental sources. A novel use of deuterated internal standards of similar polymeric structure was developed to correct for the variable analyte recovery caused by sample size, matrix effects, and ion source variability. The resultant quantitative pyr-GC/MS protocol is reliable and transferable between laboratories. PMID:23202830

  1. A New Green Method for the Quantitative Analysis of Enrofloxacin by Fourier-Transform Infrared Spectroscopy.

    PubMed

    Rebouças, Camila Tavares; Kogawa, Ana Carolina; Salgado, Hérida Regina Nunes

    2018-05-18

    Background: A green analytical chemistry method was developed for quantification of enrofloxacin in tablets. The drug, a second-generation fluoroquinolone, was first introduced in veterinary medicine for the treatment of various bacterial species. Objective: This study proposed to develop, validate, and apply a reliable, low-cost, fast, and simple IR spectroscopy method for quantitative routine determination of enrofloxacin in tablets. Methods: The method was completely validated according to the International Conference on Harmonisation guidelines, showing accuracy, precision, selectivity, robustness, and linearity. Results: It was linear over the concentration range of 1.0-3.0 mg with correlation coefficients >0.9999 and LOD and LOQ of 0.12 and 0.36 mg, respectively. Conclusions: Now that this IR method has met performance qualifications, it can be adopted and applied for the analysis of enrofloxacin tablets for production process control. The validated method can also be utilized to quantify enrofloxacin in tablets and thus is an environmentally friendly alternative for the routine analysis of enrofloxacin in quality control. Highlights: A new green method for the quantitative analysis of enrofloxacin by Fourier-Transform Infrared spectroscopy was validated. It is a fast, clean and low-cost alternative for the evaluation of enrofloxacin tablets.
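
    The LOD and LOQ figures above follow from the calibration line. One common ICH-style estimator uses LOD = 3.3·σ/S and LOQ = 10·σ/S, with σ the residual standard deviation and S the slope; the abstract does not state which estimator the authors applied, so the sketch below is illustrative only, including the data points.

    ```python
    import numpy as np

    # Hedged sketch: ICH-style LOD/LOQ from a linear calibration.
    conc = np.array([1.0, 1.5, 2.0, 2.5, 3.0])               # mg
    resp = np.array([0.101, 0.149, 0.202, 0.248, 0.301])     # IR response (a.u.)

    slope, intercept = np.polyfit(conc, resp, 1)
    residuals = resp - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)                            # two fitted parameters

    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    print(f"slope = {slope:.4f}, LOD = {lod:.3f} mg, LOQ = {loq:.3f} mg")
    ```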

  2. A systematic review of quantitative burn wound microbiology in the management of burns patients.

    PubMed

    Halstead, Fenella D; Lee, Kwang Chear; Kwei, Johnny; Dretzke, Janine; Oppenheim, Beryl A; Moiemen, Naiem S

    2018-02-01

    The early diagnosis of infection or sepsis in burns is important for patient care. Globally, a large number of burn centres advocate quantitative cultures of wound biopsies for patient management, since there is assumed to be a direct link between the bioburden of a burn wound and the risk of microbial invasion. Given the conflicting study findings in this area, a systematic review was warranted. Bibliographic databases were searched with no language restrictions to August 2015. Study selection, data extraction and risk of bias assessment were performed in duplicate using pre-defined criteria. Substantial heterogeneity precluded quantitative synthesis, and findings were described narratively, sub-grouped by clinical question. Twenty-six laboratory and/or clinical studies were included. Substantial heterogeneity hampered comparisons across studies and interpretation of findings. Limited evidence suggests that (i) more than one quantitative microbiology sample is required to obtain reliable estimates of bacterial load; (ii) biopsies are more sensitive than swabs in diagnosing or predicting sepsis; (iii) high bacterial loads may predict worse clinical outcomes, and (iv) both quantitative and semi-quantitative culture reports need to be interpreted with caution and in the context of other clinical risk factors. The evidence base for the utility and reliability of quantitative microbiology for diagnosing or predicting clinical outcomes in burns patients is limited and often poorly reported. Consequently, future research is warranted. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  3. Observer variation in the assessment of root canal curvature.

    PubMed

    Faraj, S; Boutsioukis, C

    2017-02-01

    To evaluate the inter- and intra-observer agreement between training/trained endodontists regarding the ex vivo classification of root canal curvature into three categories and its measurement using three quantitative methods. Periapical radiographs of seven extracted human posterior teeth with varying degrees of curvature were exposed ex vivo. Twenty training/trained endodontists were asked to classify the root canal curvature into three categories (<10°, 10-30°, >30°), to measure the curvature using three quantitative methods (Schneider, Weine, Pruett) and to draw angles of 10° or 30°, as a control experiment. The procedure was repeated after six weeks. Inter- and intra-observer agreement was evaluated by the intraclass correlation coefficient and weighted kappa. The inter-observer agreement on the visual classification of root canal curvature was substantial (ICC = 0.65, P < 0.018), but a trend towards underestimation of the angle was evident. Participants modified their classifications both within and between the two sessions. Median angles drawn as a control experiment were not significantly different from the target values (P > 0.10), but the results of individual participants varied. When quantitative methods were used, the inter- and intra-observer agreement on the angle measurements was considerably better (ICC = 0.76-0.82, P < 0.001) than on the radius measurements (ICC = 0.16-0.19, P > 0.895). Visual estimation of root canal curvature was not reliable. The use of computer-based quantitative methods is recommended. The measurement of radius of curvature was more subjective than angle measurement. Endodontic Associations need to provide specific guidelines on how to estimate root canal curvature in case difficulty assessment forms. © 2015 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  4. TEWI Evaluation for Refrigeration and Air-Conditioning Systems in Office Buildings with Different Regional Heat Demand

    NASA Astrophysics Data System (ADS)

    Sobue, Atsushi; Watanabe, Koichi

    In the present study, we quantitatively evaluated the global warming impact of refrigeration and air-conditioning systems in office buildings on the basis of reliable TEWI information. This paper proposes an improved TEWI evaluation procedure that considers regional heat demands and the part load of air-conditioning systems. In the TEWI evaluation of commercial chillers, the impact of refrigerant released to the atmosphere (the direct effect) accounts for less than 19.9% of the TEWI value. Therefore, reducing the impact of CO2 released as a result of the energy consumed to drive the refrigeration or air-conditioning systems throughout their lifetime (the indirect effect) is the most effective measure for reducing the global warming impact. On the other hand, we have also pointed out the energy loss that can be generated by excessive investment in equipment. We have also shown the usefulness of dividing the heating/cooling system into several small-capacity units so as to improve the energy utilization efficiency.
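
    For orientation, the split into direct and indirect effects can be illustrated with one common TEWI formulation (refrigerant leakage and end-of-life loss weighted by GWP, plus lifetime electricity use weighted by a grid emission factor). This is a hedged sketch, not the paper's regional, part-load-adjusted procedure, and all input values are invented.

    ```python
    # Hedged sketch of a common TEWI formulation (direct + indirect effect).
    def tewi(gwp, leak_kg_per_yr, years, charge_kg, recovery_factor,
             annual_energy_kwh, co2_per_kwh):
        direct = gwp * (leak_kg_per_yr * years + charge_kg * (1 - recovery_factor))
        indirect = annual_energy_kwh * years * co2_per_kwh
        return direct, indirect

    direct, indirect = tewi(gwp=1430, leak_kg_per_yr=0.5, years=15, charge_kg=10,
                            recovery_factor=0.7, annual_energy_kwh=50_000,
                            co2_per_kwh=0.5)
    total = direct + indirect
    print(f"direct share = {100 * direct / total:.1f}% of TEWI")
    ```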

  5. Comparison of 13C Nuclear Magnetic Resonance and Fourier Transform Infrared spectroscopy for estimating humification and aromatization of soil organic matter

    NASA Astrophysics Data System (ADS)

    Rogers, K.; Cooper, W. T.; Hodgkins, S. B.; Verbeke, B. A.; Chanton, J.

    2017-12-01

    Solid state direct polarization 13C NMR spectroscopy (DP-NMR) is generally considered the most quantitatively reliable method for soil organic matter (SOM) characterization, including determination of the relative abundances of carbon functional groups. These functional abundances can then be used to calculate important soil parameters such as degree of humification and extent of aromaticity that reveal differences in reactivity or compositional changes along gradients (e.g. thaw chronosequence in permafrost). Unfortunately, the DP-NMR experiment is time-consuming, with a single sample often requiring over 24 hours of instrument time. Alternatively, solid state cross polarization 13C NMR (CP-NMR) can circumvent this problem, reducing analysis times to 4-6 hours but with some loss of quantitative reliability. Attenuated Total Reflectance Fourier Transform Infrared spectroscopy (ATR-FTIR) is a quick and relatively inexpensive method for characterizing solid materials, and has been suggested as an alternative to NMR for analysis of soil organic matter and determination of humification (HI) and aromatization (AI) indices. However, the quantitative reliability of ATR-FTIR for SOM analyses has never been verified, nor have any ATR-FTIR data been compared to similar measurements by NMR. In this work we focused on FTIR vibrational bands that correspond to the three functional groups used to calculate HI and AI values: carbohydrates (1030 cm-1), aromatics (1510, 1630 cm-1), and aliphatics (2850, 2920 cm-1). Data from ATR-FTIR measurements were compared to analogous quantitation by DP- and CP-NMR using peat samples from Sweden, Minnesota, and North Carolina. DP- and CP-NMR correlate very strongly, although the correlations are not always 1:1. Direct comparison of relative abundances of the three functional groups determined by NMR and ATR-FTIR yielded satisfactory results for carbohydrates (r2 = 0.78) and aliphatics (r2 = 0.58), but less so for aromatics (r2 = 0.395). ATR-FTIR has to this point been used primarily for relative abundance analyses (e.g. calculating HI and AI values), but these results suggest FTIR can provide quantitative reliability that approaches that of NMR.
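
    HI and AI are commonly reported as simple ratios of the band intensities listed above (e.g. aromatic relative to carbohydrate signal for humification). The exact definitions used in this study are not given in the abstract, so the sketch below is only one plausible convention, with invented absorbances.

    ```python
    # Hedged sketch: band-ratio humification (HI) and aromatization (AI) indices.
    # Definitions and intensities are illustrative assumptions, not the study's.
    bands = {                       # baseline-corrected ATR-FTIR absorbances (a.u.)
        "carbohydrate_1030": 0.42,
        "aromatic_1630": 0.18,
        "aliphatic_2920": 0.11,
    }

    hi = bands["aromatic_1630"] / bands["carbohydrate_1030"]
    ai = bands["aromatic_1630"] / (bands["aromatic_1630"] + bands["aliphatic_2920"])
    print(f"HI = {hi:.2f}, AI = {ai:.2f}")
    ```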

  6. Facet Theory and the Mapping Sentence As Hermeneutically Consistent Structured Meta-Ontology and Structured Meta-Mereology

    PubMed Central

    Hackett, Paul M. W.

    2016-01-01

    When behavior is interpreted in a reliable manner (i.e., robustly across different situations and times) its explained meaning may be seen to possess hermeneutic consistency. In this essay I present an evaluation of the hermeneutic consistency that I propose may be present when the research tool known as the mapping sentence is used to create generic structural ontologies. I also claim that theoretical and empirical validity is a likely result of employing the mapping sentence in research design and interpretation. These claims are non-contentious within the realm of quantitative psychological and behavioral research. However, I extend the scope of both facet theory based research and claims for its structural utility, reliability and validity to philosophical and qualitative investigations. I assert that the hermeneutic consistency of a structural ontology is a product of a structural representation's ontological components and the mereological relationships between these ontological sub-units: the mapping sentence seminally allows for the depiction of such structure. PMID:27065932

  7. Quality of Computationally Inferred Gene Ontology Annotations

    PubMed Central

    Škunca, Nives; Altenhoff, Adrian; Dessimoz, Christophe

    2012-01-01

    Gene Ontology (GO) has established itself as the undisputed standard for protein function annotation. Most annotations are inferred electronically, i.e. without individual curator supervision, but they are widely considered unreliable. At the same time, we crucially depend on those automated annotations, as most newly sequenced genomes are non-model organisms. Here, we introduce a methodology to systematically and quantitatively evaluate electronic annotations. By exploiting changes in successive releases of the UniProt Gene Ontology Annotation database, we assessed the quality of electronic annotations in terms of specificity, reliability, and coverage. Overall, we not only found that electronic annotations have significantly improved in recent years, but also that their reliability now rivals that of annotations inferred by curators when they use evidence other than experiments from primary literature. This work provides the means to identify the subset of electronic annotations that can be relied upon—an important outcome given that >98% of all annotations are inferred without direct curation. PMID:22693439

  8. Three-Dimensional Photography for Quantitative Assessment of Penile Volume-Loss Deformities in Peyronie's Disease.

    PubMed

    Margolin, Ezra J; Mlynarczyk, Carrie M; Mulhall, John P; Stember, Doron S; Stahl, Peter J

    2017-06-01

    Non-curvature penile deformities are prevalent and bothersome manifestations of Peyronie's disease (PD), but the quantitative metrics that are currently used to describe these deformities are inadequate and non-standardized, presenting a barrier to clinical research and patient care. To introduce erect penile volume (EPV) and percentage of erect penile volume loss (percent EPVL) as novel metrics that provide detailed quantitative information about non-curvature penile deformities and to study the feasibility and reliability of three-dimensional (3D) photography for measurement of quantitative penile parameters. We constructed seven penis models simulating deformities found in PD. The 3D photographs of each model were captured in triplicate by four observers using a 3D camera. Computer software was used to generate automated measurements of EPV, percent EPVL, penile length, minimum circumference, maximum circumference, and angle of curvature. The automated measurements were statistically compared with measurements obtained using water-displacement experiments, a tape measure, and a goniometer. Accuracy of 3D photography for average measurements of all parameters compared with manual measurements; inter-test, intra-observer, and inter-observer reliabilities of EPV and percent EPVL measurements as assessed by the intraclass correlation coefficient. The 3D images were captured in a median of 52 seconds (interquartile range = 45-61). On average, 3D photography was accurate to within 0.3% for measurement of penile length. It overestimated maximum and minimum circumferences by averages of 4.2% and 1.6%, respectively; overestimated EPV by an average of 7.1%; and underestimated percent EPVL by an average of 1.9%. All inter-test, inter-observer, and intra-observer intraclass correlation coefficients for EPV and percent EPVL measurements were greater than 0.75, reflective of excellent methodologic reliability. By providing highly descriptive and reliable measurements of penile parameters, 3D photography can empower researchers to better study volume-loss deformities in PD and enable clinicians to offer improved clinical assessment, communication, and documentation. This is the first study to apply 3D photography to the assessment of PD and to accurately measure the novel parameters of EPV and percent EPVL. This proof-of-concept study is limited by the lack of data in human subjects, which could present additional challenges in obtaining reliable measurements. EPV and percent EPVL are novel metrics that can be quickly, accurately, and reliably measured using computational analysis of 3D photographs and can be useful in describing non-curvature volume-loss deformities resulting from PD. Margolin EJ, Mlynarczyk CM, Mulhall JP, et al. Three-Dimensional Photography for Quantitative Assessment of Penile Volume-Loss Deformities in Peyronie's Disease. J Sex Med 2017;14:829-833. Copyright © 2017 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.

  9. Early Validity and Reliability Data for Two Instruments Assessing the Predispositions People Have toward Technology Use: Continued Integration of Quantitative and Qualitative Methods.

    ERIC Educational Resources Information Center

    Scherer, Marcia J.; McKee, Barbara G.

    Validity and reliability data are presented for two instruments for assessing the predispositions that people have toward the use of assistive and educational technologies. The two instruments, the Assistive Technology Device Predisposition Assessment (ATDPA) and the Educational Technology Predisposition Assessment (ETPA), are self-report…

  10. The Effect of Different Cultural Lenses on Reliability and Validity in Observational Data: The Example of Chinese Immigrant Parent-Toddler Dinner Interactions

    ERIC Educational Resources Information Center

    Wang, Yan Z.; Wiley, Angela R.; Zhou, Xiaobin

    2007-01-01

    This study used a mixed methodology to investigate reliability, validity, and analysis level with Chinese immigrant observational data. European-American and Chinese coders quantitatively rated 755 minutes of Chinese immigrant parent-toddler dinner interactions on parental sensitivity, intrusiveness, detachment, negative affect, positive affect,…

  11. Clinical evaluation of the COBAS Ampliprep/COBAS TaqMan for HCV RNA quantitation in comparison with the branched-DNA assay.

    PubMed

    Pittaluga, Fabrizia; Allice, Tiziano; Abate, Maria Lorena; Ciancio, Alessia; Cerutti, Francesco; Varetto, Silvia; Colucci, Giuseppe; Smedile, Antonina; Ghisetti, Valeria

    2008-02-01

    Diagnosis and monitoring of HCV infection relies on sensitive and accurate HCV RNA detection and quantitation. The performance of the COBAS AmpliPrep/COBAS TaqMan 48 (CAP/CTM) (Roche, Branchburg, NJ), a fully automated, real-time PCR HCV RNA quantitative test was assessed and compared with the branched-DNA (bDNA) assay. Clinical evaluation on 576 specimens obtained from patients with chronic hepatitis C showed a good correlation (r = 0.893) between the two tests, but the CAP/CTM scored higher HCV RNA titers than the bDNA across all viral genotypes. The mean bDNA versus CAP/CTM log10 IU/ml differences were -0.49, -0.4, -0.54, -0.26 for genotypes 1a, 1b, 2a/2c, 3a, and 4, respectively. These differences reached statistical significance for genotypes 1b, 2a/c, and 3a. The ability of the CAP/CTM to monitor patients undergoing antiviral therapy and correctly identify the weeks 4 and 12 rapid and early virological responses was confirmed. The broader dynamic range of the CAP/CTM compared with the bDNA allowed for a better definition of viral kinetics. In conclusion, the CAP/CTM appears as a reliable and user-friendly assay to monitor HCV viremia during treatment of patients with chronic hepatitis. Its high sensitivity and wide dynamic range may help a better definition of viral load changes during antiviral therapy. Copyright © 2007 Wiley-Liss, Inc.

  12. Correlation of radiologists' image quality perception with quantitative assessment parameters: just-noticeable difference vs. peak signal-to-noise ratios

    NASA Astrophysics Data System (ADS)

    Siddiqui, Khan M.; Siegel, Eliot L.; Reiner, Bruce I.; Johnson, Jeffrey P.

    2005-04-01

    The authors identify a fundamental disconnect between the ways in which industry and radiologists assess and even discuss product performance. What is needed is a quantitative methodology that can assess both subjective image quality and observer task performance. In this study, we propose and evaluate the use of a visual discrimination model (VDM) that assesses just-noticeable differences (JNDs) to serve this purpose. The study compares radiologists' subjective perceptions of image quality of computed tomography (CT) and computed radiography (CR) images with quantitative measures of peak signal-to-noise ratio (PSNR) and JNDs as measured by a VDM. The study included 4 CT and 6 CR studies with compression ratios ranging from lossless to 90:1 (total of 80 sets of images were generated [n = 1,200]). Eleven radiologists reviewed the images and rated them in terms of overall quality and readability and identified images not acceptable for interpretation. Normalized reader scores were correlated with compression, objective PSNR, and mean JND values. Results indicated a significantly higher correlation between observer performance and JND values than with PSNR methods. These results support the use of the VDM as a metric not only for the threshold discriminations for which it was calibrated, but also as a general image quality metric. This VDM is a highly promising, reproducible, and reliable adjunct or even alternative to human observer studies for research or to establish clinical guidelines for image compression, dose reductions, and evaluation of various display technologies.
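
    PSNR, the comparison metric above, is a simple closed-form quantity: 10·log10(MAX²/MSE) between the original and compressed image. The sketch below shows that calculation only (not the VDM/JND model, which requires a full visual-discrimination implementation); the images are random arrays standing in for real CT/CR data, and the 12-bit maximum is an assumption.

    ```python
    import numpy as np

    # Hedged sketch: PSNR between an original and a degraded image.
    def psnr(original, degraded, max_val=4095.0):      # 12-bit depth assumed
        mse = np.mean((original.astype(float) - degraded.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

    rng = np.random.default_rng(0)
    img = rng.integers(0, 4096, size=(256, 256))
    noisy = np.clip(img + rng.normal(0, 20, img.shape), 0, 4095)
    print(f"PSNR = {psnr(img, noisy):.1f} dB")
    ```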

  13. Validating internal controls for quantitative plant gene expression studies.

    PubMed

    Brunner, Amy M; Yakovlev, Igor A; Strauss, Steven H

    2004-08-18

    Real-time reverse transcription PCR (RT-PCR) has greatly improved the ease and sensitivity of quantitative gene expression studies. However, accurate measurement of gene expression with this method relies on the choice of a valid reference for data normalization. Studies rarely verify that gene expression levels for reference genes are adequately consistent among the samples used, or compare alternative genes to assess which are most reliable for the experimental conditions analyzed. Using real-time RT-PCR to study the expression of 10 poplar (genus Populus) housekeeping genes, we demonstrate a simple method for determining the degree of stability of gene expression over a set of experimental conditions. Based on a traditional method for analyzing the stability of varieties in plant breeding, it defines measures of gene expression stability from analysis of variance (ANOVA) and linear regression. We found that the potential internal control genes differed widely in their expression stability over the different tissues, developmental stages and environmental conditions studied. Our results indicate that quantitative comparisons of candidate reference genes are an important part of real-time RT-PCR studies that seek to precisely evaluate variation in gene expression. The method we demonstrated facilitates statistical and graphical evaluation of gene expression stability. Selection of the best reference gene for a given set of experimental conditions should enable detection of biologically significant changes in gene expression that are too small to be revealed by less precise methods, or when highly variable reference genes are unknowingly used in real-time RT-PCR experiments.

  14. A novel computer system for the evaluation of nasolabial morphology, symmetry and aesthetics after cleft lip and palate treatment. Part 1: General concept and validation.

    PubMed

    Pietruski, Piotr; Majak, Marcin; Debski, Tomasz; Antoszewski, Boguslaw

    2017-04-01

    The need for a widely accepted method suitable for a multicentre quantitative evaluation of facial aesthetics after surgical treatment of cleft lip and palate (CLP) has been emphasized for years. The aim of this study was to validate a novel computer system 'Analyse It Doc' (A.I.D.) as a tool for objective anthropometric analysis of the nasolabial region. An indirect anthropometric analysis of facial photographs was conducted with the A.I.D. system and Adobe Photoshop/ImageJ software. Intra-rater and inter-rater reliability and the time required for the analysis were estimated separately for each method and compared. Analysis with the A.I.D. system was nearly 10-fold faster than that with the reference evaluation method. The A.I.D. system provided strong inter-rater and intra-rater correlations for linear, angular and area measurements of the nasolabial region, as well as a significantly higher accuracy and reproducibility of angular measurements in submental view. No statistically significant inter-method differences were found for other measurements. The novel computer system presented here is suitable for simple, time-efficient and reliable multicenter photogrammetric analyses of the nasolabial region in CLP patients and healthy subjects. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  15. Low-Frequency Fluctuations of the Resting Brain: High Magnitude Does Not Equal High Reliability

    PubMed Central

    Jia, Wenbin; Liao, Wei; Li, Xun; Huang, Huiyuan; Yuan, Jianhua; Zang, Yu-Feng; Zhang, Han

    2015-01-01

    The amplitude of low-frequency fluctuation (ALFF) measures low-frequency oscillations of the blood-oxygen-level-dependent signal, characterizing local spontaneous activity during the resting state. ALFF is a commonly used measure for resting-state functional magnetic resonance imaging (rs-fMRI) in numerous basic and clinical neuroscience studies. Using a test-retest rs-fMRI dataset consisting of 21 healthy subjects and three repetitive scans, we found that several key brain regions with high ALFF intensities (or magnitude) had poor reliability. Such regions included the posterior cingulate cortex, the medial prefrontal cortex in the default mode network, parts of the right and left thalami, and the primary visual and motor cortices. The above finding was robust with regard to different sample sizes (number of subjects), different scanning parameters (repetition time) and variations of test-retest intervals (i.e., intra-scan, intra-session, and inter-session reliability), as well as with different scanners. Moreover, the qualitative, map-wise results were validated further with a region-of-interest-based quantitative analysis using “canonical” coordinates as reported previously. Therefore, we suggest that the reliability assessments be incorporated in future ALFF studies, especially for the brain regions with a large ALFF magnitude as listed in our paper. Splitting a single dataset into several segments and assessing within-scan “test-retest” reliability is an acceptable alternative if no “real” test-retest datasets are available. Such evaluations may become even more necessary when data are collected on clinical scanners, whose performance and maintenance are typically not on par with research-dedicated systems, because the lower signal-to-noise ratio may further dampen ALFF reliability. PMID:26053265

  16. Quantification of bone marrow fat content using iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL): reproducibility, site variation and correlation with age and menopause.

    PubMed

    Aoki, Takatoshi; Yamaguchi, Shinpei; Kinoshita, Shunsuke; Hayashida, Yoshiko; Korogi, Yukunori

    2016-09-01

    To determine the reproducibility of the quantitative chemical shift-based water-fat separation method with a multiecho gradient echo sequence [iterative decomposition of water and fat with echo asymmetry and least-squares estimation quantitation sequence (IDEAL-IQ)] for assessing bone marrow fat fraction (FF); to evaluate variation of FF at different bone sites; and to investigate its association with age and menopause. 31 consecutive females who underwent pelvic iterative decomposition of water and fat with echo asymmetry and least-squares estimation at 3-T MRI were included in this study. Quantitative FF using IDEAL-IQ at four bone sites was analyzed. The coefficient of variation (CV) at each site was evaluated from 10 repeated measurements to assess the reproducibility. Correlations between FF and age were evaluated at each site, and the FFs of the pre- and post-menopausal groups were compared. The CV in the quantification of marrow FF ranged from 0.69% to 1.70%. A statistically significant correlation was established between FF and age in the lumbar vertebral body, ilium and intertrochanteric region of the femur (p < 0.001). The average FF of post-menopausal females was significantly higher than that of pre-menopausal females in these sites (p < 0.05). In the greater trochanter of the femur, there was no significant correlation between FF and age. In vivo IDEAL-IQ would provide reliable quantification of bone marrow fat. IDEAL-IQ is simple to perform in a short time and may be practical for providing information on bone quality in clinical settings.
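
    A fat fraction of this kind is conventionally the fat signal divided by the total (fat + water) signal, and the reproducibility figure is the coefficient of variation of repeated measurements. The sketch below illustrates both; the signal values are synthetic, not IDEAL-IQ output.

    ```python
    import numpy as np

    # Hedged sketch: fat fraction and coefficient of variation of repeats.
    def fat_fraction(fat_signal, water_signal):
        return 100.0 * fat_signal / (fat_signal + water_signal)

    rng = np.random.default_rng(1)
    repeat_ff = np.array([fat_fraction(62.0 + d, 38.0)
                          for d in rng.normal(0, 0.8, 10)])   # 10 repeats
    cv = 100 * repeat_ff.std(ddof=1) / repeat_ff.mean()
    print(f"mean FF = {repeat_ff.mean():.1f}%, CV = {cv:.2f}%")
    ```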

  17. Evaluation of a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography of scaphoid fixation screws.

    PubMed

    Filli, Lukas; Marcon, Magda; Scholz, Bernhard; Calcagni, Maurizio; Finkenstädt, Tim; Andreisek, Gustav; Guggenberger, Roman

    2014-12-01

    The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. Flat detector computed tomography (FDCT) is a helpful imaging tool for scaphoid fixation. The correction algorithm significantly reduces artefacts in FDCT induced by scaphoid fixation screws. This may facilitate intra- and postoperative follow-up imaging.

  18. ANTONIA perfusion and stroke. A software tool for the multi-purpose analysis of MR perfusion-weighted datasets and quantitative ischemic stroke assessment.

    PubMed

    Forkert, N D; Cheng, B; Kemmling, A; Thomalla, G; Fiehler, J

    2014-01-01

    The objective of this work is to present the software tool ANTONIA, which has been developed to facilitate a quantitative analysis of perfusion-weighted MRI (PWI) datasets in general as well as the subsequent multi-parametric analysis of additional datasets for the specific purpose of acute ischemic stroke patient dataset evaluation. Three different methods for the analysis of DSC or DCE PWI datasets are currently implemented in ANTONIA, which can be case-specifically selected based on the study protocol. These methods comprise a curve fitting method as well as a deconvolution-based and deconvolution-free method integrating a previously defined arterial input function. The perfusion analysis is extended for the purpose of acute ischemic stroke analysis by additional methods that enable an automatic atlas-based selection of the arterial input function, an analysis of the perfusion-diffusion and DWI-FLAIR mismatch as well as segmentation-based volumetric analyses. For reliability evaluation, the described software tool was used by two observers for quantitative analysis of 15 datasets from acute ischemic stroke patients to extract the acute lesion core volume, FLAIR ratio, perfusion-diffusion mismatch volume with manually as well as automatically selected arterial input functions, and follow-up lesion volume. The results of this evaluation revealed that the described software tool leads to highly reproducible results for all parameters if the automatic arterial input function selection method is used. Due to the broad selection of processing methods that are available in the software tool, ANTONIA is especially helpful to support image-based perfusion and acute ischemic stroke research projects.

  19. Reliability studies of diagnostic methods in Indian traditional Ayurveda medicine: An overview

    PubMed Central

    Kurande, Vrinda Hitendra; Waagepetersen, Rasmus; Toft, Egon; Prasad, Ramjee

    2013-01-01

    Recently, a need to develop supportive new scientific evidence for contemporary Ayurveda has emerged. One of the research objectives is an assessment of the reliability of diagnoses and treatment. Reliability is a quantitative measure of consistency. It is a crucial issue in classification (such as prakriti classification), method development (pulse diagnosis), quality assurance for diagnosis and treatment, and the conduct of clinical studies. Several reliability studies have been conducted in Western medicine. The investigation of the reliability of traditional Chinese, Japanese and Sasang medicine diagnoses is in the formative stage. However, reliability studies in Ayurveda are in the preliminary stage. In this paper, examples are provided to illustrate relevant concepts of reliability studies of diagnostic methods and their implications for practice, education, and training. An introduction to reliability estimates and different study designs and statistical analyses is given for future studies in Ayurveda. PMID:23930037

  20. Forces on intraocular lens haptics induced by capsular fibrosis. An experimental study.

    PubMed

    Guthoff, R; Abramo, F; Draeger, J; Chumbley, L C; Lang, G K; Neumann, W

    1990-01-01

    Electronic dynamometry measurements, performed upon intraocular lens (IOL) haptics of prototype one-piece three-loop silicone lenses, accurately defined the relationships between elastic force and haptic displacement. Lens implantations in the capsular bag of dogs (loop span equal to capsular bag diameter, loops undeformed immediately after the operation) were evaluated macrophotographically 5-8 months postoperatively. The highly constant elastic property of silicone rubber permitted quantitative correlation of subsequent in vivo haptic displacement with the resultant force vectors responsible for tissue contraction. The lens optics were well centered in 17 (85%) and slightly off-center in 3 (15%) of 20 implanted eyes. Of the 60 supporting loops, 28 could be visualized sufficiently well to permit reliable haptic measurement. Of these 28, 20 (71%) were clearly displaced, ranging from 0.45 mm away from to 1.4 mm towards the lens' optic center. These extremes represented resultant vector forces of 0.20 and 1.23 mN respectively. Quantitative vector analysis permits better understanding of IOL-capsular interactions.

  1. Ventana immunohistochemistry ALK (D5F3) detection of ALK expression in pleural effusion samples of lung adenocarcinoma.

    PubMed

    Wang, Zheng; Wu, Xiaonan; Shi, Yuankai; Han, Xiaohong; Cheng, Gang; Cui, Di; Li, Lin; Zhang, Yuhui; Mu, Xinlin; Zhang, Li; Yang, Li; Di, Jing; Yu, Qi; Liu, Dongge

    2015-08-01

    To evaluate the Ventana IHC ALK (D5F3) assay for detecting anaplastic lymphoma kinase (ALK) protein expression in pleural effusion samples. Historical, selected (wild-type EGFR, K-RAS) pleural effusion cytologic blocks of lung adenocarcinoma samples (Study 1) and unselected lung adenocarcinoma pleural effusion cytologic blocks (Study 2) were tested by the Ventana IHC ALK (D5F3) assay. Quantitative real-time PCR was used to verify the immunohistochemistry results. A total of 17 out of 100 (Study 1) and 10 out of 104 (Study 2) pleural effusion samples were ALK expression positive by the Ventana IHC ALK (D5F3) assay. The ALK fusion results with immunohistochemistry and quantitative real-time PCR had a concordance rate of 87.5% (κ = 0.886; p < 0.001). The Ventana IHC ALK (D5F3) assay is a reliable tool for detecting ALK protein expression in pleural effusion samples.
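
    The concordance rate and Cohen's kappa quoted above come from a 2x2 agreement table between the IHC and qPCR calls. The sketch below shows that calculation with hypothetical counts (the study reports only the summary statistics, not the table).

    ```python
    import numpy as np

    # Hedged sketch: observed agreement and Cohen's kappa from a 2x2 table.
    table = np.array([[9, 1],    # rows: IHC +/-
                      [2, 12]])  # cols: qPCR +/-   (hypothetical counts)

    n = table.sum()
    observed = np.trace(table) / n
    expected = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2
    kappa = (observed - expected) / (1 - expected)
    print(f"concordance = {observed:.3f}, kappa = {kappa:.3f}")
    ```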

  2. Biomarkers are used to predict quantitative metabolite concentration profiles in human red blood cells

    DOE PAGES

    Yurkovich, James T.; Yang, Laurence; Palsson, Bernhard O.; ...

    2017-03-06

    Deep-coverage metabolomic profiling has revealed a well-defined development of metabolic decay in human red blood cells (RBCs) under cold storage conditions. A set of extracellular biomarkers has been recently identified that reliably defines the qualitative state of the metabolic network throughout this metabolic decay process. Here, we extend the utility of these biomarkers by using them to quantitatively predict the concentrations of other metabolites in the red blood cell. We are able to accurately predict the concentration profile of 84 of the 91 (92%) measured metabolites (p < 0.05) in RBC metabolism using only measurements of these five biomarkers. The median of prediction errors (symmetric mean absolute percent error) across all metabolites was 13%. Furthermore, the ability to predict numerous metabolite concentrations from a simple set of biomarkers offers the potential for the development of a powerful workflow that could be used to evaluate the metabolic state of a biological system using a minimal set of measurements.
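
    The reported error summary is a symmetric mean absolute percent error taken per metabolite, with the median reported across metabolites. One common formulation is sketched below; the predicted/measured values are illustrative, not the study's data.

    ```python
    import numpy as np

    # Hedged sketch: per-metabolite symmetric absolute percent error and its median.
    def sym_ape(predicted, measured):
        p, m = np.asarray(predicted, float), np.asarray(measured, float)
        return 100 * np.abs(p - m) / ((np.abs(p) + np.abs(m)) / 2)

    pred = [1.10, 0.42, 3.8, 0.051]   # mM, hypothetical predictions
    meas = [1.00, 0.50, 3.5, 0.060]   # mM, hypothetical measurements
    errors = sym_ape(pred, meas)
    print(f"median sMAPE = {np.median(errors):.1f}%")
    ```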

  3. Biomarkers are used to predict quantitative metabolite concentration profiles in human red blood cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yurkovich, James T.; Yang, Laurence; Palsson, Bernhard O.

    Deep-coverage metabolomic profiling has revealed a well-defined development of metabolic decay in human red blood cells (RBCs) under cold storage conditions. A set of extracellular biomarkers has been recently identified that reliably defines the qualitative state of the metabolic network throughout this metabolic decay process. Here, we extend the utility of these biomarkers by using them to quantitatively predict the concentrations of other metabolites in the red blood cell. We are able to accurately predict the concentration profile of 84 of the 91 (92%) measured metabolites (p < 0.05) in RBC metabolism using only measurements of these five biomarkers. The median of prediction errors (symmetric mean absolute percent error) across all metabolites was 13%. Furthermore, the ability to predict numerous metabolite concentrations from a simple set of biomarkers offers the potential for the development of a powerful workflow that could be used to evaluate the metabolic state of a biological system using a minimal set of measurements.

  4. Using a Smart Phone as a Standalone Platform for Detection and Monitoring of Pathological Tremors

    PubMed Central

    Daneault, Jean-François; Carignan, Benoit; Codère, Carl Éric; Sadikot, Abbas F.; Duval, Christian

    2013-01-01

    Introduction: Smart phones are becoming ubiquitous and their computing capabilities are ever increasing. Consequently, more attention is geared toward their potential use in research and medical settings. For instance, their built-in hardware can provide quantitative data for different movements. Therefore, the goal of the current study was to evaluate the capabilities of a standalone smart phone platform to characterize tremor. Methods: A smart phone application for tremor quantification and online analysis was developed. Then, smart phone results were compared to those obtained simultaneously with a laboratory accelerometer. Finally, results from the smart phone were compared to clinical tremor assessments. Results: Algorithms for tremor recording and online analysis can be implemented within a smart phone. The smart phone provides reliable time- and frequency-domain tremor characteristics. The smart phone can also provide medically relevant tremor assessments. Discussion: Smart phones have the potential to provide researchers and clinicians with quantitative short- and long-term tremor assessments that are currently not easily available. PMID:23346053
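
    One typical frequency-domain tremor characteristic is the dominant frequency and power of the accelerometer signal, estimated from a Welch periodogram. The sketch below illustrates that step on a synthetic 5 Hz tremor signal; it is not the app's actual algorithm, and the sampling rate and frequency band are assumptions.

    ```python
    import numpy as np
    from scipy.signal import welch

    # Hedged sketch: dominant tremor frequency from a synthetic accelerometer trace.
    fs = 100.0                                   # sampling rate, Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(0)
    accel = 0.3 * np.sin(2 * np.pi * 5.0 * t) + 0.05 * rng.normal(size=t.size)

    f, pxx = welch(accel, fs=fs, nperseg=512)
    band = (f >= 2) & (f <= 12)                  # typical pathological tremor band
    peak = f[band][np.argmax(pxx[band])]
    print(f"dominant tremor frequency ≈ {peak:.1f} Hz")
    ```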

  5. Microscopic quantification of bacterial invasion by a novel antibody-independent staining method.

    PubMed

    Agerer, Franziska; Waeckerle, Stephanie; Hauck, Christof R

    2004-10-01

    Microscopic discrimination between extracellular and invasive, intracellular bacteria is a valuable technique in microbiology and immunology. We describe a novel fluorescence staining protocol, called FITC-biotin-avidin (FBA) staining, which allows the differentiation between extracellular and intracellular bacteria and is independent of specific antibodies directed against the microorganisms. FBA staining of eukaryotic cells infected with Gram-negative bacteria of the genus Neisseria or the Gram-positive pathogen Staphylococcus aureus is employed to validate the novel technique. The quantitative evaluation of intracellular pathogens by the FBA staining protocol yields identical results compared to parallel samples stained with conventional, antibody-dependent methods. FBA staining eliminates the need for cell permeabilization, resulting in robust and rapid detection of invasive microbes. Taken together, FBA staining provides a reliable and convenient alternative for the differential detection of intracellular and extracellular bacteria and should be a valuable technical tool for the quantitative analysis of the invasive properties of pathogenic bacteria and other microorganisms.

  6. Progress in quantitative GPR development at CNDE

    NASA Astrophysics Data System (ADS)

    Eisenmann, David; Margetan, F. J.; Chiou, C.-P.; Roberts, Ron; Wendt, Scott

    2014-02-01

    Ground penetrating radar (GPR) uses electromagnetic (EM) radiation pulses to locate and map embedded objects. Commercial GPR instruments are generally geared toward producing images showing the location and extent of buried objects, and often do not make full use of available absolute amplitude information. At the Center for Nondestructive Evaluation (CNDE) at Iowa State University efforts are underway to develop a more quantitative approach to GPR inspections in which absolute amplitudes and spectra of measured signals play a key role. Guided by analogous work in ultrasonic inspection, there are three main thrusts to the effort. These focus, respectively, on the development of tools for: (1) analyzing raw GPR data; (2) measuring the EM properties of soils and other embedding media; and (3) simulating GPR inspections. This paper reviews progress in each category. The ultimate goal of the work is to develop model-based simulation tools that can be used to assess the usefulness of GPR for a given inspection scenario, to optimize inspection choices, and to determine inspection reliability.

  7. Screening of groundwater remedial alternatives for brownfield sites: a comprehensive method integrated MCDA with numerical simulation.

    PubMed

    Li, Wei; Zhang, Min; Wang, Mingyu; Han, Zhantao; Liu, Jiankai; Chen, Zhezhou; Liu, Bo; Yan, Yan; Liu, Zhu

    2018-06-01

    Brownfield site pollution and remediation is an urgent environmental issue worldwide. The screening and assessment of remedial alternatives is especially complex, owing to multiple criteria involving technical, economic, and policy considerations. To help decision-makers select remedial alternatives efficiently, the criteria framework developed by the U.S. EPA is improved and a comprehensive method that integrates multiple criteria decision analysis (MCDA) with numerical simulation is presented in this paper. The criteria framework is modified and classified into three categories: qualitative, semi-quantitative, and quantitative criteria. The MCDA method AHP-PROMETHEE (analytical hierarchy process-preference ranking organization method for enrichment evaluation) is used to determine the priority ranking of the remedial alternatives, and solute transport simulation is conducted to assess remedial efficiency. A case study of a brownfield site in Cangzhou, northern China, is presented to demonstrate the screening method. The results show that the systematic method provides a reliable way to quantify the priority of the remedial alternatives.
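
    The AHP half of AHP-PROMETHEE derives criterion weights from the principal eigenvector of a pairwise-comparison matrix. The sketch below shows that step only, with an invented 3x3 matrix (technique vs. economy vs. policy); it is not the paper's actual weighting.

    ```python
    import numpy as np

    # Hedged sketch: AHP criterion weights via the principal eigenvector.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])          # hypothetical pairwise comparisons

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()

    consistency_index = (eigvals.real[k] - len(A)) / (len(A) - 1)
    print("weights:", np.round(weights, 3), "CI:", round(consistency_index, 3))
    ```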

  8. Impact of HIV type 1 subtype variation on viral RNA quantitation.

    PubMed

    Parekh, B; Phillips, S; Granade, T C; Baggs, J; Hu, D J; Respess, R

    1999-01-20

    We evaluated the performance of three HIV-1 RNA quantitation methods (Amplicor HIV-1 MONITOR-1.0, NASBA, and Quantiplex HIV RNA 2.0 [branched DNA (bDNA)]) using plasma specimens (N = 60) from individuals from Asia and Africa infected with one of three HIV-1 subtypes (A, Thai B [B'] or E; N = 20 each). Our results demonstrate that of the 20 subtype A specimens, 19 were quantifiable by the bDNA assay compared with 15 by the MONITOR-1.0 and 13 by NASBA. Of those quantifiable, the mean log10 difference was 0.93 between bDNA and MONITOR-1.0 and 0.46 between bDNA and NASBA. For subtype B' specimens, the correlation among methods was better with only 2 specimens missed by NASBA and 3 by the bDNA assay. However the missed specimens had viral burden near the lower limit (1000 copies/ml) for these assays. For the 20 subtype E specimens, MONITOR-1.0 and NASBA quantified RNA in 17 and 14 specimens, respectively, as compared with 19 specimens quantified by the bDNA assay. The correlation among different assays, especially between bDNA/NASBA and MONITOR-1.0/NASBA, was poor, although the mean log10 difference for subtype E specimens was 0.4 between bDNA and MONITOR-1.0 and only 0.08 between bDNA and NASBA. The addition of a new primer set, designed for non-B HIV-1 subtypes, to the existing MONITOR assay (MONITOR-1.0+) resulted in RNA detection in all 60 specimens and significantly improved the efficiency of quantitation for subtypes A and E. Our data indicate that HIV-1 subtype variation can have a major influence on viral load quantitation by different methods. Periodic evaluation and modification of these quantitative methods may be necessary to ensure reliable quantification of divergent viruses.

  9. Critical analysis of consecutive unilateral cleft lip repairs: determining ideal sample size.

    PubMed

    Power, Stephanie M; Matic, Damir B

    2013-03-01

    Objective: Cleft surgeons often show 10 consecutive lip repairs to reduce presentation bias; however, the validity of this practice remains unknown. The purpose of this study is to determine the number of consecutive cases that represent average outcomes. Secondary objectives are to determine if outcomes correlate with cleft severity and to calculate interrater reliability. Design: Consecutive preoperative and 2-year postoperative photographs of the unilateral cleft lip-nose complex were randomized and evaluated by cleft surgeons. Parametric analysis was performed according to chronologic, consecutive order. The mean standard deviation over all raters enabled calculation of expected 95% confidence intervals around a mean tested for various sample sizes. Setting: Meeting of the American Cleft Palate-Craniofacial Association in 2009. Patients, Participants: Ten senior cleft surgeons evaluated 39 consecutive lip repairs. Main Outcome Measures: Preoperative severity and postoperative outcomes were evaluated using descriptive and quantitative scales. Results: Intraclass correlation coefficients for cleft severity and postoperative evaluations were 0.65 and 0.21, respectively. Outcomes did not correlate with cleft severity (P = .28). Calculations for 10 consecutive cases demonstrated wide 95% confidence intervals, spanning two points on both postoperative grading scales. Ninety-five percent confidence intervals narrowed to within one qualitative grade (±0.30) and one point (±0.50) on the 10-point scale for 27 consecutive cases. Conclusions: Larger numbers of consecutive cases (n > 27) are increasingly representative of average results, but less practical in presentation format. Ten consecutive cases lack statistical support. Cleft surgeons showed low interrater reliability for postoperative assessments, which may reflect personal bias when evaluating another surgeon's results.
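
    The sample-size argument above rests on the 95% confidence-interval half-width around a mean rating shrinking as 1.96·σ/√n. A minimal sketch of that calculation follows; the rater standard deviation is a placeholder, not the study's pooled value.

    ```python
    import numpy as np

    # Hedged sketch: expected 95% CI half-width vs. number of consecutive cases.
    sigma = 1.4                        # hypothetical SD of postoperative ratings
    for n in (10, 27, 39):
        half_width = 1.96 * sigma / np.sqrt(n)
        print(f"n = {n:2d}: 95% CI ≈ mean ± {half_width:.2f} points")
    ```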

  10. Reporting and Interpreting Quantitative Research Findings: What Gets Reported and Recommendations for the Field

    ERIC Educational Resources Information Center

    Larson-Hall, Jenifer; Plonsky, Luke

    2015-01-01

    This paper presents a set of guidelines for reporting on five types of quantitative data issues: (1) Descriptive statistics, (2) Effect sizes and confidence intervals, (3) Instrument reliability, (4) Visual displays of data, and (5) Raw data. Our recommendations are derived mainly from various professional sources related to L2 research but…

  11. Quantitative Assessment of Motor and Sensory/Motor Acquisition in Handicapped and Nonhandicapped Infants and Young Children. Volume III: Replication of the Procedures.

    ERIC Educational Resources Information Center

    Guess, Doug; And Others

    Ten replication studies based on quantitative procedures developed to measure motor and sensory/motor skill acquisition among handicapped and nonhandicapped infants and children are presented. Each study follows the original assessment procedures, and emphasizes the stability of interobserver reliability across time, consistency in the response…

  12. Students' Self-Reflections on Their Personality Scores Applied to the Processes of Learning and Achievement

    ERIC Educational Resources Information Center

    Mcilroy, David; Todd, Valerie; Palmer-Conn, Sue; Poole, Karen

    2016-01-01

    Research on personality in the educational context has primarily focused on quantitative approaches, so this study used a mixed methods approach to capture the boarder aspects of students' learning processes. Goals were to ensure that student responses were reliable and normal (quantitative data), and to examine qualitative reflections on…

  13. Reliability of Fault Tolerant Control Systems. Part 1

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2001-01-01

    This paper reports Part I of a two-part effort that is intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized. As a consequence, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single point failures. The utility of some existing software tools for assessing the reliability of fault tolerant control systems is also discussed. Coverage modeling is attempted in Part II in a way that captures its dependence on the control performance and on the diagnostic resolution.
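
    The affine-in-coverage behaviour can be illustrated with a textbook Markov model (not the paper's specific model): a duplex system with per-unit failure rate λ, where a first failure is successfully covered with probability c. Expanding R(t) for small λt gives R ≈ 1 − 2(1 − c)λt, i.e. early-life reliability is approximately affine in c. The sketch below evaluates the closed-form solution; all rates are hypothetical.

    ```python
    import numpy as np

    # Hedged sketch: reliability of a duplex system with imperfect coverage c.
    # States: both units up -> one unit up (after a covered failure) -> failed.
    def duplex_reliability(t, lam, c):
        p2 = np.exp(-2 * lam * t)                                 # both units healthy
        p1 = 2 * c * (np.exp(-lam * t) - np.exp(-2 * lam * t))    # one covered failure
        return p2 + p1

    t, lam = 100.0, 1e-4                                          # hours, failures/hour
    for c in (0.90, 0.95, 0.99):
        print(f"c = {c:.2f}: R(t) = {duplex_reliability(t, lam, c):.6f}")
    ```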

  14. A Quantitative Socio-hydrological Characterization of Water Security in Large-Scale Irrigation Systems

    NASA Astrophysics Data System (ADS)

    Siddiqi, A.; Muhammad, A.; Wescoat, J. L., Jr.

    2017-12-01

    Large-scale, legacy canal systems, such as the irrigation infrastructure in the Indus Basin in Punjab, Pakistan, have been primarily conceived, constructed, and operated with a techno-centric approach. The emerging socio-hydrological approaches provide a new lens for studying such systems to potentially identify fresh insights for addressing contemporary challenges of water security. In this work, using the partial definition of water security as "the reliable availability of an acceptable quantity and quality of water", supply reliability is construed as a partial measure of water security in irrigation systems. A set of metrics are used to quantitatively study reliability of surface supply in the canal systems of Punjab, Pakistan using an extensive dataset of 10-daily surface water deliveries over a decade (2007-2016) and of high frequency (10-minute) flow measurements over one year. The reliability quantification is based on comparison of actual deliveries and entitlements, which are a combination of hydrological and social constructs. The socio-hydrological lens highlights critical issues of how flows are measured, monitored, perceived, and experienced from the perspective of operators (government officials) and users (famers). The analysis reveals varying levels of reliability (and by extension security) of supply when data is examined across multiple temporal and spatial scales. The results shed new light on evolution of water security (as partially measured by supply reliability) for surface irrigation in the Punjab province of Pakistan and demonstrate that "information security" (defined as reliable availability of sufficiently detailed data) is vital for enabling water security. It is found that forecasting and management (that are social processes) lead to differences between entitlements and actual deliveries, and there is significant potential to positively affect supply reliability through interventions in the social realm.

  15. The long-term reliability of static and dynamic quantitative sensory testing in healthy individuals.

    PubMed

    Marcuzzi, Anna; Wrigley, Paul J; Dean, Catherine M; Adams, Roger; Hush, Julia M

    2017-07-01

    Quantitative sensory tests (QSTs) have been increasingly used to investigate alterations in somatosensory function in a wide range of painful conditions. The interpretation of these findings is based on the assumption that the measures are stable and reproducible. To date, reliability of QST has been investigated for short test-retest intervals. The aim of this study was to investigate the long-term reliability of a multimodal QST assessment in healthy people, with testing conducted on 3 occasions over 4 months. Forty-two healthy people were enrolled in the study. Static and dynamic tests were performed, including cold and heat pain threshold (CPT, HPT), mechanical wind-up [wind-up ratio (WUR)], pressure pain threshold (PPT), 2-point discrimination (TPD), and conditioned pain modulation (CPM). Systematic bias, relative reliability and agreement were analysed using repeated measure analysis of variance, intraclass correlation coefficients (ICC3,1) and SE of the measurement (SEM), respectively. Static QST (CPT, HPT, PPT, and TPD) showed good-to-excellent reliability (ICCs: 0.68-0.90). Dynamic QST (WUR and CPM) showed poor-to-good reliability (ICCs: 0.35-0.61). A significant linear decrease over time was observed for mechanical QST at the back (PPT and TPD) and for CPM (P < 0.01). Static QST were stable over a period of 4 months; however, a small systematic decrease over time was observed for mechanical QST. Dynamic QST showed considerable variability over time; in particular, CPM using PPT as the test stimulus did not show adequate reliability, suggesting that this test paradigm may be less useful for monitoring individuals over time.
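
    The relative reliability and agreement statistics named above can be computed from an n-subjects by k-sessions score matrix. The sketch below implements the standard two-way mixed-effects ICC(3,1) and takes the SEM as the square root of the error mean square (one common convention); the pain-threshold numbers are hypothetical.

    ```python
    import numpy as np

    # Minimal sketch: ICC(3,1) and SEM for repeated QST scores
    # (rows = subjects, columns = test sessions).
    def icc_3_1_and_sem(x):
        x = np.asarray(x, dtype=float)
        n, k = x.shape
        grand = x.mean()
        ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
        ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # sessions
        ss_err = ((x - grand) ** 2).sum() - (n - 1) * ms_rows - (k - 1) * ms_cols
        ms_err = ss_err / ((n - 1) * (k - 1))
        icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
        sem = np.sqrt(ms_err)            # SEM as sqrt of the error mean square
        return icc, sem

    # hypothetical pressure pain thresholds (kPa), 5 subjects x 3 sessions
    scores = [[320, 310, 305], [450, 460, 440], [280, 290, 285],
              [510, 495, 500], [360, 355, 350]]
    print(icc_3_1_and_sem(scores))
    ```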

  16. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires.

    PubMed

    Helmerhorst, Hendrik J F; Brage, Søren; Warren, Janet; Besson, Herve; Ekelund, Ulf

    2012-08-31

    Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62-0.71 for existing, and 0.74-0.76 for new PAQs. Median validity coefficients ranged from 0.30-0.39 for existing, and from 0.25-0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument.

  17. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires

    PubMed Central

    2012-01-01

    Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62–0.71 for existing, and 0.74–0.76 for new PAQs. Median validity coefficients ranged from 0.30–0.39 for existing, and from 0.25–0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument. PMID:22938557

  18. Real-time PCR assays for the quantitation of rDNA from apricot and other plant species in marzipan.

    PubMed

    Haase, Ilka; Brüning, Philipp; Matissek, Reinhard; Fischer, Markus

    2013-04-10

    Marzipan or marzipan raw paste is a typical German sweet which is consumed directly or is used as an ingredient in the bakery industry/confectionery (e.g., in stollen) and as filling for chocolate candies. Almonds (blanched and peeled) and sugar are the only ingredients for marzipan production according to German food guidelines. Especially for the confectionery industry, the use of persipan, which contains apricot or peach kernels instead of almonds, is preferred due to its stronger aroma. In most of the companies, both raw pastes are produced, in most cases on the same production line, running the risk of an unintended cross contamination. Additionally, due to high almond market values, dilutions of marzipan with cheaper seeds may occur. Especially in the case of apricot and almond, the close relationship of both species is a challenge for the analysis. DNA based methods for the qualitative detection of apricot, peach, pea, bean, lupine, soy, cashew, pistachio, and chickpea in marzipan have recently been published. In this study, different quantitation strategies on the basis of real-time PCR have been evaluated and a relative quantitation method with a reference amplification product was shown to give the best results. As the real-time PCR is based on the high copy rDNA-cluster, even contaminations <1% can be reliably quantitated.
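
    Relative quantitation against a reference amplification product can be sketched with a standard efficiency-corrected ratio; the Ct values, amplification efficiencies, and 1% calibrator below are hypothetical and are not the assay parameters published in the paper.

    ```python
    # Target (e.g. apricot rDNA) relative to a reference amplification product,
    # using amount ~ E**(-Ct); efficiencies default to 2.0 (100 % efficiency).
    def relative_amount(ct_target, ct_ref, e_target=2.0, e_ref=2.0):
        return (e_ref ** ct_ref) / (e_target ** ct_target)

    # hypothetical Ct values from a marzipan sample and a 1 % apricot calibrator
    sample = relative_amount(ct_target=31.2, ct_ref=18.4)
    calibrator = relative_amount(ct_target=30.1, ct_ref=18.5)
    print("estimated apricot content [%]:", sample / calibrator * 1.0)
    ```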

  19. On the construction of a ground truth framework for evaluating voxel-based diffusion tensor MRI analysis methods.

    PubMed

    Van Hecke, Wim; Sijbers, Jan; De Backer, Steve; Poot, Dirk; Parizel, Paul M; Leemans, Alexander

    2009-07-01

    Although many studies are starting to use voxel-based analysis (VBA) methods to compare diffusion tensor images between healthy and diseased subjects, it has been demonstrated that VBA results depend heavily on parameter settings and implementation strategies, such as the applied coregistration technique, smoothing kernel width, statistical analysis, etc. In order to investigate the effect of different parameter settings and implementations on the accuracy and precision of the VBA results quantitatively, ground truth knowledge regarding the underlying microstructural alterations is required. To address the lack of such a gold standard, simulated diffusion tensor data sets are developed, which can model an array of anomalies in the diffusion properties of a predefined location. These data sets can be employed to evaluate the numerous parameters that characterize the pipeline of a VBA algorithm and to compare the accuracy, precision, and reproducibility of different post-processing approaches quantitatively. We are convinced that the use of these simulated data sets can improve the understanding of how different diffusion tensor image post-processing techniques affect the outcome of VBA. In turn, this may possibly lead to a more standardized and reliable evaluation of diffusion tensor data sets of large study groups with a wide range of white matter altering pathologies. The simulated DTI data sets will be made available online (http://www.dti.ua.ac.be).

  20. Checklist to operationalize measurement characteristics of patient-reported outcome measures.

    PubMed

    Francis, David O; McPheeters, Melissa L; Noud, Meaghan; Penson, David F; Feurer, Irene D

    2016-08-02

    The purpose of this study was to advance a checklist of evaluative criteria designed to assess patient-reported outcome (PRO) measures' developmental measurement properties and applicability, which can be used by systematic reviewers, researchers, and clinicians with a varied range of expertise in psychometric measure development methodology. A directed literature search was performed to identify original studies, textbooks, consensus guidelines, and published reports that propose criteria for assessing the quality of PRO measures. Recommendations from these sources were iteratively distilled into a checklist of key attributes. Preliminary items underwent evaluation through 24 cognitive interviews with clinicians and quantitative researchers. Six measurement theory methodological novices independently applied the final checklist to assess six PRO measures encompassing a variety of methods, applications, and clinical constructs. Agreement between novice and expert scores was assessed. The distillation process yielded an 18-item checklist with six domains: (1) conceptual model, (2) content validity, (3) reliability, (4) construct validity, (5) scoring and interpretation, and (6) respondent burden and presentation. With minimal instruction, good agreement in checklist item ratings was achieved between quantitative researchers with expertise in measurement theory and less experienced clinicians (mean kappa 0.70; range 0.66-0.87). We present a simplified checklist that can help guide systematic reviewers, researchers, and clinicians with varied measurement theory expertise to evaluate the strengths and weaknesses of candidate PRO measures' developmental properties and their appropriateness for specific applications.
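
    The inter-rater agreement reported above is Cohen's kappa; a minimal sketch of its computation follows, with hypothetical checklist-item ratings (the labels and values are not taken from the study).

    ```python
    import numpy as np

    # Cohen's kappa for two raters scoring the same set of checklist items.
    def cohens_kappa(rater1, rater2):
        labels = sorted(set(rater1) | set(rater2))
        idx = {lab: i for i, lab in enumerate(labels)}
        m = np.zeros((len(labels), len(labels)))
        for a, b in zip(rater1, rater2):
            m[idx[a], idx[b]] += 1
        m /= m.sum()
        p_obs = np.trace(m)                            # observed agreement
        p_exp = (m.sum(axis=1) * m.sum(axis=0)).sum()  # chance agreement
        return (p_obs - p_exp) / (1 - p_exp)

    novice = ["adequate", "adequate", "poor", "adequate", "poor", "adequate"]
    expert = ["adequate", "poor",     "poor", "adequate", "poor", "adequate"]
    print(cohens_kappa(novice, expert))
    ```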

  1. Less label, more free: approaches in label-free quantitative mass spectrometry.

    PubMed

    Neilson, Karlie A; Ali, Naveid A; Muralidharan, Sridevi; Mirzaei, Mehdi; Mariani, Michael; Assadourian, Gariné; Lee, Albert; van Sluyter, Steven C; Haynes, Paul A

    2011-02-01

    In this review we examine techniques, software, and statistical analyses used in label-free quantitative proteomics studies for area under the curve and spectral counting approaches. Recent advances in the field are discussed in an order that reflects a logical workflow design. Examples of studies that follow this design are presented to highlight the requirement for statistical assessment and further experiments to validate results from label-free quantitation. Limitations of label-free approaches are considered, label-free approaches are compared with labelling techniques, and forward-looking applications for label-free quantitative data are presented. We conclude that label-free quantitative proteomics is a reliable, versatile, and cost-effective alternative to labelled quantitation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
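
    For the spectral-counting branch discussed in the review, one widely used normalization is the normalized spectral abundance factor (NSAF); the sketch below is a generic illustration with made-up counts, not code from any of the reviewed studies.

    ```python
    import numpy as np

    # NSAF: spectral counts divided by protein length, renormalized so the
    # values for all proteins in a run sum to one.
    def nsaf(spectral_counts, protein_lengths):
        saf = np.asarray(spectral_counts, float) / np.asarray(protein_lengths, float)
        return saf / saf.sum()

    counts  = [120, 45, 8, 300]       # hypothetical spectra per protein
    lengths = [550, 230, 410, 1200]   # amino-acid lengths of the same proteins
    print(nsaf(counts, lengths))      # abundance fractions comparable across runs
    ```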

  2. Multigrid-based reconstruction algorithm for quantitative photoacoustic tomography

    PubMed Central

    Li, Shengfu; Montcel, Bruno; Yuan, Zhen; Liu, Wanyu; Vray, Didier

    2015-01-01

    This paper proposes a multigrid inversion framework for quantitative photoacoustic tomography reconstruction. The forward model of optical fluence distribution and the inverse problem are solved at multiple resolutions. A fixed-point iteration scheme is formulated for each resolution and used as a cost function. The simulated and experimental results for quantitative photoacoustic tomography reconstruction show that the proposed multigrid inversion can dramatically reduce the required number of iterations for the optimization process without loss of reliability in the results. PMID:26203371
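
    The coarse-to-fine idea behind a multigrid fixed-point scheme can be shown on a toy one-dimensional problem; the forward model, grid sizes, and iteration counts below are assumptions for illustration and are unrelated to the photoacoustic forward model used in the paper.

    ```python
    import numpy as np

    # Toy inversion of y = A(x) with A(x) = x + 0.1*sin(x), solved by the
    # fixed-point iteration x <- y - 0.1*sin(x): a cheap coarse-grid solve
    # warm-starts the fine-grid solve, so few fine iterations are needed.
    def fixed_point(y, x0, iters):
        x = x0.copy()
        for _ in range(iters):
            x = y - 0.1 * np.sin(x)
        return x

    x_true = np.linspace(0.0, 3.0, 257)
    y_fine = x_true + 0.1 * np.sin(x_true)

    y_coarse = y_fine[::4]                                  # coarse resolution
    x_coarse = fixed_point(y_coarse, np.zeros_like(y_coarse), iters=20)
    x_warm = np.interp(np.arange(257), np.arange(257)[::4], x_coarse)
    x_fine = fixed_point(y_fine, x_warm, iters=5)           # few fine iterations
    print(np.abs(x_fine - x_true).max())                    # small residual error
    ```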

  3. Digital tooth-based superimposition method for assessment of alveolar bone levels on cone-beam computed tomography images.

    PubMed

    Romero-Delmastro, Alejandro; Kadioglu, Onur; Currier, G Frans; Cook, Tanner

    2014-08-01

    Cone-beam computed tomography images have been previously used for evaluation of alveolar bone levels around teeth before, during, and after orthodontic treatment. Protocols described in the literature have been vague, have used unstable landmarks, or have required several software programs, file conversions, or hand tracings, among other factors that could compromise the precision of the measurements. The purposes of this article are to describe a totally digital tooth-based superimposition method for the quantitative assessment of alveolar bone levels and to evaluate its reliability. Ultra cone-beam computed tomography images (0.1-mm reconstruction) from 10 subjects were obtained from the data pool of the University of Oklahoma; 80 premolars were measured twice by the same examiner and a third time by a second examiner to determine alveolar bone heights and thicknesses before and more than 6 months after orthodontic treatment using OsiriX (version 3.5.1; Pixmeo, Geneva, Switzerland). Intraexaminer and interexaminer reliabilities were evaluated, and Dahlberg's formula was used to calculate the error of the measurements. Cross-sectional and longitudinal evaluations of alveolar bone levels were possible using a digital tooth-based superimposition method. The mean differences for buccal alveolar crest heights and thicknesses were below 0.10 mm for the same examiner and below 0.17 mm for all examiners. The ranges of errors for any measurement were between 0.02 and 0.23 mm for intraexaminer errors, and between 0.06 and 0.29 mm for interexaminer errors. This protocol can be used for cross-sectional or longitudinal assessment of alveolar bone levels with low interexaminer and intraexaminer errors, and it eliminates the use of less reliable or less stable landmarks and the need for multiple software programs and image printouts. Standardization of the methods for bone assessment in orthodontics is necessary; this method could be the answer to this need. Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
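
    The measurement error cited above follows Dahlberg's formula, ME = sqrt(sum(d_i^2) / (2n)), where the d_i are differences between duplicate measurements of the same sites; a short sketch with hypothetical bone-height values is given below.

    ```python
    import numpy as np

    # Dahlberg's method error for duplicate measurements of the same sites.
    def dahlberg_error(first_pass, second_pass):
        d = np.asarray(first_pass, float) - np.asarray(second_pass, float)
        return np.sqrt((d ** 2).sum() / (2 * len(d)))

    # hypothetical duplicate buccal alveolar crest heights (mm)
    m1 = [2.10, 1.85, 2.40, 3.05, 2.70]
    m2 = [2.15, 1.80, 2.35, 3.10, 2.78]
    print(dahlberg_error(m1, m2))
    ```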

  4. Differential reliability : probabilistic engineering applied to wood members in bending-tension

    Treesearch

    Stanley K. Suddarth; Frank E. Woeste; William L. Galligan

    1978-01-01

    Reliability analysis is a mathematical technique for appraising the design and materials of engineered structures to provide a quantitative estimate of probability of failure. Two or more cases which are similar in all respects but one may be analyzed by this method; the contrast between the probabilities of failure for these cases allows strong analytical focus on the...

  5. Statistical methodology: II. Reliability and validity assessment in study design, Part B.

    PubMed

    Karras, D J

    1997-02-01

    Validity measures the correspondence between a test and other purported measures of the same or similar qualities. When a reference standard exists, a criterion-based validity coefficient can be calculated. If no such standard is available, the concepts of content and construct validity may be used, but quantitative analysis may not be possible. The Pearson and Spearman tests of correlation are often used to assess the correspondence between tests, but do not account for measurement biases and may yield misleading results. Techniques that measure intertest differences may be more meaningful in validity assessment, and the kappa statistic is useful for analyzing categorical variables. Questionnaires often can be designed to allow quantitative assessment of reliability and validity, although this may be difficult. Inclusion of homogeneous questions is necessary to assess reliability. Analysis is enhanced by using Likert scales or similar techniques that yield ordinal data. Validity assessment of questionnaires requires careful definition of the scope of the test and comparison with previously validated tools.

  6. Comparison of Two Commercial Automated Nucleic Acid Extraction and Integrated Quantitation Real-Time PCR Platforms for the Detection of Cytomegalovirus in Plasma

    PubMed Central

    Tsai, Huey-Pin; Tsai, You-Yuan; Lin, I-Ting; Kuo, Pin-Hwa; Chen, Tsai-Yun; Chang, Kung-Chao; Wang, Jen-Ren

    2016-01-01

    Quantitation of cytomegalovirus (CMV) viral load in transplant patients has become a standard practice for monitoring the response to antiviral therapy. The cut-off values of CMV viral load assays for preemptive therapy are different due to the various assay designs employed. To establish a sensitive and reliable diagnostic assay for preemptive therapy of CMV infection, two commercial automated platforms, the m2000sp extraction system integrated with the Abbott RealTime assay (m2000rt) and the Roche COBAS AmpliPrep extraction system integrated with the COBAS TaqMan (CAP/CTM), were evaluated using WHO international CMV standards and 110 plasma specimens from transplant patients. The performance characteristics, correlation, and workflow of the two platforms were investigated. The Abbott RealTime assay correlated well with the Roche CAP/CTM assay (R2 = 0.9379, P<0.01). The Abbott RealTime assay exhibited higher sensitivity for the detection of CMV viral load, and viral load values measured with the Abbott RealTime assay were on average 0.76 log10 IU/mL higher than those measured with the Roche CAP/CTM assay (P<0.0001). In a workflow analysis on a small batch size at one time, the Roche CAP/CTM platform had a shorter hands-on time than the Abbott RealTime platform. In conclusion, these two assays can provide reliable data for different purposes in a clinical virology laboratory setting. PMID:27494707
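
    The kind of platform comparison summarized above (mean log10 bias and linear correlation) can be reproduced with a few lines; the paired viral-load values below are hypothetical and do not come from the study's 110 specimens.

    ```python
    import numpy as np

    # Mean bias (Abbott minus Roche, log10 IU/mL) and R^2 of a linear relation.
    abbott = np.array([3.1, 4.2, 2.8, 5.0, 3.9])   # hypothetical log10 IU/mL
    roche  = np.array([2.4, 3.5, 2.0, 4.2, 3.1])

    bias = (abbott - roche).mean()
    r2 = np.corrcoef(roche, abbott)[0, 1] ** 2
    print(f"mean bias = {bias:.2f} log10 IU/mL, R^2 = {r2:.3f}")
    ```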

  7. A Complete Color Normalization Approach to Histopathology Images Using Color Cues Computed From Saturation-Weighted Statistics.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2015-07-01

    In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging condition. Method: Different from existing normalization methods that either address partial cause of color variation or lump them together, our method identifies causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method from the aspects of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness in terms of histological information preservation. As the saturation-weighted statistics proposed in this study generates stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, as the proposed method is the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution would be useful to mitigate effects of color variation in pathology images on subsequent quantitative analysis.

  8. Genome-Wide Identification and Evaluation of Reference Genes for Quantitative RT-PCR Analysis during Tomato Fruit Development.

    PubMed

    Cheng, Yuan; Bian, Wuying; Pang, Xin; Yu, Jiahong; Ahammed, Golam J; Zhou, Guozhi; Wang, Rongqing; Ruan, Meiying; Li, Zhimiao; Ye, Qingjing; Yao, Zhuping; Yang, Yuejian; Wan, Hongjian

    2017-01-01

    Gene expression analysis in tomato fruit has drawn increasing attention nowadays. Quantitative real-time PCR (qPCR) is a routine technique for gene expression analysis. In qPCR operation, reliability of results largely depends on the choice of appropriate reference genes (RGs). Although tomato is a model for fruit biology study, few RGs for qPCR analysis in tomato fruit have yet been developed. In this study, we initially identified 38 most stably expressed genes based on a tomato transcriptome data set, and their expression stabilities were further determined in a set of tomato fruit samples of four different fruit developmental stages (immature, mature green, breaker, mature red) using qPCR analysis. Two statistical algorithms, geNorm and Normfinder, concordantly determined the superiority of these identified putative RGs. Notably, SlFRG05 (Solyc01g104170), SlFRG12 (Solyc04g009770), SlFRG16 (Solyc10g081190), SlFRG27 (Solyc06g007510), and SlFRG37 (Solyc11g005330) proved to be suitable RGs for tomato fruit development study. Further analysis using geNorm indicates that the combined use of SlFRG03 (Solyc02g063070) and SlFRG27 would provide more reliable normalization results in qPCR experiments. The identified RGs in this study will be beneficial for future qPCR analysis in tomato fruit development studies, as well as for the potential identification of optimal normalization controls in other plant species.

  9. Genome-Wide Identification and Evaluation of Reference Genes for Quantitative RT-PCR Analysis during Tomato Fruit Development

    PubMed Central

    Cheng, Yuan; Bian, Wuying; Pang, Xin; Yu, Jiahong; Ahammed, Golam J.; Zhou, Guozhi; Wang, Rongqing; Ruan, Meiying; Li, Zhimiao; Ye, Qingjing; Yao, Zhuping; Yang, Yuejian; Wan, Hongjian

    2017-01-01

    Gene expression analysis in tomato fruit has drawn increasing attention nowadays. Quantitative real-time PCR (qPCR) is a routine technique for gene expression analysis. In qPCR operation, reliability of results largely depends on the choice of appropriate reference genes (RGs). Although tomato is a model for fruit biology study, few RGs for qPCR analysis in tomato fruit have yet been developed. In this study, we initially identified 38 most stably expressed genes based on a tomato transcriptome data set, and their expression stabilities were further determined in a set of tomato fruit samples of four different fruit developmental stages (immature, mature green, breaker, mature red) using qPCR analysis. Two statistical algorithms, geNorm and Normfinder, concordantly determined the superiority of these identified putative RGs. Notably, SlFRG05 (Solyc01g104170), SlFRG12 (Solyc04g009770), SlFRG16 (Solyc10g081190), SlFRG27 (Solyc06g007510), and SlFRG37 (Solyc11g005330) proved to be suitable RGs for tomato fruit development study. Further analysis using geNorm indicates that the combined use of SlFRG03 (Solyc02g063070) and SlFRG27 would provide more reliable normalization results in qPCR experiments. The identified RGs in this study will be beneficial for future qPCR analysis in tomato fruit development studies, as well as for the potential identification of optimal normalization controls in other plant species. PMID:28900431
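
    The geNorm stability measure used in both records above can be sketched as follows: for each candidate gene, M is the mean standard deviation of its pairwise log2 expression ratios with every other candidate (Vandesompele et al. 2002). The expression matrix below is hypothetical.

    ```python
    import numpy as np

    # geNorm-style stability measure M; lower M indicates a more stable
    # reference gene. expr is a samples x genes matrix of relative quantities.
    def genorm_m(expr):
        log_expr = np.log2(np.asarray(expr, dtype=float))
        n_genes = log_expr.shape[1]
        m = np.zeros(n_genes)
        for j in range(n_genes):
            pairwise_sd = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                           for k in range(n_genes) if k != j]
            m[j] = np.mean(pairwise_sd)
        return m

    # hypothetical relative quantities, 4 fruit samples x 3 candidate genes
    expr = [[1.00, 0.95, 2.1],
            [1.10, 1.05, 0.7],
            [0.90, 0.92, 1.9],
            [1.05, 1.00, 0.4]]
    print(genorm_m(expr))   # the third gene should score worst (highest M)
    ```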

  10. Quantitative ultrasound method for assessing stress-strain properties and the cross-sectional area of Achilles tendon

    NASA Astrophysics Data System (ADS)

    Du, Yi-Chun; Chen, Yung-Fu; Li, Chien-Ming; Lin, Chia-Hung; Yang, Chia-En; Wu, Jian-Xing; Chen, Tainsong

    2013-12-01

    The Achilles tendon is one of the most commonly observed tendons injured with a variety of causes, such as trauma, overuse and degeneration, in the human body. Rupture and tendinosis are relatively common for this strong tendon. Stress-strain properties and shape change are important biomechanical properties of the tendon to assess surgical repair or healing progress. Currently, there are rather limited non-invasive methods available for precisely quantifying the in vivo biomechanical properties of the tendons. The aim of this study was to apply quantitative ultrasound (QUS) methods, including ultrasonic attenuation and speed of sound (SOS), to investigate porcine tendons in different stress-strain conditions. In order to find a reliable method to evaluate the change of tendon shape, ultrasound measurement was also utilized for measuring tendon thickness and compared with the change in tendon cross-sectional area under different stress. A total of 15 porcine tendons of hind trotters were examined. The test results show that the attenuation and broadband ultrasound attenuation decreased and the SOS increased by a smaller magnitude as the uniaxial loading of the stress-strain upon tendons increased. Furthermore, the tendon thickness measured with the ultrasound method was significantly correlated with tendon cross-sectional area (Pearson coefficient = 0.86). These results also indicate that attenuation of QUS and ultrasonic thickness measurement are reliable and potential parameters for assessing biomechanical properties of tendons. Further investigations are needed to warrant the application of the proposed method in a clinical setting.

  11. Use magnetic resonance imaging to assess articular cartilage

    PubMed Central

    Wang, Yuanyuan; Wluka, Anita E.; Jones, Graeme; Ding, Changhai

    2012-01-01

    Magnetic resonance imaging (MRI) enables a noninvasive, three-dimensional assessment of the entire joint, simultaneously allowing the direct visualization of articular cartilage. Thus, MRI has become the imaging modality of choice in both clinical and research settings of musculoskeletal diseases, particularly for osteoarthritis (OA). Although radiography, the current gold standard for the assessment of OA, has had recent significant technical advances, radiographic methods have significant limitations when used to measure disease progression. MRI allows accurate and reliable assessment of articular cartilage which is sensitive to change, providing the opportunity to better examine and understand preclinical and very subtle early abnormalities in articular cartilage, prior to the onset of radiographic disease. MRI enables quantitative (cartilage volume and thickness) and semiquantitative assessment of articular cartilage morphology, and quantitative assessment of cartilage matrix composition. Cartilage volume and defects have demonstrated adequate validity, accuracy, reliability and sensitivity to change. They are correlated to radiographic changes and clinical outcomes such as pain and joint replacement. Measures of cartilage matrix composition show promise as they seem to relate to cartilage morphology and symptoms. MRI-derived cartilage measurements provide a useful tool for exploring the effect of modifiable factors on articular cartilage prior to clinical disease and identifying the potential preventive strategies. MRI represents a useful approach to monitoring the natural history of OA and evaluating the effect of therapeutic agents. MRI assessment of articular cartilage has tremendous potential for large-scale epidemiological studies of OA progression, and for clinical trials of treatment response to disease-modifying OA drugs. PMID:22870497

  12. Selection of reliable reference genes for quantitative real-time PCR gene expression analysis in Jute (Corchorus capsularis) under stress treatments

    PubMed Central

    Niu, Xiaoping; Qi, Jianmin; Zhang, Gaoyang; Xu, Jiantang; Tao, Aifen; Fang, Pingping; Su, Jianguang

    2015-01-01

    To accurately measure gene expression using quantitative reverse transcription PCR (qRT-PCR), reliable reference gene(s) are required for data normalization. Corchorus capsularis, an annual herbaceous fiber crop with predominant biodegradability and renewability, has not been investigated for the stability of reference genes with qRT-PCR. In this study, 11 candidate reference genes were selected and their expression levels were assessed using qRT-PCR. To account for the influence of experimental approach and tissue type, 22 different jute samples were selected from abiotic and biotic stress conditions as well as three different tissue types. The stability of the candidate reference genes was evaluated using geNorm, NormFinder, and BestKeeper programs, and the comprehensive rankings of gene stability were generated by aggregate analysis. For the biotic stress and NaCl stress subsets, ACT7 and RAN were suitable as stable reference genes for gene expression normalization. For the PEG stress subset, UBC and DnaJ were sufficient for accurate normalization. For the tissues subset, four reference genes TUBβ, UBI, EF1α, and RAN were sufficient for accurate normalization. The selected genes were further validated by comparing expression profiles of WRKY15 in various samples, and two stable reference genes were recommended for accurate normalization of qRT-PCR data. Our results provide researchers with appropriate reference genes for qRT-PCR in C. capsularis, and will facilitate gene expression study under these conditions. PMID:26528312

  13. Automated Quantitative Analysis of Retinal Microvasculature in Normal Eyes on Optical Coherence Tomography Angiography.

    PubMed

    Lupidi, Marco; Coscas, Florence; Cagini, Carlo; Fiore, Tito; Spaccini, Elisa; Fruttini, Daniela; Coscas, Gabriel

    2016-09-01

    To describe a new automated quantitative technique for displaying and analyzing macular vascular perfusion using optical coherence tomography angiography (OCT-A) and to determine a normative data set, which might be used as reference in identifying progressive changes due to different retinal vascular diseases. Reliability study. A retrospective review of 47 eyes of 47 consecutive healthy subjects imaged with a spectral-domain OCT-A device was performed in a single institution. Full-spectrum amplitude-decorrelation angiography generated OCT angiograms of the retinal superficial and deep capillary plexuses. A fully automated custom-built software was used to provide quantitative data on the foveal avascular zone (FAZ) features and the total vascular and avascular surfaces. A comparative analysis between central macular thickness (and volume) and FAZ metrics was performed. Repeatability and reproducibility were also assessed in order to establish the feasibility and reliability of the method. The comparative analysis between the superficial capillary plexus and the deep capillary plexus revealed a statistically significant difference (P < .05) in terms of FAZ perimeter, surface, and major axis and a not statistically significant difference (P > .05) when considering total vascular and avascular surfaces. A linear correlation was demonstrated between central macular thickness (and volume) and the FAZ surface. Coefficients of repeatability and reproducibility were less than 0.4, thus demonstrating high intraobserver repeatability and interobserver reproducibility for all the examined data. A quantitative approach on retinal vascular perfusion, which is visible on Spectralis OCT angiography, may offer an objective and reliable method for monitoring disease progression in several retinal vascular diseases. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Methods for Quantitative Creatinine Determination.

    PubMed

    Moore, John F; Sharer, J Daniel

    2017-04-06

    Reliable measurement of creatinine is necessary to assess kidney function, and also to quantitate drug levels and diagnostic compounds in urine samples. The most commonly used methods are based on the Jaffe principle of alkaline creatinine-picric acid complex color formation. However, other compounds commonly found in serum and urine may interfere with Jaffe creatinine measurements. Therefore, many laboratories have made modifications to the basic method to remove or account for these interfering substances. This appendix will summarize the basic Jaffe method, as well as a modified, automated version. Also described is a high performance liquid chromatography (HPLC) method that separates creatinine from contaminants prior to direct quantification by UV absorption. Lastly, a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method is described that uses stable isotope dilution to reliably quantify creatinine in any sample. This last approach has been recommended by experts in the field as a means to standardize all quantitative creatinine methods against an accepted reference. © 2017 John Wiley & Sons, Inc.

  15. [Development and application of morphological analysis method in Aspergillus niger fermentation].

    PubMed

    Tang, Wenjun; Xia, Jianye; Chu, Ju; Zhuang, Yingping; Zhang, Siliang

    2015-02-01

    Filamentous fungi are widely used in industrial fermentation. Particular fungal morphology acts as a critical index for a successful fermentation. To break the bottleneck of morphological analysis, we have developed a reliable method for fungal morphological analysis. By this method, we can prepare hundreds of pellet samples simultaneously and obtain quantitative morphological information at large scale quickly. This method can largely increase the accuracy and reliability of morphological analysis result. Based on that, the studies of Aspergillus niger morphology under different oxygen supply conditions and shear rate conditions were carried out. As a result, the morphological responding patterns of A. niger morphology to these conditions were quantitatively demonstrated, which laid a solid foundation for the further scale-up.

  16. Insights from Industry: A Quantitative Analysis of Engineers' Perceptions of Empathy and Care within Their Practice

    ERIC Educational Resources Information Center

    Hess, Justin L.; Strobel, Johannes; Pan, Rui; Wachter Morris, Carrie A.

    2017-01-01

    This study focuses on two seldom-investigated skills or dispositions aligned with engineering habits of mind--empathy and care. In order to conduct quantitative research, we designed, explored the underlying structure of, validated, and tested the reliability of the Empathy and Care Questionnaire (ECQ), a new psychometric instrument. In the second…

  17. Reviewing effectiveness of ankle assessment techniques for use in robot-assisted therapy.

    PubMed

    Zhang, Mingming; Davies, T Claire; Zhang, Yanxin; Xie, Shane

    2014-01-01

    This article provides a comprehensive review of studies that investigated ankle assessment techniques to better understand those that can be used in the real-time monitoring of rehabilitation progress for implementation in conjunction with robot-assisted therapy. Seventy-six publications published between January 1980 and August 2013 were selected based on eight databases. They were divided into two main categories (16 qualitative and 60 quantitative studies): 13 goniometer studies, 18 dynamometer studies, and 29 studies about innovative techniques. A total of 465 subjects participated in the 29 quantitative studies of innovative measurement techniques that may potentially be integrated in a real-time monitoring device, of which 19 studies included less than 10 participants. Results show that qualitative ankle assessment methods are not suitable for real-time monitoring in robot-assisted therapy, though they are reliable for certain patients, while the quantitative methods show great potential. The majority of quantitative techniques are reliable in measuring ankle kinematics and kinetics but are usually available only for use in the sagittal plane. Limited studies determine kinematics and kinetics in all three planes (sagittal, transverse, and frontal) where motions of the ankle joint and the subtalar joint actually occur.

  18. A Reproducible Computerized Method for Quantitation of Capillary Density using Nailfold Capillaroscopy.

    PubMed

    Cheng, Cynthia; Lee, Chadd W; Daskalakis, Constantine

    2015-10-27

    Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy to learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient's microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of a currently available, complex computer-based algorithm for quantitating capillaroscopy data (1). This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique.

  19. A Reproducible Computerized Method for Quantitation of Capillary Density using Nailfold Capillaroscopy

    PubMed Central

    Daskalakis, Constantine

    2015-01-01

    Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy to learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient's microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of a currently available, complex computer-based algorithm for quantitating capillaroscopy data.1 This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique. PMID:26554744
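
    A drastically simplified, hypothetical version of automated capillary counting is sketched below; the threshold, blob definition, and image are assumptions, and the published algorithm is semi-automated and considerably more involved.

    ```python
    import numpy as np
    from scipy import ndimage

    # Count dark blobs (capillary loops) in a grayscale nailfold image and
    # report density per millimetre of nailfold width.
    def capillary_density(gray, mm_width, threshold=0.4):
        mask = gray < threshold                   # capillaries darker than skin
        _, n_blobs = ndimage.label(mask)          # connected components
        return n_blobs / mm_width

    # hypothetical 0-1 grayscale image with three dark spots across a 1 mm field
    img = np.ones((50, 100))
    img[20:25, 10:14] = 0.2
    img[20:25, 45:49] = 0.2
    img[20:25, 80:84] = 0.2
    print(capillary_density(img, mm_width=1.0))   # -> 3.0 capillaries per mm
    ```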

  20. Eliciting management action: Using THERP to highlight human factors deficiencies for trip reduction programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuld, R.; Cybert, S.

    Methods and criteria for performing human factors evaluations of plant systems and procedures are well developed and available. For a design review to produce a positive impact on operations, however, it is not enough to simply document deficiencies and solutions. The results must be presented to management in a clear and compelling form that will direct attention to the heart of a problem and present proposed solutions in terms of explicit, quantified cost/benefits. A proactive program of trip reduction provides an excellent opportunity to accomplish human factors-related upgrades. As an evaluative context, trip reduction imposes a uniform goodness criterion on all situations: the probability of inadvertent plant trip. This in turn means that findings can be compared in terms of a common quantitative reference point: the cost of an inadvertent shutdown. To interpret human factors deficiencies in terms of trip probabilities, the Technique for Human Error Rate Prediction (THERP) can be used. THERP provides an accessible compilation of human reliability data for generic, discrete task elements. Sequences of such values are combined in standard event trees to determine the probability of failure (e.g., trip) for a given evolution. THERP is widely accepted as one of the best available alternatives for assessing human reliability.
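
    As a sketch of how THERP-style event trees turn human factors findings into cost/benefit figures, the following combines two hypothetical human error probabilities into a trip probability and an expected cost; the HEP values and cost are illustrative assumptions, not values taken from the THERP tables.

    ```python
    # Two-branch event tree: the operator must notice the alarm AND take the
    # correct recovery action; failure of either branch is assumed to trip the plant.
    hep_miss_alarm   = 1e-3        # hypothetical human error probability
    hep_wrong_action = 5e-3        # hypothetical human error probability

    p_trip = 1 - (1 - hep_miss_alarm) * (1 - hep_wrong_action)
    cost_of_trip = 500_000         # hypothetical cost of an inadvertent shutdown ($)

    print(f"probability of trip for this evolution: {p_trip:.2e}")
    print(f"expected cost per demand: ${p_trip * cost_of_trip:,.2f}")
    ```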
