Sample records for automatic quality assessment

  1. Automatic quality assessment of planetary images

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, P.; Muller, J.-P.

    2015-10-01

    A significant fraction of planetary images are corrupted beyond the point that much scientific meaning can be extracted. For example, transmission errors result in missing data which are unrecoverable. The available planetary image datasets include many such "bad data", which both occupy valuable scientific storage resources and create false impressions about planetary image availability for specific planetary objects or target areas. In this work, we demonstrate a pipeline that we have developed to automatically assess the quality of planetary images. Additionally, this method discriminates between different types of image degradation, such as low quality originating from camera flaws or low quality caused by atmospheric conditions. Examples of quality assessment results for Viking Orbiter imagery will also be presented.

  2. Algorithm for automatic forced spirometry quality assessment: technological developments.

    PubMed

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of the quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of an FS program in the community. Recent studies have demonstrated that automatic quality assessment based on the criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society), as implemented in commercially available equipment, can be markedly improved. To this end, an algorithm for automatically assessing the quality of FS was reported. The current research describes the mathematical development of the algorithm. An innovative analysis of the shape of the spirometric curve was performed, adding 23 new metrics to the 4 traditionally recommended by the ATS/ERS. The algorithm was created through a two-step iterative process: (1) an initial version using the standard FS curves recommended by the ATS; and (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterizing the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.

  3. Automatic assessment of the quality of patient positioning in mammography

    NASA Astrophysics Data System (ADS)

    Bülow, Thomas; Meetz, Kirsten; Kutra, Dominik; Netsch, Thomas; Wiemker, Rafael; Bergtholdt, Martin; Sabczynski, Jörg; Wieberneit, Nataly; Freund, Manuela; Schulze-Wenck, Ingrid

    2013-02-01

    Quality assurance has been recognized as crucial for the success of population-based breast cancer screening programs using x-ray mammography. Quality guidelines and criteria have been defined in the US as well as the European Union in order to ensure the quality of breast cancer screening. Taplin et al. report that incorrect positioning of the breast is the major image quality issue in screening mammography. Consequently, guidelines and criteria for correct positioning and for the assessment of positioning quality in mammograms play an important role in the quality standards. In this paper we present a system for the automatic evaluation of positioning quality in mammography according to the existing standardized criteria. This involves the automatic detection of anatomic landmarks in medio-lateral oblique (MLO) and cranio-caudal (CC) mammograms, namely the pectoral muscle, the mammilla, and the infra-mammary fold. Furthermore, the detected landmarks are assessed with respect to their proper presentation in the image. Finally, the geometric relations between the detected landmarks are investigated to assess the positioning quality. This includes evaluating whether the pectoral muscle is imaged down to the mammilla level, and whether the posterior nipple line diameter of the breast is consistent between the different views (MLO and CC) of the same breast. Results of the computerized assessment are compared to ground truth collected from two expert readers.
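
    A minimal sketch of the last criterion above, namely checking that the posterior nipple line (PNL) lengths measured in the MLO and CC views of the same breast agree. The landmark coordinates, the straight-line approximation of the pectoral muscle edge, and the 10 mm tolerance are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def pnl_length_mlo(nipple, pec_p1, pec_p2):
    """MLO view: perpendicular distance from the nipple to the pectoral
    muscle edge, approximated here by the straight line through two points."""
    nipple, p1, p2 = (np.asarray(v, dtype=float) for v in (nipple, pec_p1, pec_p2))
    d = p2 - p1
    w = nipple - p1
    # 2D cross-product magnitude / line length = point-to-line distance
    return abs(d[0] * w[1] - d[1] * w[0]) / np.hypot(*d)

def pnl_length_cc(nipple, chest_wall_x):
    """CC view: horizontal distance from the nipple to the chest wall edge."""
    return abs(float(nipple[0]) - float(chest_wall_x))

# hypothetical landmark coordinates, in millimetres
mlo_pnl = pnl_length_mlo(nipple=(62.0, 110.0), pec_p1=(0.0, 30.0), pec_p2=(35.0, 180.0))
cc_pnl = pnl_length_cc(nipple=(48.0, 95.0), chest_wall_x=5.0)

# commonly quoted positioning rule of thumb (assumption): the PNL lengths of the
# two views should agree to within roughly 10 mm
print(f"MLO PNL = {mlo_pnl:.1f} mm, CC PNL = {cc_pnl:.1f} mm, "
      f"consistent: {abs(mlo_pnl - cc_pnl) <= 10.0}")
```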

  4. Evaluation of automatic image quality assessment in chest CT - A human cadaver study.

    PubMed

    Franck, Caro; De Crop, An; De Roo, Bieke; Smeets, Peter; Vergauwen, Merel; Dewaele, Tom; Van Borsel, Mathias; Achten, Eric; Van Hoof, Tom; Bacher, Klaus

    2017-04-01

    The evaluation of clinical image quality (IQ) is important to optimize CT protocols and to keep patient doses as low as reasonably achievable. Considering the significant amount of effort needed for human observer studies, automatic IQ tools are a promising alternative. The purpose of this study was to evaluate automatic IQ assessment in chest CT using Thiel-embalmed cadavers. Chest CTs of Thiel-embalmed cadavers were acquired at different exposures. Clinical IQ was determined by performing a visual grading analysis. Physical-technical IQ (noise, contrast-to-noise and contrast-detail) was assessed in a Catphan phantom. Soft and sharp reconstructions were made with filtered back projection and two strengths of iterative reconstruction. In addition to the classical IQ metrics, an automatic algorithm was used to calculate image quality scores (IQs). To be able to compare datasets reconstructed with different kernels, the IQs values were normalized. Good correlations were found between IQs and the measured physical-technical image quality: noise (ρ=-1.00), contrast-to-noise (ρ=1.00) and contrast-detail (ρ=0.96). The correlation coefficients between IQs and the observed clinical image quality of soft and sharp reconstructions were 0.88 and 0.93, respectively. The automatic scoring algorithm is a promising tool for the evaluation of thoracic CT scans in daily clinical practice. It allows monitoring of the image quality of a chest protocol over time, without human intervention. Different reconstruction kernels can be compared after normalization of the IQs. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
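
    The reported ρ values are rank correlations; here is a minimal sketch of how such correlations between an automatic score and measured physical-technical metrics can be computed. All numbers below are invented for illustration.

```python
# Spearman rank correlation between a normalized automatic image-quality score (IQs)
# and physical-technical measurements; all values are made up for illustration.
from scipy.stats import spearmanr

iqs             = [0.42, 0.55, 0.61, 0.70, 0.78, 0.85]   # automatic score per exposure level
noise_hu        = [38.0, 30.5, 26.0, 21.5, 18.0, 15.2]   # measured noise (HU)
contrast_detail = [1.10, 1.35, 1.52, 1.70, 1.88, 2.01]   # contrast-detail score

rho_noise, _ = spearmanr(iqs, noise_hu)
rho_cd, _ = spearmanr(iqs, contrast_detail)
print(f"rho(IQs, noise) = {rho_noise:.2f}, rho(IQs, contrast-detail) = {rho_cd:.2f}")
```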

  5. Automatic assessment of voice quality according to the GRBAS scale.

    PubMed

    Sáenz-Lechón, Nicolás; Godino-Llorente, Juan I; Osma-Ruiz, Víctor; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando

    2006-01-01

    Nowadays, the most widespread techniques for measuring voice quality are based on perceptual evaluation by well-trained professionals. The GRBAS scale is a widely used method for such perceptual evaluation: it is in routine use in Japan, and interest is increasing in both Europe and the United States. However, this technique needs well-trained experts, depends heavily on the evaluator's expertise and psycho-physical state, and shows considerable variability in the assessments from one evaluator to another. Therefore, an objective method to provide such a measurement of voice quality would be very valuable. In this paper, the automatic assessment of voice quality is addressed by means of short-term Mel cepstral parameters (MFCC) and learning vector quantization (LVQ) in a pattern recognition stage. Results show that this approach provides acceptable results for this purpose, with accuracy of around 65% at best.
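
    To make the MFCC-plus-LVQ pipeline above concrete, here is a small sketch that extracts averaged MFCC vectors with librosa and trains a plain LVQ1 classifier implemented from scratch. The synthetic noise "recordings", the two-class setup, and all parameter values are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
import librosa

def mfcc_features(y, sr, n_mfcc=13):
    """Short-term MFCCs averaged over the utterance (one vector per recording)."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def lvq1_train(X, y, n_classes, lr=0.05, epochs=30, seed=0):
    """Plain LVQ1: one prototype per class, moved toward/away from training samples."""
    rng = np.random.default_rng(seed)
    protos = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))
            sign = 1.0 if j == y[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])
    return protos

def lvq1_predict(protos, X):
    return np.array([np.argmin(np.linalg.norm(protos - x, axis=1)) for x in X])

# toy data: white noise at two amplitudes standing in for two voice-quality classes;
# real use would load patient recordings instead
sr = 16000
rng = np.random.default_rng(1)
X = np.array([mfcc_features(a * rng.standard_normal(sr), sr)
              for a in ([0.1] * 10 + [0.8] * 10)])
y = np.array([0] * 10 + [1] * 10)
protos = lvq1_train(X, y, n_classes=2)
print("training accuracy:", (lvq1_predict(protos, X) == y).mean())
```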

  6. Objective measures for quality assessment of automatic skin enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Ciuc, Mihai; Capata, Adrian; Florea, Corneliu

    2010-01-01

    Automatic portrait enhancement by attenuating skin flaws (pimples, blemishes, wrinkles, etc.) has received considerable attention from digital camera manufacturers thanks to its impact on the public. Subsequently, a number of algorithms have been developed to meet this need. One central aspect of developing such an algorithm is quality assessment: having a few numbers that precisely indicate the amount of beautification brought by an algorithm (as perceived by human observers) is of great help, as it helps circumvent time-costly human evaluation. In this paper, we propose a method to numerically evaluate the quality of a skin beautification algorithm. The most important aspects we take into account and quantify are the quality of the skin detector, the amount of smoothing performed by the method, the preservation of intrinsic skin texture, and the preservation of facial features. We combine these measures into two numbers that assess the quality of skin detection and beautification. The derived measures are highly correlated with human perception; therefore they constitute a helpful tool for tuning and comparing algorithms.

  7. Automatic quality assessment of apical four-chamber echocardiograms using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Abdi, Amir H.; Luong, Christina; Tsang, Teresa; Allan, Gregory; Nouranian, Saman; Jue, John; Hawley, Dale; Fleming, Sarah; Gin, Ken; Swift, Jody; Rohling, Robert; Abolmaesumi, Purang

    2017-02-01

    Echocardiography (echo) is the most common test for diagnosis and management of patients with cardiac conditions. While most medical imaging modalities benefit from a relatively automated procedure, this is not the case for echo, and the quality of the final echo view depends on the competency and experience of the sonographer. It is not uncommon that the sonographer does not have adequate experience to adjust the transducer and acquire a high quality echo, which may further affect the clinical diagnosis. In this work, we aim to aid the operator during image acquisition by automatically assessing the quality of the echo and generating the Automatic Echo Score (AES). This quality assessment method is based on a deep convolutional neural network, trained in an end-to-end fashion on a large dataset of apical four-chamber (A4C) echo images. For this project, an expert cardiologist went through 2,904 A4C images obtained from independent studies and assessed their condition based on a 6-scale grading system. The scores assigned by the expert ranged from 0 to 5. The distribution of scores among the 6 levels was almost uniform. The network was then trained on 80% of the data (2,345 samples). The average absolute error of the trained model in calculating the AES was 0.8 +/- 0.72. The computation time of the GPU implementation of the neural network was estimated at 5 ms per frame, which is sufficient for real-time deployment.
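
    A toy end-to-end sketch of the kind of setup described above: a small convolutional network mapping a single grayscale echo frame to one of six quality grades, trained with a cross-entropy loss. The architecture, input size, and optimizer settings are invented for illustration and are not the network from the paper.

```python
import torch
import torch.nn as nn

class EchoQualityNet(nn.Module):
    """Toy CNN that maps a single-channel echo frame to one of six quality grades."""
    def __init__(self, n_grades: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_grades)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = EchoQualityNet()
criterion = nn.CrossEntropyLoss()                  # trained end-to-end on graded A4C frames
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# one dummy training step on random tensors standing in for a mini-batch
frames = torch.randn(8, 1, 128, 128)               # 8 grayscale frames
grades = torch.randint(0, 6, (8,))                 # expert scores 0..5
loss = criterion(model(frames), grades)
loss.backward()
optimizer.step()
print("batch loss:", float(loss))
```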

  8. Automatic quality assessment and peak identification of auditory brainstem responses with fitted parametric peaks.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Alvarez, Isaac; Segura, Jose Carlos; Thornton, A Roger D; Sainz, Manuel; Vargas, Jose Luis

    2014-05-01

    The recording of the auditory brainstem response (ABR) is used worldwide for hearing screening purposes. In this process, a precise estimation of the most relevant components is essential for an accurate interpretation of these signals. This evaluation is usually carried out subjectively by an audiologist. However, the use of automatic methods for this purpose is being encouraged nowadays in order to reduce human evaluation biases and ensure uniformity among test conditions, patients, and screening personnel. This article describes a new method that performs automatic quality assessment and identification of the peaks, the fitted parametric peaks (FPP). This method is based on the use of synthesized peaks that are adjusted to the ABR response. The FPP is validated, on the one hand, by an analysis of amplitudes and latencies measured manually by an audiologist and automatically by the FPP method in ABR signals recorded at different stimulation rates; and, on the other hand, by contrasting the performance of the FPP method with automatic evaluation techniques based on the correlation coefficient, FSP, and cross-correlation with a predefined template waveform, comparing the automatic quality evaluations of these methods with subjective evaluations provided by five experienced evaluators on a set of ABR signals of different quality. The results of this study suggest (a) that the FPP method can be used to provide an accurate parameterization of the peaks in terms of amplitude, latency, and width, and (b) that the FPP remains the method that best approaches the averaged subjective quality evaluation, as well as providing the best results in terms of sensitivity and specificity in ABR signal validation. The significance of these findings and the clinical value of the FPP method are highlighted in this paper. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
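
    The core idea, adjusting a synthesized parametric peak to the recorded response and reading off amplitude, latency, and width, can be illustrated with a least-squares fit. The Gaussian peak shape, the synthetic ABR-like trace, and the residual-based quality figure are assumptions for the sketch, not the FPP templates themselves.

```python
import numpy as np
from scipy.optimize import curve_fit

def parametric_peak(t, amplitude, latency, width):
    """Simple Gaussian-shaped peak used as a stand-in for the fitted parametric peaks."""
    return amplitude * np.exp(-0.5 * ((t - latency) / width) ** 2)

# synthetic ABR-like trace: a 0.4 uV peak at 5.6 ms plus noise
t = np.linspace(0, 10, 500)                      # ms
rng = np.random.default_rng(0)
signal = parametric_peak(t, 0.4, 5.6, 0.35) + 0.05 * rng.standard_normal(t.size)

popt, _ = curve_fit(parametric_peak, t, signal, p0=(0.3, 5.0, 0.5))
amplitude, latency, width = popt
print(f"amplitude = {amplitude:.2f} uV, latency = {latency:.2f} ms, width = {width:.2f} ms")

# a quality score could then be derived from the goodness of fit, e.g. residual RMS
residual_rms = np.sqrt(np.mean((signal - parametric_peak(t, *popt)) ** 2))
print(f"residual RMS = {residual_rms:.3f} uV")
```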

  9. Assessment of automatic exposure control performance in digital mammography using a no-reference anisotropic quality index

    NASA Astrophysics Data System (ADS)

    Barufaldi, Bruno; Borges, Lucas R.; Bakic, Predrag R.; Vieira, Marcelo A. C.; Schiabel, Homero; Maidment, Andrew D. A.

    2017-03-01

    Automatic exposure control (AEC) is used in mammography to obtain acceptable radiation dose and adequate image quality regardless of breast thickness and composition. Although there are physics methods for assessing the AEC, it is not clear whether mammography systems operate with optimal dose and image quality in clinical practice. In this work, we propose the use of a normalized anisotropic quality index (NAQI), validated in previous studies, to evaluate the quality of mammograms acquired using AEC. The authors used a clinical dataset that consists of 561 patients and 1,046 mammograms (craniocaudal breast views). The results show that image quality is often maintained, even at various radiation levels (mean NAQI = 0.14 +/- 0.02). However, a more careful analysis of NAQI reveals that the average image quality decreases as breast thickness increases. The NAQI is reduced by 32% on average, when the breast thickness increases from 31 to 71 mm. NAQI also decreases with lower breast density. The variation in breast parenchyma alone cannot fully account for the decrease of NAQI with thickness. Examination of images shows that images of large, fatty breasts are often inadequately processed. This work shows that NAQI can be applied in clinical mammograms to assess mammographic image quality, and highlights the limitations of the automatic exposure control for some images.

  10. A conceptual study of automatic and semi-automatic quality assurance techniques for ground image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.

  11. Image quality assessment of automatic three-segment MR attenuation correction vs. CT attenuation correction.

    PubMed

    Partovi, Sasan; Kohan, Andres; Gaeta, Chiara; Rubbert, Christian; Vercher-Conejero, Jose L; Jones, Robert S; O'Donnell, James K; Wojtylak, Patrick; Faulhaber, Peter

    2013-01-01

    The purpose of this study is to systematically evaluate the usefulness of Positron emission tomography/Magnetic resonance imaging (PET/MRI) images in a clinical setting by assessing the image quality of Positron emission tomography (PET) images using a three-segment MR attenuation correction (MRAC) versus the standard CT attenuation correction (CTAC). We prospectively studied 48 patients who had their clinically scheduled FDG-PET/CT followed by an FDG-PET/MRI. Three nuclear radiologists evaluated the image quality of CTAC vs. MRAC using a Likert scale (five-point scale). A two-sided, paired t-test was performed for comparison purposes. The image quality was further assessed by categorizing it as acceptable (equal to 4 and 5 on the five-point Likert scale) or unacceptable (equal to 1, 2, and 3 on the five-point Likert scale) quality using the McNemar test. When assessing the image quality using the Likert scale, one reader observed a significant difference between CTAC and MRAC (p=0.0015), whereas the other readers did not observe a difference (p=0.8924 and p=0.1880, respectively). When performing the grouping analysis, no significant difference was found between CTAC vs. MRAC for any of the readers (p=0.6137 for reader 1, p=1 for reader 2, and p=0.8137 for reader 3). All three readers more often reported artifacts on the MRAC images than on the CTAC images. There was no clinically significant difference in quality between PET images generated on a PET/MRI system and those from a Positron emission tomography/Computed tomography (PET/CT) system. PET images using the automatic three-segmented MR attenuation method provided diagnostic image quality. However, future research regarding the image quality obtained using different MR attenuation based methods is warranted before PET/MRI can be used clinically.

  12. Automatic quality control in clinical (1)H MRSI of brain cancer.

    PubMed

    Pedrosa de Barros, Nuno; McKinley, Richard; Knecht, Urspeter; Wiest, Roland; Slotboom, Johannes

    2016-05-01

    MRSI grids frequently show spectra with poor quality, mainly because of the high sensitivity of MRS to field inhomogeneities. These poor quality spectra are prone to quantification and/or interpretation errors that can have a significant impact on the clinical use of spectroscopic data. Therefore, quality control of the spectra should always precede their clinical use. When performed manually, quality assessment of MRSI spectra is not only a tedious and time-consuming task, but is also affected by human subjectivity. Consequently, automatic, fast and reliable methods for spectral quality assessment are of utmost interest. In this article, we present a new random forest-based method for automatic quality assessment of (1)H MRSI brain spectra, which uses a new set of MRS signal features. The random forest classifier was trained on spectra from 40 MRSI grids that were classified as acceptable or non-acceptable by two expert spectroscopists. To account for the effects of intra-rater reliability, each spectrum was rated for quality three times by each rater. The automatic method classified these spectra with an area under the curve (AUC) of 0.976. Furthermore, in the subset of spectra containing only the cases that were classified every time in the same way by the spectroscopists, an AUC of 0.998 was obtained. Feature importance for the classification was also evaluated. Frequency domain skewness and kurtosis, as well as time domain signal-to-noise ratios (SNRs) in the ranges 50-75 ms and 75-100 ms, were the most important features. Given that the method is able to assess a whole MRSI grid faster than a spectroscopist (approximately 3 s versus approximately 3 min), and without loss of accuracy (agreement between classifier trained with just one session and any of the other labelling sessions, 89.88%; agreement between any two labelling sessions, 89.03%), the authors suggest its implementation in the clinical routine. The method presented in this article was implemented
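
    As a rough illustration of the feature set described above (frequency-domain skewness and kurtosis plus time-domain SNR in late windows) feeding a random forest, here is a self-contained sketch on synthetic free-induction decays. The noise-estimation window, dwell time, and toy "acceptable vs. non-acceptable" signals are assumptions made only to keep the example runnable.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier

def spectral_quality_features(signal, dwell_time_s=1e-3):
    """Features in the spirit of those above: frequency-domain skewness/kurtosis
    of the magnitude spectrum plus time-domain SNR in two late windows."""
    spectrum = np.abs(np.fft.fft(signal))
    t = np.arange(signal.size) * dwell_time_s
    def snr(lo_ms, hi_ms):
        seg = signal[(t >= lo_ms / 1e3) & (t < hi_ms / 1e3)]
        noise = signal[t >= 0.8 * t[-1]]          # signal tail used as noise estimate (assumption)
        return np.abs(seg).mean() / (np.abs(noise).std() + 1e-12)
    return [skew(spectrum), kurtosis(spectrum), snr(50, 75), snr(75, 100)]

# toy dataset: decaying oscillations (acceptable) vs. near-pure noise (non-acceptable)
rng = np.random.default_rng(0)
n = 512
def make_fid(good):
    t = np.arange(n) * 1e-3
    amp = 1.0 if good else 0.05
    return amp * np.exp(-t / 0.08) * np.cos(2 * np.pi * 60 * t) + 0.05 * rng.standard_normal(n)

X = np.array([spectral_quality_features(make_fid(g)) for g in [True] * 40 + [False] * 40])
y = np.array([1] * 40 + [0] * 40)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("feature importances:", clf.feature_importances_.round(2))
```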

  13. Polarization transformation as an algorithm for automatic generalization and quality assessment

    NASA Astrophysics Data System (ADS)

    Qian, Haizhong; Meng, Liqiu

    2007-06-01

    For decades it has been a dream of cartographers to computationally mimic the generalization processes in human brains for the derivation of various small-scale target maps or databases from a large-scale source map or database. This paper addresses in a systematic way the polarization transformation (PT) - a new algorithm that serves both the purpose of automatic generalization of discrete features and that of quality assurance. By means of PT, two-dimensional point clusters or line networks in the Cartesian system can be transformed into a polar coordinate system, which then can be unfolded as a single spectrum line r = f(α), where r and α stand for the polar radius and the polar angle respectively. After the transformation, the original features will correspond to nodes on the spectrum line delimited between 0° and 360° along the horizontal axis, and between the minimum and maximum polar radius along the vertical axis. Since PT is a lossless transformation, it allows a straightforward analysis and comparison of the original and generalized distributions, so that automatic generalization and quality assurance can be done in this way. Examples illustrate that the PT algorithm meets the requirements of generalizing discrete spatial features and is more scientific.
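
    A minimal sketch of the transformation itself: map 2D Cartesian points to (polar angle, polar radius) pairs about a chosen origin and sort them by angle, yielding the spectrum line r = f(α). The origin choice and the toy point cluster are illustrative assumptions.

```python
import numpy as np

def polarization_transform(points, origin=(0.0, 0.0)):
    """Map 2D Cartesian points to (angle, radius) pairs and sort them by angle,
    producing the spectrum line r = f(alpha) described above."""
    p = np.asarray(points, dtype=float) - np.asarray(origin, dtype=float)
    alpha = np.degrees(np.arctan2(p[:, 1], p[:, 0])) % 360.0   # polar angle in [0, 360)
    r = np.hypot(p[:, 0], p[:, 1])                             # polar radius
    order = np.argsort(alpha)
    return alpha[order], r[order]

# toy point cluster; because the mapping is lossless, the original coordinates can be
# recovered from (alpha, r), so original and generalized spectra can be compared directly
points = [(3, 1), (-2, 4), (-1, -3), (4, -2), (0.5, 2.5)]
alpha, r = polarization_transform(points)
for a, radius in zip(alpha, r):
    print(f"alpha = {a:6.1f} deg, r = {radius:.2f}")
```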

  14. PACS quality control and automatic problem notifier

    NASA Astrophysics Data System (ADS)

    Honeyman-Buck, Janice C.; Jones, Douglas; Frost, Meryll M.; Staab, Edward V.

    1997-05-01

    One side effect of installing a clinical PACS is that users become dependent upon the technology, and in some cases it can be very difficult to revert back to a film-based system if components fail. System failures range from slow deterioration of function, as seen in the loss of monitor luminance, to sudden catastrophic loss of the entire PACS network. This paper describes the quality control procedures in place at the University of Florida and the automatic notification system that alerts PACS personnel when a failure has happened or is anticipated. The goal is to recover from a failure with a minimum of downtime and no data loss. Routine quality control is practiced on all aspects of PACS, from acquisition, through network routing, through display, and including archiving. Whenever possible, the system components perform self and between-platform checks for active processes, file system status, errors in log files, and system uptime. When an error is detected or an exception occurs, an automatic page is sent to a pager with a diagnostic code. Documentation on each code, troubleshooting procedures, and repairs is kept on an intranet server accessible only to people involved in maintaining the PACS. In addition to the automatic paging system for error conditions, acquisition is assured by an automatic fax report sent on a daily basis to all technologists acquiring PACS images, to be used as a cross check that all studies are archived prior to being removed from the acquisition systems. Daily quality control is performed to assure that studies can be moved from each acquisition system and that contrast adjustment works correctly. The results of selected quality control reports will be presented. The intranet documentation server will be described along with the automatic pager system. Monitor quality control reports will be described and the cost of quality control will be quantified. As PACS is accepted as a clinical tool, the same standards of quality control must be established

  15. Reference-free automatic quality assessment of tracheoesophageal speech.

    PubMed

    Huang, Andy; Falk, Tiago H; Chan, Wai-Yip; Parsa, Vijay; Doyle, Philip

    2009-01-01

    Evaluation of the quality of tracheoesophageal (TE) speech using machines instead of human experts can enhance the voice rehabilitation process for patients who have undergone total laryngectomy and voice restoration. Towards the goal of devising a reference-free TE speech quality estimation algorithm, we investigate the efficacy of speech signal features that are used in standard telephone-speech quality assessment algorithms, in conjunction with a recently introduced speech modulation spectrum measure. Tests performed on two TE speech databases demonstrate that the modulation spectral measure and a subset of features in the standard ITU-T P.563 algorithm estimate TE speech quality with better correlation (up to 0.9) than previously proposed features.

  16. Automatic retinal interest evaluation system (ARIES).

    PubMed

    Yin, Fengshou; Wong, Damon Wing Kee; Yow, Ai Ping; Lee, Beng Hai; Quan, Ying; Zhang, Zhuo; Gopalakrishnan, Kavitha; Li, Ruoying; Liu, Jiang

    2014-01-01

    In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases such as glaucoma, age-related macular degeneration and diabetic retinopathy. However, in practice, retinal image quality is a big concern as automatic systems without consideration of degraded image quality will likely generate unreliable results. In this paper, an automatic retinal image quality assessment system (ARIES) is introduced to assess both image quality of the whole image and focal regions of interest. ARIES achieves 99.54% accuracy in distinguishing fundus images from other types of images through a retinal image identification step in a dataset of 35342 images. The system employs high level image quality measures (HIQM) to perform image quality assessment, and achieves areas under curve (AUCs) of 0.958 and 0.987 for whole image and optic disk region respectively in a testing dataset of 370 images. ARIES acts as a form of automatic quality control which ensures good quality images are used for processing, and can also be used to alert operators of poor quality images at the time of acquisition.

  17. Assessment of Automatically Exported Clinical Data from a Hospital Information System for Clinical Research in Multiple Myeloma.

    PubMed

    Torres, Viviana; Cerda, Mauricio; Knaup, Petra; Löpprich, Martin

    2016-01-01

    An important part of the electronic information available in a Hospital Information System (HIS) has the potential to be automatically exported to Electronic Data Capture (EDC) platforms for improving clinical research. This automation has the advantage of reducing manual data transcription, a time-consuming and error-prone process. However, quantitative evaluations of the process of exporting data from a HIS to an EDC system have not been reported extensively, in particular in comparison with manual transcription. In this work, an assessment of the quality of an automatic export process, focused on laboratory data from a HIS, is presented. Quality of the laboratory data was assessed in two types of processes: (1) a manual process of data transcription, and (2) an automatic process of data transference. The automatic transference was implemented as an Extract, Transform and Load (ETL) process. Then, a comparison was carried out between manual and automatic data collection methods. The criteria to measure data quality were correctness and completeness. The manual process had a general error rate of 2.6% to 7.1%, with the lowest error rate obtained when data fields with an unclear definition were removed from the analysis (p < 10E-3). In the case of the automatic process, the general error rate was 1.9% to 12.1%, where the lowest error rate is obtained when excluding information missing in the HIS but transcribed to the EDC from other physical sources. The automatic ETL process can be used to collect laboratory data for clinical research if data in the HIS, as well as physical documentation not included in the HIS, are identified beforehand and follow a standardized data collection protocol.
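
    The correctness/completeness comparison above boils down to counting field-level disagreements against a manually transcribed gold standard. A small sketch with invented records and field names (not the study's actual variables):

```python
# Field-level error rates of an automatically transferred dataset against manually
# transcribed gold-standard values; records and fields are invented for illustration.
gold = [
    {"id": 1, "hemoglobin": 13.2, "creatinine": 1.1, "calcium": 9.4},
    {"id": 2, "hemoglobin": 10.8, "creatinine": 2.0, "calcium": None},   # not documented
    {"id": 3, "hemoglobin": 11.5, "creatinine": 0.9, "calcium": 8.9},
]
exported = [
    {"id": 1, "hemoglobin": 13.2, "creatinine": 1.1, "calcium": 9.4},
    {"id": 2, "hemoglobin": 10.8, "creatinine": 2.0, "calcium": None},
    {"id": 3, "hemoglobin": 11.5, "creatinine": None, "calcium": 8.8},   # missing + wrong value
]

fields = ["hemoglobin", "creatinine", "calcium"]
incorrect = missing = total = 0
for g, e in zip(gold, exported):
    for f in fields:
        total += 1
        if g[f] is not None and e[f] is None:
            missing += 1                     # completeness error
        elif e[f] != g[f]:
            incorrect += 1                   # correctness error
print(f"correctness error rate: {incorrect / total:.1%}, "
      f"completeness error rate: {missing / total:.1%}")
```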

  18. Incorporating Learning Characteristics into Automatic Essay Scoring Models: What Individual Differences and Linguistic Features Tell Us about Writing Quality

    ERIC Educational Resources Information Center

    Crossley, Scott A.; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S.

    2016-01-01

    This study investigates a novel approach to automatically assessing essay quality that combines natural language processing approaches that assess text features with approaches that assess individual differences in writers such as demographic information, standardized test scores, and survey results. The results demonstrate that combining text…

  19. Automatically quantifying the scientific quality and sensationalism of news records mentioning pandemics: validating a maximum entropy machine-learning model.

    PubMed

    Hoffman, Steven J; Justicz, Victoria

    2016-07-01

    To develop and validate a method for automatically quantifying the scientific quality and sensationalism of individual news records. After retrieving 163,433 news records mentioning the Severe Acute Respiratory Syndrome (SARS) and H1N1 pandemics, a maximum entropy model for inductive machine learning was used to identify relationships among 500 randomly sampled news records that correlated with systematic human assessments of their scientific quality and sensationalism. These relationships were then computationally applied to automatically classify 10,000 additional randomly sampled news records. The model was validated by randomly sampling 200 records and comparing human assessments of them to the computer assessments. The computer model correctly assessed the relevance of 86% of news records, the quality of 65% of records, and the sensationalism of 73% of records, as compared to human assessments. Overall, the scientific quality of SARS and H1N1 news media coverage had potentially important shortcomings, but coverage was not overly sensationalized. Coverage slightly improved between the two pandemics. Automated methods can evaluate news records faster, cheaper, and possibly better than humans. The specific procedure implemented in this study can at the very least identify subsets of news records that are far more likely to have particular scientific and discursive qualities. Copyright © 2016 Elsevier Inc. All rights reserved.
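
    A maximum entropy text classifier is equivalent to multinomial logistic regression over word features; the sketch below shows that formulation with scikit-learn. The toy headlines and labels are invented, and the real system's features and training corpus were of course different.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# tiny invented training set: headlines labelled as sensationalized (1) or not (0)
headlines = [
    "Killer virus will wipe out millions, experts fear",
    "Panic spreads as deadly outbreak rages out of control",
    "WHO reports 120 laboratory-confirmed cases of H1N1",
    "Study estimates SARS reproduction number at around 3",
    "Vaccine trial enrols 500 volunteers in controlled study",
    "Terrifying plague could strike any city at any moment",
]
labels = [1, 1, 0, 0, 0, 1]

# multinomial logistic regression over bag-of-words features is the standard
# maximum-entropy formulation for text classification
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(headlines, labels)
print(model.predict(["Deadly new strain threatens to kill thousands",
                     "Health agency confirms 40 new cases this week"]))
```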

  20. Automatic orbital GTAW welding: Highest quality welds for tomorrow's high-performance systems

    NASA Technical Reports Server (NTRS)

    Henon, B. K.

    1985-01-01

    Automatic orbital gas tungsten arc welding (GTAW), or TIG welding, is certain to play an increasingly prominent role in tomorrow's technology. The welds are of the highest quality and the repeatability of automatic welding is vastly superior to that of manual welding. Since less heat is applied to the weld during automatic welding than manual welding, there is less change in the metallurgical properties of the parent material. The possibility of accurate control and the cleanliness of the automatic GTAW process make it highly suitable for the welding of the more exotic and expensive materials which are now widely used in the aerospace and hydrospace industries. Titanium, stainless steel, Inconel, and Incoloy, as well as aluminum, can all be welded to the highest quality specifications automatically. Automatic orbital GTAW equipment is available for the fusion butt welding of tube to tube, as well as tube to autobuttweld fittings. The same equipment can also be used for the fusion butt welding of up to 6 inch pipe with a wall thickness of up to 0.154 inches.

  1. Semi-automatic semantic annotation of PubMed Queries: a study on quality, efficiency, satisfaction

    PubMed Central

    Névéol, Aurélie; Islamaj-Doğan, Rezarta; Lu, Zhiyong

    2010-01-01

    Information processing algorithms require significant amounts of annotated data for training and testing. The availability of such data is often hindered by the complexity and high cost of production. In this paper, we investigate the benefits of a state-of-the-art tool to help with the semantic annotation of a large set of biomedical information queries. Seven annotators were recruited to annotate a set of 10,000 PubMed® queries with 16 biomedical and bibliographic categories. About half of the queries were annotated from scratch, while the other half were automatically pre-annotated and manually corrected. The impact of the automatic pre-annotations was assessed on several aspects of the task: time, number of actions, annotator satisfaction, inter-annotator agreement, and the quality and number of the resulting annotations. The analysis of annotation results showed that the number of required hand annotations is 28.9% less when using pre-annotated results from automatic tools. As a result, the overall annotation time was substantially lower when pre-annotations were used, while inter-annotator agreement was significantly higher. In addition, there was no statistically significant difference in the semantic distribution or number of annotations produced when pre-annotations were used. The annotated query corpus is freely available to the research community. This study shows that automatic pre-annotations are found helpful by most annotators. Our experience suggests using an automatic tool to assist large-scale manual annotation projects. This helps speed up the annotation time and improve annotation consistency while maintaining high quality of the final annotations. PMID:21094696

  2. Assessing the impact of graphical quality on automatic text recognition in digital maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang

    2016-08-01

    Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With the advancement in computational power and algorithm design, map processing systems have been considerably improved over the last decade. However, the fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results would not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system accordingly. These issues could slow down the further advancement of map processing techniques as such unsuccessful attempts create a discouraged user community, and less sophisticated tools would be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (that can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.

  3. Automatic, nondestructive test monitors in-process weld quality

    NASA Technical Reports Server (NTRS)

    Deal, F. C.

    1968-01-01

    Instrument automatically and nondestructively monitors the quality of welds produced in microresistance welding. It measures the infrared energy generated in the weld as the weld is made and compares this energy with maximum and minimum limits of infrared energy values previously correlated with acceptable weld-strength tolerances.

  4. Validation of the ICU-DaMa tool for automatically extracting variables for minimum dataset and quality indicators: The importance of data quality assessment.

    PubMed

    Sirgo, Gonzalo; Esteban, Federico; Gómez, Josep; Moreno, Gerard; Rodríguez, Alejandro; Blanch, Lluis; Guardiola, Juan José; Gracia, Rafael; De Haro, Lluis; Bodí, María

    2018-04-01

    Big data analytics promise insights into healthcare processes and management, improving outcomes while reducing costs. However, data quality is a major challenge for reliable results. Business process discovery techniques and an associated data model were used to develop a data management tool, ICU-DaMa, for extracting variables essential for overseeing the quality of care in the intensive care unit (ICU). To determine the feasibility of using ICU-DaMa to automatically extract variables for the minimum dataset and ICU quality indicators from the clinical information system (CIS). The Wilcoxon signed-rank test and Fisher's exact test were used to compare the values extracted from the CIS with ICU-DaMa for 25 variables from all patients attended in a polyvalent ICU during a two-month period against the gold standard of values manually extracted by two trained physicians. Discrepancies with the gold standard were classified into plausibility, conformance, and completeness errors. Data from 149 patients were included. Although there were no significant differences between the automatic method and the manual method, we detected differences in values for five variables, including one plausibility error, two conformance errors, and two completeness errors. Plausibility: 1) Sex, ICU-DaMa incorrectly classified one male patient as female (error generated by the Hospital's Admissions Department). Conformance: 2) Reason for isolation, ICU-DaMa failed to detect a human error in which a professional misclassified a patient's isolation. 3) Brain death, ICU-DaMa failed to detect another human error in which a professional likely entered two mutually exclusive values related to the death of the patient (brain death and controlled donation after circulatory death). Completeness: 4) Destination at ICU discharge, ICU-DaMa incorrectly classified two patients due to a professional failing to fill out the patient discharge form when the patients died. 5) Length of continuous renal replacement

  5. Infrared machine vision system for the automatic detection of olive fruit quality.

    PubMed

    Guzmán, Elena; Baeten, Vincent; Pierna, Juan Antonio Fernández; García-Mesa, José A

    2013-11-15

    External quality is an important factor in the extraction of olive oil and the marketing of olive fruits. The appearance and presence of external damage are factors that influence the quality of the oil extracted and the perception of consumers, determining the level of acceptance prior to purchase in the case of table olives. The aim of this paper is to report on artificial vision techniques developed for the online estimation of olive quality and to assess the effectiveness of these techniques in evaluating quality based on detecting external defects. This method of classifying olives according to the presence of defects is based on an infrared (IR) vision system. Images of defects were acquired using a digital monochrome camera with band-pass filters on near-infrared (NIR). The original images were processed using segmentation algorithms, edge detection and pixel value intensity to classify the whole fruit. The detection of the defect involved a pixel classification procedure based on nonparametric models of the healthy and defective areas of olives. Classification tests were performed on olives to assess the effectiveness of the proposed method. This research showed that the IR vision system is a useful technology for the automatic assessment of olives that has the potential for use in offline inspection and for online sorting for defects and the presence of surface damage, easily distinguishing those that do not meet minimum quality requirements. Crown Copyright © 2013 Published by Elsevier B.V. All rights reserved.

  6. Machine learning approach for automatic quality criteria detection of health web pages.

    PubMed

    Gaudinat, Arnaud; Grabar, Natalia; Boyer, Célia

    2007-01-01

    The number of medical websites is constantly growing [1]. Owing to the open nature of the Web, the reliability of information available on the Web is uneven. Internet users are overwhelmed by the quantity of information available on the Web. The situation is even more critical in the medical area, as the content proposed by health websites can have a direct impact on the users' well-being. One way to control the reliability of health websites is to assess their quality and to make this assessment available to users. The HON Foundation has defined a set of eight ethical principles. HON's experts work to manually determine whether a given website complies with the required principles. As the number of medical websites is constantly growing, manual expertise becomes insufficient and automatic systems should be used in order to help medical experts. In this paper we present the design and the evaluation of an automatic system conceived for the categorisation of medical and health documents according to the HONcode ethical principles. A first evaluation shows promising results. Currently the system shows 0.78 micro precision and 0.73 F-measure, with 0.06 errors.

  7. Automated image quality assessment for chest CT scans.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2018-02-01

    Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.
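
    The noise and calibration measurements described above amount to computing the mean and standard deviation of HU values inside automatically segmented homogeneous regions and comparing the means against nominal values. A self-contained sketch on a synthetic volume; the nominal HU targets and the toy masks are assumptions, not the paper's protocol.

```python
import numpy as np

# nominal reference values used here are illustrative assumptions, not published limits
NOMINAL_HU = {"external_air": -1000.0, "aorta_blood": 40.0}

def region_stats(volume_hu, masks):
    """Return (mean HU, std HU) per region given boolean masks of the same shape."""
    return {name: (float(volume_hu[m].mean()), float(volume_hu[m].std()))
            for name, m in masks.items()}

# synthetic CT volume and toy masks standing in for the automatic segmentation
rng = np.random.default_rng(0)
vol = rng.normal(-1000.0, 20.0, size=(40, 64, 64))                  # mostly air
vol[:, 20:30, 20:30] = rng.normal(42.0, 15.0, size=(40, 10, 10))    # "aorta" block
masks = {
    "external_air": np.zeros_like(vol, dtype=bool),
    "aorta_blood": np.zeros_like(vol, dtype=bool),
}
masks["external_air"][:, :10, :10] = True
masks["aorta_blood"][:, 22:28, 22:28] = True

for name, (mean_hu, std_hu) in region_stats(vol, masks).items():
    drift = mean_hu - NOMINAL_HU[name]        # calibration drift vs. nominal value
    print(f"{name}: mean={mean_hu:.1f} HU (drift {drift:+.1f}), noise={std_hu:.1f} HU")
```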

  8. Automatic red eye correction and its quality metric

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho

    2008-01-01

    Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, and thereby making photos more pleasant for an observer, is an important task. A novel, efficient technique for automatic correction of red eyes, aimed at photo printers, is proposed. This algorithm is independent of face orientation and capable of detecting paired red eyes as well as single red eyes. The approach is based on the application of 3D tables with typicalness levels for red eyes and human skin tones, and directional edge detection filters for processing of the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening and blending with the initial image. Several implementations of the approach are possible, trading off detection and correction quality, processing time, and memory volume. A numeric quality criterion for automatic red eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.

  9. Automatic initialization and quality control of large-scale cardiac MRI segmentations.

    PubMed

    Albà, Xènia; Lekadir, Karim; Pereañez, Marco; Medrano-Gracia, Pau; Young, Alistair A; Frangi, Alejandro F

    2018-01-01

    Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, becoming prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, by using image features and random forests regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long and short axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. The
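
    A compact sketch of the two ideas above in scikit-learn terms: a random forest regressor predicting a rough landmark position from per-volume features (initialization), and a random forest classifier flagging probably-failed segmentations from shape descriptors (quality control). The feature vectors, geometry, and failure rule are synthetic stand-ins, not the descriptors used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

rng = np.random.default_rng(0)

# --- initialization: regress a rough heart-centre position from global image features ---
X_init = rng.normal(size=(200, 10))                       # per-volume feature vectors (synthetic)
centres = X_init[:, :2] * 15.0 + 100.0 + rng.normal(0, 2.0, size=(200, 2))   # (x, y) in mm
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_init, centres)
print("predicted centre for one new volume:", reg.predict(X_init[:1]).round(1))

# --- quality control: flag segmentations likely to be wrong, with no visual assessment ---
X_qc = rng.normal(size=(300, 12))                         # statistical/pattern/fractal descriptors
y_qc = (X_qc[:, 0] + 0.5 * X_qc[:, 3] > 0.8).astype(int)  # 1 = suspected failure (toy rule)
qc = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_qc, y_qc)
print("fraction flagged for review:", qc.predict(X_qc).mean().round(2))
```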

  10. Objective assessment of the aesthetic outcomes of breast cancer treatment: toward automatic localization of fiducial points on digital photographs

    NASA Astrophysics Data System (ADS)

    Udpa, Nitin; Sampat, Mehul P.; Kim, Min Soon; Reece, Gregory P.; Markey, Mia K.

    2007-03-01

    The contemporary goals of breast cancer treatment are not limited to cure but include maximizing quality of life. All breast cancer treatment can adversely affect breast appearance. Developing objective, quantifiable methods to assess breast appearance is important to understand the impact of deformity on patient quality of life, guide selection of current treatments, and make rational treatment advances. A few measures of aesthetic properties such as symmetry have been developed. They are computed from the distances between manually identified fiducial points on digital photographs. However, this is time-consuming and subject to intra- and inter-observer variability. The purpose of this study is to investigate methods for automatic localization of fiducial points on anterior-posterior digital photographs taken to document the outcomes of breast reconstruction. Particular emphasis is placed on automatic localization of the nipple complex since the most widely used aesthetic measure, the Breast Retraction Assessment, quantifies the symmetry of nipple locations. The nipple complexes are automatically localized using normalized cross-correlation with a template bank of variants of Gaussian and Laplacian of Gaussian filters. A probability map of likely nipple locations determined from the image database is used to reduce the number of false positive detections from the matched filter operation. The accuracy of the nipple detection was evaluated relative to markings made by three human observers. The impact of using the fiducial point locations as identified by the automatic method, as opposed to the manual method, on the calculation of the Breast Retraction Assessment was also evaluated.
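
    The nipple-complex detector described above combines normalized cross-correlation against Gaussian and Laplacian-of-Gaussian templates with a prior probability map of likely locations. A minimal sketch on a synthetic image; the template size, sigma values, and the centred Gaussian prior are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace
from skimage.feature import match_template

# toy grayscale image with a dark, roughly Gaussian blob standing in for a nipple complex
rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.05, size=(200, 200))
yy, xx = np.mgrid[0:200, 0:200]
img -= 0.4 * np.exp(-(((yy - 120) ** 2 + (xx - 80) ** 2) / (2 * 6.0 ** 2)))

# one Laplacian-of-Gaussian template from the bank (an impulse filtered with gaussian_laplace)
impulse = np.zeros((31, 31))
impulse[15, 15] = 1.0
template = gaussian_laplace(impulse, sigma=6.0)

ncc = match_template(img, template, pad_input=True)     # normalized cross-correlation map

# prior probability map of likely nipple locations (a broad Gaussian around the image centre,
# standing in for the map learned from the image database)
prior = np.exp(-(((yy - 100) ** 2 + (xx - 100) ** 2) / (2 * 60.0 ** 2)))
score = ncc * prior

peak = np.unravel_index(np.argmax(score), score.shape)
print("detected nipple location (row, col):", peak)
```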

  11. Quality Evaluation of Coatings by Automatic Scratch Testing

    DTIC Science & Technology

    1989-11-01

    MTL TR 89-98: Quality Evaluation of Coatings by Automatic Scratch Testing. Kirit J. Bhansali and Theo Z. Kattamis, U.S. Army Materials Technology Laboratory, Watertown, Massachusetts 02172-0001. Distribution unlimited.

  12. Automatic, Multiple Assessment Options in Undergraduate Meteorology Education

    ERIC Educational Resources Information Center

    Kahl, Jonathan D. W.

    2017-01-01

    Since 2008, automatic, multiple assessment options have been utilised in selected undergraduate meteorology courses at the University of Wisconsin--Milwaukee. Motivated by a desire to reduce stress among students, the assessment methodology includes examination-heavy and homework-heavy alternatives, differing by an adjustable 15% of the overall…

  13. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.

  14. Assessing Public Metabolomics Metadata, Towards Improving Quality.

    PubMed

    Ferreira, João D; Inácio, Bruno; Salek, Reza M; Couto, Francisco M

    2017-12-13

    Public resources need to be appropriately annotated with metadata in order to make them discoverable, reproducible and traceable, further enabling them to be interoperable or integrated with other datasets. While data-sharing policies exist to promote the annotation process by data owners, these guidelines are still largely ignored. In this manuscript, we analyse automatic measures of metadata quality, and suggest their application as a means to encourage data owners to increase the metadata quality of their resources and submissions, thereby contributing to higher quality data, improved data sharing, and the overall accountability of scientific publications. We analyse these metadata quality measures in the context of a real-world repository of metabolomics data (i.e. MetaboLights), including a manual validation of the measures, and an analysis of their evolution over time. Our findings suggest that the proposed measures can be used to mimic a manual assessment of metadata quality.

  15. Statistical Methods in Assembly Quality Management of Multi-Element Products on Automatic Rotor Lines

    NASA Astrophysics Data System (ADS)

    Pries, V. V.; Proskuriakov, N. E.

    2018-04-01

    To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, due to possible failures in the operation of the devices and systems of an automatic rotor line, there is always a real probability of defective (incomplete) products entering the output process stream. Therefore, continuous sampling control of product completeness, based on the use of statistical methods, remains an important element in managing the quality of assembly of multi-element mass-produced products on automatic rotor lines. A feature of continuous sampling control of multi-element product completeness in the assembly process is its destructive sorting, which excludes the possibility of returning component parts to the process stream after sampling control and leads to a decrease in the actual productivity of the assembly equipment. Therefore, the use of statistical procedures for continuous sampling control of multi-element product completeness when assembling on automatic rotor lines requires sampling plans that ensure a minimum size of control samples. Comparison of the values of the limit of the average output defect level for the continuous sampling plan (CSP) and for the automated continuous sampling plan (ACSP) shows the possibility of providing lower limit values for the average output defect level using the ACSP-1. Also, the average sample size when using the ACSP-1 plan is less than when using the CSP-1 plan. Thus, the application of statistical methods in the assembly quality management of multi-element products on automatic rotor lines, involving the use of the proposed plans and methods for continuous selective control, will allow sampling control procedures to be automated while ensuring the required level of quality of assembled products and minimizing sample size.
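
    For context, the baseline CSP-1 plan referenced above is Dodge's classical continuous sampling plan: inspect 100% of units until i consecutive conforming units are found, then inspect only a random fraction f, reverting to 100% inspection whenever a defect is detected. A small simulation sketch; the stream, i, f, and defect rate are arbitrary illustrative values, and the authors' ACSP-1 modification is not reproduced here.

```python
import random

def csp1(stream, i=20, f=0.1, seed=0):
    """Dodge's CSP-1 continuous sampling plan: 100% inspection until i consecutive
    conforming units are found, then inspect a random fraction f of units; any
    defect found returns the plan to 100% inspection. Returns (inspected, missed)."""
    rng = random.Random(seed)
    clearing = True        # True = 100% inspection phase
    run = inspected = missed = 0
    for unit_ok in stream:
        inspect = clearing or rng.random() < f
        if inspect:
            inspected += 1
            if unit_ok:
                run += 1
                if clearing and run >= i:
                    clearing, run = False, 0
            else:
                clearing, run = True, 0      # defect found: back to 100% inspection
        elif not unit_ok:
            missed += 1                      # defective unit passed uninspected
    return inspected, missed

# simulate an assembly stream with 2% incomplete products
rng = random.Random(1)
stream = [rng.random() > 0.02 for _ in range(10_000)]
inspected, missed = csp1(stream)
print(f"inspected {inspected} of {len(stream)} units, "
      f"average outgoing defect level = {missed / len(stream):.3%}")
```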

  16. Back-and-Forth Methodology for Objective Voice Quality Assessment: From/to Expert Knowledge to/from Automatic Classification of Dysphonia

    NASA Astrophysics Data System (ADS)

    Fredouille, Corinne; Pouchoulin, Gilles; Ghio, Alain; Revis, Joana; Bonastre, Jean-François; Giovanni, Antoine

    2009-12-01

    This paper addresses voice disorder assessment. It proposes an original back-and-forth methodology involving an automatic classification system as well as the knowledge of human experts (machine learning experts, phoneticians, and pathologists). The goal of this methodology is to bring a better understanding of acoustic phenomena related to dysphonia. The automatic system was validated on a dysphonic corpus (80 female voices), rated according to the GRBAS perceptual scale by an expert jury. Firstly, focusing on the frequency domain, the classification system showed the relevance of the 0-3000 Hz frequency band for the classification task based on the GRBAS scale. Later, an automatic phonemic analysis underlined the significance of consonants, and more surprisingly of unvoiced consonants, for the same classification task. Submitted to the human experts, these observations led to a manual analysis of unvoiced plosives, which highlighted a lengthening of VOT with dysphonia severity, validated by a preliminary statistical analysis.

  17. Application-Driven No-Reference Quality Assessment for Dermoscopy Images With Multiple Distortions.

    PubMed

    Xie, Fengying; Lu, Yanan; Bovik, Alan C; Jiang, Zhiguo; Meng, Rusong

    2016-06-01

    Dermoscopy images often suffer from blur and uneven illumination distortions that occur during acquisition, which can adversely influence consequent automatic image analysis results on potential lesion objects. The purpose of this paper is to deploy an algorithm that can automatically assess the quality of dermoscopy images. Such an algorithm could be used to direct image recapture or correction. We describe an application-driven no-reference image quality assessment (IQA) model for dermoscopy images affected by possibly multiple distortions. For this purpose, we created a multiple distortion dataset of dermoscopy images impaired by varying degrees of blur and uneven illumination. The basis of this model is two single distortion IQA metrics that are sensitive to blur and uneven illumination, respectively. The outputs of these two metrics are combined to predict the quality of multiply distorted dermoscopy images using a fuzzy neural network. Unlike traditional IQA algorithms, which use human subjective score as ground truth, here ground truth is driven by the application, and generated according to the degree of influence of the distortions on lesion analysis. The experimental results reveal that the proposed model delivers accurate and stable quality prediction results for dermoscopy images impaired by multiple distortions. The proposed model is effective for quality assessment of multiple distorted dermoscopy images. An application-driven concept for IQA is introduced, and at the same time, a solution framework for the IQA of multiple distortions is proposed.

  18. Self-assessing target with automatic feedback

    DOEpatents

    Larkin, Stephen W.; Kramer, Robert L.

    2004-03-02

    A self-assessing target with four quadrants and a method of use thereof. Each quadrant contains possible causes for why shots land in that particular quadrant rather than in the center mass of the target, and each possible cause is followed by a solution intended to help the marksman correct the problem causing shots to land in that area. In addition, the self-assessing target lists possible causes of general shooting errors and solutions to those causes. The automatic feedback, with instant suggestions and corrections, enables shooters to improve their marksmanship.

  19. Automatic, semi-automatic and manual validation of urban drainage data.

    PubMed

    Branisavljević, N; Prodanović, D; Pavlović, D

    2010-01-01

    Advances in sensor technology and the possibility of automated long-distance data transmission have made continuous measurements the preferred way of monitoring urban drainage processes. Usually, the collected data have to be processed by an expert in order to detect and mark erroneous data, remove them and replace them with interpolated values. In general, this first step of detecting erroneous or anomalous data is called data quality assessment or data validation. Data validation consists of three parts: data preparation, validation score generation and score interpretation. This paper presents an overall framework for a data quality improvement system suitable for automatic, semi-automatic or manual operation. The first two steps of the validation process are explained in more detail, using several validation methods on the same set of real-case data from the Belgrade sewer system. The final part of the validation process, the score interpretation, still needs to be investigated further on the developed system.
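
    As a rough illustration of the validation score generation step, the sketch below flags suspect samples by their deviation from a rolling median; the window length, threshold and synthetic water-level signal are assumptions for illustration, not the methods actually applied to the Belgrade data.

```python
import numpy as np
import pandas as pd

def validation_scores(series, window=12, threshold=6.0):
    """Score each sample by its deviation from a rolling median.

    Returns a boolean mask (True = flagged as suspect) and the raw scores.
    The window length and threshold are illustrative, not the paper's values.
    """
    med = series.rolling(window, center=True, min_periods=1).median()
    resid = series - med
    # Robust scale estimate: rolling median absolute deviation of the residuals
    mad = resid.abs().rolling(window, center=True, min_periods=1).median() + 1e-9
    score = resid.abs() / mad
    return score > threshold, score

# Example with a synthetic water-level signal containing two injected spikes
t = pd.date_range("2010-01-01", periods=288, freq="5min")
level = pd.Series(np.sin(np.linspace(0, 6, 288)) + 0.05 * np.random.randn(288), index=t)
level.iloc[[50, 200]] += 3.0  # inject anomalies
flags, scores = validation_scores(level)
print("flagged samples:", list(level[flags].index))
```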

  20. Automatic Assessment of 3D Modeling Exams

    ERIC Educational Resources Information Center

    Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.

    2012-01-01

    Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…

  1. Automatic Coding of Short Text Responses via Clustering in Educational Assessment

    ERIC Educational Resources Information Center

    Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank

    2016-01-01

    Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…

  2. Automatic film processors' quality control test in Greek military hospitals.

    PubMed

    Lymberis, C; Efstathopoulos, E P; Manetou, A; Poudridis, G

    1993-04-01

    The two major military radiology installations (Athens, Greece), using a total of 15 automatic film processors, were assessed using the 21-step-wedge method. The results of quality control for all these processors are presented. The parameters measured under actual working conditions were base and fog, contrast and speed. Base and fog, as well as speed, displayed large variations, with average values generally higher than acceptable, whilst contrast displayed greater stability. Developer temperature was measured daily during the test and was found to be outside the film manufacturers' recommended limits in nine of the 15 processors. In only one processor did the film passing time vary on a day-to-day basis, and this was due to maloperation. The developer pH test was not part of the daily monitoring; it was performed every 5 days for each film processor, and pH was found to be in the range 9-12, with 10 of the 15 processors presenting pH values outside the limits specified by the film manufacturers.
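
    For orientation, the sketch below computes base-plus-fog, a speed index and a contrast index from 21 step-wedge densities; the choice of steps follows a common sensitometric convention and is not necessarily the one used in this study.

```python
def sensitometry_indices(densities):
    """Compute simple processor QC indices from a 21-step wedge.

    densities: list of 21 optical densities, ordered from the least to the
    most exposed step. The step choices below are a common convention and
    only illustrative.
    """
    if len(densities) != 21:
        raise ValueError("expected 21 step densities")
    base_fog = densities[0]
    # Speed index: density of the step closest to 1.0 above base-plus-fog
    speed_step = min(range(21), key=lambda i: abs(densities[i] - (base_fog + 1.0)))
    speed_index = densities[speed_step]
    # Contrast index: density difference between two mid-scale steps
    contrast_index = (densities[speed_step + 4] - densities[speed_step]
                      if speed_step + 4 < 21 else None)
    return base_fog, speed_index, contrast_index

# Hypothetical wedge readings rising linearly by 0.15 OD per step
base_fog, speed, contrast = sensitometry_indices([0.18 + 0.15 * i for i in range(21)])
print(base_fog, speed, contrast)
```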

  3. Improving labeling efficiency in automatic quality control of MRSI data.

    PubMed

    Pedrosa de Barros, Nuno; McKinley, Richard; Wiest, Roland; Slotboom, Johannes

    2017-12-01

    To improve the efficiency of the labeling task in automatic quality control of MR spectroscopy imaging data. A total of 28,432 short and long echo time (TE) spectra (1.5 tesla; point resolved spectroscopy (PRESS); repetition time (TR) = 1,500 ms) from 18 different brain tumor patients were labeled by two experts as either accept or reject, depending on their quality. For each spectrum, 47 signal features were extracted. The data were then used to run several simulations and test an active learning approach using uncertainty sampling. The performance of the classifiers was evaluated as a function of the number of patients in the training set, the number of spectra in the training set, and a parameter α used to control the level of classification uncertainty required for a new spectrum to be selected for labeling. The results showed that the proposed strategy allows reductions of up to 72.97% for short TE and 62.09% for long TE in the amount of data that needs to be labeled, without significant impact on classification accuracy. Further reductions are possible with a significant but minimal impact on performance. Active learning using uncertainty sampling is an effective way to increase the labeling efficiency for training automatic quality control classifiers. Magn Reson Med 78:2399-2405, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
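
    A minimal sketch of uncertainty sampling for such an accept/reject spectrum classifier is shown below; the random-forest classifier, the α threshold, the batch size and the synthetic 47-feature data are placeholders rather than the exact setup of the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def uncertainty_sampling(X_pool, y_pool, n_seed=100, alpha=0.15, batch=100, seed=0):
    """Label only spectra whose predicted class probability is uncertain.

    alpha controls how close to 0.5 the accept-probability must be for a
    spectrum to be sent to the expert (placeholder value). Returns the
    indices that ended up labeled and the trained classifier.
    """
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=n_seed, replace=False))
    clf = RandomForestClassifier(n_estimators=100, random_state=seed)

    while True:
        clf.fit(X_pool[labeled], y_pool[labeled])
        remaining = np.setdiff1d(np.arange(len(X_pool)), labeled)
        if remaining.size == 0:
            break
        proba = clf.predict_proba(X_pool[remaining])[:, 1]
        uncertain = remaining[np.abs(proba - 0.5) < alpha]
        if uncertain.size == 0:
            break  # nothing left that needs an expert label
        labeled.extend(uncertain[:batch])
    return labeled, clf

# Synthetic stand-in for the 47 signal features per spectrum
X = np.random.randn(2000, 47)
y = (X[:, 0] + 0.3 * np.random.randn(2000) > 0).astype(int)
labeled, clf = uncertainty_sampling(X, y)
print(f"labeled {len(labeled)} of {len(X)} spectra")
```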

  4. Retinal image quality assessment based on image clarity and content

    NASA Astrophysics Data System (ADS)

    Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim

    2016-09-01

    Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.
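
    As an example of the kind of wavelet-based sharpness feature described, the sketch below measures the fraction of wavelet energy in the detail subbands using PyWavelets; the wavelet, the decomposition level and the test images are illustrative assumptions, not the paper's feature set.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def wavelet_sharpness(image, wavelet="db2", level=2):
    """Return the fraction of wavelet energy in the detail subbands.

    Sharper images concentrate more energy in the high-frequency details.
    The wavelet and level are illustrative choices.
    """
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    approx = coeffs[0]
    detail_energy = sum(float(np.sum(band ** 2)) for lvl in coeffs[1:] for band in lvl)
    total_energy = detail_energy + float(np.sum(approx ** 2))
    return detail_energy / total_energy

# Example: a blurred version of a random image scores lower than the original
img = np.random.rand(256, 256)
print(wavelet_sharpness(img), wavelet_sharpness(gaussian_filter(img, sigma=3)))
```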

  5. Feasibility of automatic evaluation of clinical rules in general practice.

    PubMed

    Opondo, Dedan; Visscher, Stefan; Eslami, Saied; Medlock, Stephanie; Verheij, Robert; Korevaar, Joke C; Abu-Hanna, Ameen

    2017-04-01

    To assess the extent to which clinical rules (CRs) can be implemented for automatic evaluation of quality of care in general practice. We assessed 81 clinical rules (CRs), adapted from a subset of the Assessing Care of Vulnerable Elders (ACOVE) clinical rules, against the Dutch College of General Practitioners (NHG) data model. Each CR was analyzed using the Logical Elements Rule Method (LERM), a stepwise method of assessing and formalizing clinical rules for decision support. Clinical rules that satisfied the criteria outlined in the LERM method were judged to be implementable for automatic evaluation in general practice. Thirty-three out of 81 (40.7%) Dutch-translated ACOVE clinical rules can be automatically evaluated in electronic medical record systems: 7 out of 7 CRs (100%) in the domain of diabetes, 9/17 (52.9%) in medication use, 5/10 (50%) in depression care, 3/6 (50%) in nutrition care, 6/13 (46.1%) in dementia care, 1/6 (16.6%) in end-of-life care, 2/13 (15.3%) in continuity of care, and 0/9 (0%) in fall-related care. Lack of documentation of care activities between primary and secondary health facilities and ambiguous formulation of clinical rules were the main reasons for the inability to automate the clinical rules. Approximately two-fifths of the primary care Dutch ACOVE-based clinical rules can be automatically evaluated. Clear definition of clinical rules, improved GP database design and electronic linkage of primary and secondary healthcare facilities can improve the prospects of automatic assessment of quality of care. These findings are especially relevant because the Netherlands has very high automation of primary care. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Automatic evidence quality prediction to support evidence-based decision making.

    PubMed

    Sarker, Abeed; Mollá, Diego; Paris, Cécile

    2015-06-01

    Evidence-based medicine practice requires practitioners to obtain the best available medical evidence, and appraise the quality of the evidence when making clinical decisions. Primarily due to the plethora of electronically available data from the medical literature, the manual appraisal of the quality of evidence is a time-consuming process. We present a fully automatic approach for predicting the quality of medical evidence in order to aid practitioners at point-of-care. Our approach extracts relevant information from medical article abstracts and utilises data from a specialised corpus to apply supervised machine learning for the prediction of the quality grades. Following an in-depth analysis of the usefulness of features (e.g., publication types of articles), they are extracted from the text via rule-based approaches and from the meta-data associated with the articles, and then applied in the supervised classification model. We propose the use of a highly scalable and portable approach using a sequence of high precision classifiers, and introduce a simple evaluation metric called average error distance (AED) that simplifies the comparison of systems. We also perform elaborate human evaluations to compare the performance of our system against human judgments. We test and evaluate our approaches on a publicly available, specialised, annotated corpus containing 1132 evidence-based recommendations. Our rule-based approach performs exceptionally well at the automatic extraction of publication types of articles, with F-scores of up to 0.99 for high-quality publication types. For evidence quality classification, our approach obtains an accuracy of 63.84% and an AED of 0.271. The human evaluations show that the performance of our system, in terms of AED and accuracy, is comparable to the performance of humans on the same data. The experiments suggest that our structured text classification framework achieves evaluation results comparable to those of human performance
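
    The abstract does not spell out the AED formula; the sketch below assumes it is the mean absolute distance between predicted and reference evidence grades encoded as ordinal integers, which should be read as an illustrative interpretation only.

```python
def average_error_distance(predicted, reference, grade_order=("A", "B", "C")):
    """Mean absolute ordinal distance between predicted and reference grades.

    Encoding the evidence grades as ordinal integers is an assumption made
    for illustration; the original paper defines AED on its own grading scheme.
    """
    rank = {g: i for i, g in enumerate(grade_order)}
    distances = [abs(rank[p] - rank[r]) for p, r in zip(predicted, reference)]
    return sum(distances) / len(distances)

print(average_error_distance(["A", "B", "C", "B"], ["A", "C", "C", "A"]))  # 0.5
```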

  7. Towards the Real-Time Evaluation of Collaborative Activities: Integration of an Automatic Rater of Collaboration Quality in the Classroom from the Teacher's Perspective

    ERIC Educational Resources Information Center

    Chounta, Irene-Angelica; Avouris, Nikolaos

    2016-01-01

    This paper presents the integration of a real time evaluation method of collaboration quality in a monitoring application that supports teachers in class orchestration. The method is implemented as an automatic rater of collaboration quality and studied in a real time scenario of use. We argue that automatic and semi-automatic methods which…

  8. A routine quality assurance test for CT automatic exposure control systems.

    PubMed

    Iball, Gareth R; Moore, Alexis C; Crawford, Elizabeth J

    2016-07-08

    The study purpose was to develop and validate a quality assurance test for CT automatic exposure control (AEC) systems based on a set of nested polymethylmethacrylate CTDI phantoms. The test phantom was created by offsetting the 16 cm head phantom within the 32 cm body annulus, thus creating a three part phantom. This was scanned at all acceptance, routine, and some nonroutine quality assurance visits over a period of 45 months, resulting in 115 separate AEC tests on scanners from four manufacturers. For each scan the longitudinal mA modulation pattern was generated and measurements of image noise were made in two annular regions of interest. The scanner displayed CTDIvol and DLP were also recorded. The impact of a range of AEC configurations on dose and image quality were assessed at acceptance testing. For systems that were tested more than once, the percentage of CTDIvol values exceeding 5%, 10%, and 15% deviation from baseline was 23.4%, 12.6%, and 8.1% respectively. Similarly, for the image noise data, deviations greater than 2%, 5%, and 10% from baseline were 26.5%, 5.9%, and 2%, respectively. The majority of CTDIvol and noise deviations greater than 15% and 5%, respectively, could be explained by incorrect phantom setup or protocol selection. Barring these results, CTDIvol deviations of greater than 15% from baseline were found in 0.9% of tests and noise deviations greater than 5% from baseline were found in 1% of tests. The phantom was shown to be sensitive to changes in AEC setup, including the use of 3D, longitudinal or rotational tube current modulation. This test methodology allows for continuing performance assessment of CT AEC systems, and we recommend that this test should become part of routine CT quality assurance programs. Tolerances of ± 15% for CTDIvol and ± 5% for image noise relative to baseline values should be used. © 2016 The Authors
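
    The recommended tolerances translate directly into a small baseline check, sketched below with hypothetical measurement values.

```python
def aec_within_tolerance(ctdi_vol, noise, baseline_ctdi, baseline_noise,
                         ctdi_tol=0.15, noise_tol=0.05):
    """Check a routine CT AEC measurement against baseline values.

    Returns (ctdi_ok, noise_ok) using the +/-15% CTDIvol and +/-5% image
    noise tolerances recommended in the study.
    """
    ctdi_dev = (ctdi_vol - baseline_ctdi) / baseline_ctdi
    noise_dev = (noise - baseline_noise) / baseline_noise
    return abs(ctdi_dev) <= ctdi_tol, abs(noise_dev) <= noise_tol

# Hypothetical routine measurement against hypothetical baseline values
print(aec_within_tolerance(ctdi_vol=11.8, noise=10.6,
                           baseline_ctdi=10.5, baseline_noise=10.2))
```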

  9. Validation of balance-quality assessment using a modified bathroom scale.

    PubMed

    Hewson, D J; Duchêne, J; Hogrel, J-Y

    2015-02-01

    The balance quality tester (BQT), based on a standard electronic bathroom scale has been developed in order to assess balance quality. The BQT includes automatic detection of the person to be tested by means of an infrared detector and bluetooth communication capability for remote assessment when linked to a long-distance communication device such as a mobile phone. The BQT was compared to a standard force plate for validity and agreement. The two most widely reported parameters in balance literature, the area of the centre of pressure (COP) displacement and the velocity of the COP displacement, were compared for 12 subjects, each of whom was tested on ten occasions on each of the 2 days. No significant differences were observed between the BQT and the force plate for either of the two parameters. In addition a high level of agreement was observed between both devices. The BQT is a valid device for remote assessment of balance quality, and could provide a useful tool for long-term monitoring of people with balance problems, particularly during home monitoring.
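
    For reference, the two reported balance parameters can be computed as sketched below, with the COP area approximated by the common 95% confidence-ellipse convention; this is an illustrative convention and not necessarily the exact computation implemented in the BQT.

```python
import numpy as np

def cop_parameters(cop_xy, fs=100.0):
    """Mean COP velocity (mm/s) and 95% confidence-ellipse area (mm^2).

    cop_xy: (N, 2) array of centre-of-pressure coordinates in mm,
    fs: sampling frequency in Hz. The ellipse formula is a common convention.
    """
    cop_xy = np.asarray(cop_xy, dtype=float)
    path = np.sum(np.linalg.norm(np.diff(cop_xy, axis=0), axis=1))
    duration = (len(cop_xy) - 1) / fs
    mean_velocity = path / duration
    # 95% confidence ellipse: area = pi * chi2_(0.95, 2 dof) * sqrt(det(covariance))
    cov = np.cov(cop_xy.T)
    area95 = np.pi * 5.991 * np.sqrt(np.linalg.det(cov))
    return mean_velocity, area95

# Synthetic 30 s sway recording at 100 Hz
rng = np.random.default_rng(1)
sway = np.cumsum(rng.normal(scale=0.05, size=(3000, 2)), axis=0)
print(cop_parameters(sway))
```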

  10. AfterQC: automatic filtering, trimming, error removing and quality control for fastq data.

    PubMed

    Chen, Shifu; Huang, Tanxiao; Zhou, Yanqing; Han, Yue; Xu, Mingyan; Gu, Jia

    2017-03-14

    Some applications, especially clinical applications requiring high accuracy of sequencing data, have to deal with the problems caused by unavoidable sequencing errors. Several tools have been proposed to profile sequencing quality, but few of them can quantify or correct sequencing errors. This unmet requirement motivated us to develop AfterQC, a tool that profiles sequencing errors and corrects most of them, plus highly automated quality control and data filtering features. Unlike most tools, AfterQC analyses the overlapping of paired sequences for paired-end sequencing data. Based on overlapping analysis, AfterQC can detect and cut adapters, and furthermore it provides a novel function to correct wrong bases in the overlapping regions. Another new feature is the detection and visualisation of sequencing bubbles, which are commonly found on flowcell lanes and may cause sequencing errors. Besides normal per-cycle quality and base-content plotting, AfterQC also provides features like polyX (a long sub-sequence of the same base X) filtering, automatic trimming and k-mer-based strand bias profiling. For each single FastQ file or pair of FastQ files, AfterQC filters out bad reads, detects and eliminates sequencer bubble effects, trims reads at the front and tail, detects sequencing errors and corrects part of them, and finally outputs clean data and generates HTML reports with interactive figures. AfterQC can run in batch mode with multiprocess support; it can run with a single FastQ file, a single pair of FastQ files (for paired-end sequencing), or a folder of FastQ files to be processed automatically. Based on overlapping analysis, AfterQC can estimate the sequencing error rate and profile the error transform distribution. The results of our error profiling tests show that the error distribution is highly platform dependent. Much more than just another new quality control (QC) tool, AfterQC is able to perform quality control, data

  11. Assessing the quality of radiographic processing in general dental practice.

    PubMed

    Thornley, P H; Stewardson, D A; Rout, P G J; Burke, F J T

    2006-05-13

    To determine if a commercial device (Vischeck) for monitoring film processing quality was a practical option in general dental practice, and to assess processing quality among a group of GDPs in the West Midlands with this device. Clinical evaluation. General dental practice, UK, 2004. Ten GDP volunteers from a practice based research group processed Vischeck strips (a) when chemicals were changed, (b) one week later, and (c) immediately before the next change of chemicals. These were compared with strips processed under ideal conditions. Additionally, a series of duplicate radiographs were produced and processed together with Vischeck strips in progressively more dilute developer solutions to compare the change in radiograph quality assessed clinically with that derived from the Vischeck. The Vischeck strips suggested that at the time chosen for change of processing chemicals, eight dentists had been processing films well beyond the point indicated for replacement. Solutions were changed after a wide range of time periods and number of films processed. The calibration of the Vischeck strip correlated closely to a clinical assessment of acceptable film quality. Vischeck strips are a useful aid to monitoring processing quality in automatic developers in general dental practice. Most of this group of GDPs were using chemicals beyond the point at which diagnostic yield would be affected.

  12. Comparison of two automatic methods for the assessment of brachial artery flow-mediated dilation.

    PubMed

    Faita, Francesco; Masi, Stefano; Loukogeorgakis, Stavros; Gemignani, Vincenzo; Okorie, Mike; Bianchini, Elisabetta; Charakida, Marietta; Demi, Marcello; Ghiadoni, Lorenzo; Deanfield, John Eric

    2011-01-01

    Brachial artery flow-mediated dilation (FMD) is associated with risk factors, providing information on cardiovascular prognosis. Despite the large effort to standardize the methodology, the FMD examination is still characterized by problems of reproducibility and reliability that can be partially overcome with the use of automatic systems. We developed real-time software for the assessment of brachial FMD (FMD Studio, Institute of Clinical Physiology, Pisa, Italy) from ultrasound images. The aim of this study is to compare our system with another automatic method (Brachial Analyzer, MIA LLC, IA, USA), which is currently considered a reference method in FMD assessment. The agreement between the systems was assessed as follows. Protocol 1: mean baseline (Basal) and maximal (Max) brachial artery diameter after forearm ischemia, together with FMD, calculated as the maximal percentage diameter increase, were evaluated in 60 recorded FMD sequences. Protocol 2: values of diameter and FMD were evaluated in 618 frames extracted from 12 sequences. All biases are negligible and the standard deviations of the differences are satisfactory (protocol 1: -0.27 ± 0.59%; protocol 2: -0.26 ± 0.61%) for FMD measurements. Analysis times were reduced (-33%) when FMD Studio was used. Examinations rejected due to poor quality were 2% with FMD Studio and 5% with Brachial Analyzer. In conclusion, the compared systems show an excellent degree of agreement and can be used interchangeably. Thus, the use of a system characterized by real-time functionality could represent a reference method for assessing endothelial function in clinical trials.
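
    The FMD definition used above reduces to a one-line computation, sketched here with hypothetical diameter values.

```python
def flow_mediated_dilation(baseline_diameter_mm, diameters_mm):
    """FMD (%) = maximal percentage increase of brachial diameter over baseline."""
    peak = max(diameters_mm)
    return 100.0 * (peak - baseline_diameter_mm) / baseline_diameter_mm

# Example: baseline 3.80 mm, peak post-ischaemia diameter 4.05 mm -> ~6.6% FMD
print(flow_mediated_dilation(3.80, [3.82, 3.95, 4.05, 4.01]))
```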

  13. Automatic limb identification and sleeping parameters assessment for pressure ulcer prevention.

    PubMed

    Baran Pouyan, Maziyar; Birjandtalab, Javad; Nourani, Mehrdad; Matthew Pompeo, M D

    2016-08-01

    Pressure ulcers (PUs) are common among vulnerable patients such as elderly, bedridden and diabetic. PUs are very painful for patients and costly for hospitals and nursing homes. Assessment of sleeping parameters on at-risk limbs is critical for ulcer prevention. An effective assessment depends on automatic identification and tracking of at-risk limbs. An accurate limb identification can be used to analyze the pressure distribution and assess risk for each limb. In this paper, we propose a graph-based clustering approach to extract the body limbs from the pressure data collected by a commercial pressure map system. A robust signature-based technique is employed to automatically label each limb. Finally, an assessment technique is applied to evaluate the experienced stress by each limb over time. The experimental results indicate high performance and more than 94% average accuracy of the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Automatic Rotational Sky Quality Meter (R-SQM) Design and Software for Astronomical Observatories

    NASA Astrophysics Data System (ADS)

    Dogan, E.; Ozbaldan, E. E.; Shameoni, Niaei M.; Yesilyaprak, C.

    2016-12-01

    We present the new design of a Sky Quality Meter (SQM) device, an automatic rotational sky quality meter (R-SQM), developed by the DAG (Eastern Anatolia Observatory) Technical Team. The R-SQM is required for determining long-term changes in the sky quality of an astronomical observatory and consists of four SQM devices mounted at different angles on a rotating shaft so as to scan the whole sky. The system is controlled by a Raspberry Pi control card and a stepper motor with its driver, together with dedicated software.

  15. Identification of suitable fundus images using automated quality assessment methods.

    PubMed

    Şevik, Uğur; Köse, Cemal; Berber, Tolga; Erdöl, Hidayet

    2014-04-01

    Retinal image quality assessment (IQA) is a crucial process for automated retinal image analysis systems to obtain an accurate and successful diagnosis of retinal diseases. Consequently, the first step in a good retinal image analysis system is measuring the quality of the input image. We present an approach for finding medically suitable retinal images for retinal diagnosis. We used a three-class grading system that consists of good, bad, and outlier classes. We created a retinal image quality dataset with a total of 216 consecutive images called the Diabetic Retinopathy Image Database. We identified the suitable images within the good images for automatic retinal image analysis systems using a novel method. Subsequently, we evaluated our retinal image suitability approach using the Digital Retinal Images for Vessel Extraction and Standard Diabetic Retinopathy Database Calibration level 1 public datasets. The results were measured through the F1 metric, which is a harmonic mean of precision and recall metrics. The highest F1 scores of the IQA tests were 99.60%, 96.50%, and 85.00% for good, bad, and outlier classes, respectively. Additionally, the accuracy of our suitable image detection approach was 98.08%. Our approach can be integrated into any automatic retinal analysis system with sufficient performance scores.

  16. Multi-frame super-resolution with quality self-assessment for retinal fundus videos.

    PubMed

    Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P

    2014-01-01

    This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. In order to compensate for heterogeneous illumination of the fundus, we integrate retrospective illumination correction for photometric registration into the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancement of the low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where the sensitivity was increased by 13% using super-resolution reconstruction.

  17. RobotReviewer: evaluation of a system for automatically assessing bias in clinical trials.

    PubMed

    Marshall, Iain J; Kuiper, Joël; Wallace, Byron C

    2016-01-01

    To develop and evaluate RobotReviewer, a machine learning (ML) system that automatically assesses bias in clinical trials. From a (PDF-formatted) trial report, the system should determine risks of bias for the domains defined by the Cochrane Risk of Bias (RoB) tool, and extract supporting text for these judgments. We algorithmically annotated 12,808 trial PDFs using data from the Cochrane Database of Systematic Reviews (CDSR). Trials were labeled as being at low or high/unclear risk of bias for each domain, and sentences were labeled as being informative or not. This dataset was used to train a multi-task ML model. We estimated the accuracy of ML judgments versus humans by comparing trials with two or more independent RoB assessments in the CDSR. Twenty blinded experienced reviewers rated the relevance of supporting text, comparing ML output with equivalent (human-extracted) text from the CDSR. By retrieving the top 3 candidate sentences per document (top3 recall), the best ML text was rated more relevant than text from the CDSR, but not significantly (60.4% ML text rated 'highly relevant' v 56.5% of text from reviews; difference +3.9%, [-3.2% to +10.9%]). Model RoB judgments were less accurate than those from published reviews, though the difference was <10% (overall accuracy 71.0% with ML v 78.3% with CDSR). Risk of bias assessment may be automated with reasonable accuracy. Automatically identified text supporting bias assessment is of equal quality to the manually identified text in the CDSR. This technology could substantially reduce reviewer workload and expedite evidence syntheses. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  18. Quality assessment of protein model-structures using evolutionary conservation.

    PubMed

    Kalman, Matan; Ben-Tal, Nir

    2010-05-15

    Programs that evaluate the quality of a protein structural model are important both for validating the structure determination procedure and for guiding the model-building process. Such programs are based on properties of native structures that are generally not expected for faulty models. One such property, which is rarely used for automatic structure quality assessment, is the tendency for conserved residues to be located at the structural core and for variable residues to be located at the surface. We present ConQuass, a novel quality assessment program based on the consistency between the model structure and the protein's conservation pattern. We show that it can identify problematic structural models, and that the scores it assigns to the server models in CASP8 correlate with the similarity of the models to the native structure. We also show that when the conservation information is reliable, the method's performance is comparable and complementary to that of the other single-structure quality assessment methods that participated in CASP8 and that do not use additional structural information from homologs. A perl implementation of the method, as well as the various perl and R scripts used for the analysis are available at http://bental.tau.ac.il/ConQuass/. nirb@tauex.tau.ac.il Supplementary data are available at Bioinformatics online.

  19. Automatic humidification system to support the assessment of food drying processes

    NASA Astrophysics Data System (ADS)

    Ortiz Hernández, B. D.; Carreño Olejua, A. R.; Castellanos Olarte, J. M.

    2016-07-01

    This work presents the main features of an automatic humidification system that provides drying air matching the environmental conditions of different climate zones. This conditioned air is then used to assess the drying process of different agro-industrial products at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University of Bucaramanga, Colombia. The automatic system allows control strategies for supplying drying air under specified conditions of temperature and humidity to be created and improved. The development of automatic routines to control and acquire real-time data was made possible by the use of robust control systems and suitable instrumentation. The signals are read and directed to the controller memory, where they are scaled and transferred to a memory unit. Using the IP address, it is possible to access the data to perform supervision tasks. One important characteristic of this automatic system is its Dynamic Data Exchange (DDE) server, which allows direct communication between the control unit and the computer used to build the experimental curves.

  20. Feasibility Study on Fully Automatic High Quality Translation: Volume II. Final Technical Report.

    ERIC Educational Resources Information Center

    Lehmann, Winifred P.; Stachowitz, Rolf

    This second volume of a two-volume report on a fully automatic high quality translation (FAHQT) contains relevant papers contributed by specialists on the topic of machine translation. The papers presented here cover such topics as syntactical analysis in transformational grammar and in machine translation, lexical features in translation and…

  1. A Risk Assessment System with Automatic Extraction of Event Types

    NASA Astrophysics Data System (ADS)

    Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula

    In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting weak signals of emerging risks as early as possible, ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general-purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.

  2. Feasibility Study on Fully Automatic High Quality Translation: Volume I. Final Technical Report.

    ERIC Educational Resources Information Center

    Lehmann, Winifred P.; Stachowitz, Rolf

    The object of this theoretical inquiry is to examine the controversial issue of a fully automatic high quality translation (FAHQT) in the light of past and projected advances in linguistic theory and hardware/software capability. This first volume of a two-volume report discusses the requirements of translation and aspects of human and machine…

  3. Assessment of Automatic Fare Collection Equipment at Three European Transit Properties

    DOT National Transportation Integrated Search

    1982-12-01

    This report is an assessment of automatic fare collection (AFC) equipment performance conducted at three European properties in accordance with procedures defined in the Property Evaluation Plan (PEP) developed by Input Output Computer Services, Inc....

  4. No-reference quality assessment based on visual perception

    NASA Astrophysics Data System (ADS)

    Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao

    2014-11-01

    The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and an NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, two main features of the HVS: a typical HVS captures scenes by sparse coding and uses experienced knowledge to perceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is computed with this model; then, the mapping between sparse codes and subjective quality scores is trained with least-squares support vector machine (LS-SVM) regression, which yields a regressor that can predict image quality; finally, the visual quality metric of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the specific types of distortions present in the database are: 227 images of JPEG2000, 233

  5. Improving the Quality of Welding Seam of Automatic Welding of Buckets Based on TCP

    NASA Astrophysics Data System (ADS)

    Hu, Min

    2018-02-01

    Since February 2014, welding defects had frequently appeared on the automatic welding line for buckets. The average repair time for each bucket was 26 min, which seriously affected production efficiency and welding quality. We conducted troubleshooting and found that the main causes of the bucket welding defects were deviations of the robot tool centre points and poor quality of the locating welds. We corrected the gripper, the welding torch and the repeat-positioning accuracy of the robots to control the quality of the positioning welds. The welding defect rate of the buckets was greatly reduced, ensuring production efficiency and welding quality.

  6. Assessing the quality of restored images in optical long-baseline interferometry

    NASA Astrophysics Data System (ADS)

    Gomes, Nuno; Garcia, Paulo J. V.; Thiébaut, Éric

    2017-03-01

    Assessing the quality of aperture synthesis maps is relevant for benchmarking image reconstruction algorithms, for the scientific exploitation of data from optical long-baseline interferometers, and for the design/upgrade of new/existing interferometric imaging facilities. Although metrics have been proposed in these contexts, no systematic study has been conducted on the selection of a robust metric for quality assessment. This article addresses the question: what is the best metric to assess the quality of a reconstructed image? It starts by considering several metrics and selecting a few based on general properties. Then, a variety of image reconstruction cases are considered. The observational scenarios are phase closure and phase referencing at the Very Large Telescope Interferometer (VLTI), for a combination of two, three, four and six telescopes. End-to-end image reconstruction is accomplished with the MIRA software, and several merit functions are put to test. It is found that convolution by an effective point spread function is required for proper image quality assessment. The effective angular resolution of the images is superior to naive expectation based on the maximum frequency sampled by the array. This is due to the prior information used in the aperture synthesis algorithm and to the nature of the objects considered. The ℓ1-norm is the most robust of all considered metrics, because being linear it is less sensitive to image smoothing by high regularization levels. For the cases considered, this metric allows the implementation of automatic quality assessment of reconstructed images, with a performance similar to human selection.
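
    A minimal sketch of the recommended procedure, convolving both images with an effective point spread function before taking the ℓ1-norm of their difference, is given below; the isotropic Gaussian PSF and its width are placeholders for the array's effective beam.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def l1_image_metric(reconstruction, truth, effective_psf_sigma=2.0):
    """l1 distance between PSF-convolved reconstruction and truth.

    Both images are smoothed by an effective PSF (here an isotropic Gaussian,
    a placeholder for the interferometric beam) before comparison, and are
    normalised to unit total flux so the metric compares structure.
    """
    r = gaussian_filter(np.asarray(reconstruction, float), effective_psf_sigma)
    t = gaussian_filter(np.asarray(truth, float), effective_psf_sigma)
    r /= r.sum()
    t /= t.sum()
    return float(np.sum(np.abs(r - t)))

# Toy example: a point-like object and a slightly noisy reconstruction of it
truth = np.zeros((64, 64))
truth[30:34, 30:34] = 1.0
recon = truth + 0.01 * np.random.rand(64, 64)
print(l1_image_metric(recon, truth))
```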

  7. Automatic cloud coverage assessment of Formosat-2 image

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2011-11-01

    The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated on a daily-revisit mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images in the NSPO Image Processing System generally consists of two major steps. Firstly, an unsupervised K-means method is used to automatically estimate the cloud statistics of a Formosat-2 image. Secondly, the cloud coverage of the image is estimated by manual examination. A more accurate Automatic Cloud Coverage Assessment (ACCA) method would therefore increase the efficiency of the second step by providing a good prediction of the cloud statistics. In this paper, based mainly on the research results of Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method that includes pre-processing and post-processing analysis. In the pre-processing analysis, the cloud statistics are determined using unsupervised K-means classification, Sobel's method, Otsu's method, re-examination of non-cloudy pixels, and a cross-band filter method. A box-counting fractal method is used as a post-processing tool to double-check the results of the pre-processing analysis and increase the efficiency of the manual examination.
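
    As an illustration of one ingredient of the pre-processing analysis, the sketch below estimates a cloud fraction from a single brightness band with Otsu's threshold; it is a simplification and not the full multi-step ACCA pipeline described above.

```python
import numpy as np
from skimage.filters import threshold_otsu

def cloud_fraction_otsu(band):
    """Rough cloud-cover estimate: fraction of pixels above the Otsu threshold.

    band: 2-D array of reflectance or digital numbers from a visible band.
    Clouds are assumed to form the bright class, which is only a first
    approximation (bright ground also triggers it).
    """
    band = np.asarray(band, dtype=float)
    thresh = threshold_otsu(band)
    return float(np.mean(band > thresh))

# Synthetic example: a dark scene with a bright "cloud" patch
scene = 0.1 + 0.02 * np.random.rand(512, 512)
scene[100:200, 150:300] = 0.8
print(cloud_fraction_otsu(scene))  # roughly 0.057 (the patch covers ~5.7% of pixels)
```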

  8. Investigating Prompt Difficulty in an Automatically Scored Speaking Performance Assessment

    ERIC Educational Resources Information Center

    Cox, Troy L.

    2013-01-01

    Speaking assessments for second language learners have traditionally been expensive to administer because of the cost of rating the speech samples. To reduce the cost, many researchers are investigating the potential of using automatic speech recognition (ASR) as a means to score examinee responses to open-ended prompts. This study examined the…

  9. A framework for automatic information quality ranking of diabetes websites.

    PubMed

    Belen Sağlam, Rahime; Taskaya Temizel, Tugba

    2015-01-01

    Objective: When searching for particular medical information on the internet, the challenge lies in distinguishing the websites that are relevant to the topic and contain accurate information. In this article, we propose a framework that automatically identifies and ranks diabetes websites according to their relevance and information quality based on the website content. Design: The proposed framework ranks diabetes websites according to their content quality, relevance and evidence-based medicine. The framework combines information retrieval techniques with a lexical resource based on SentiWordNet, making it possible to work with biased and untrusted websites while, at the same time, ensuring content relevance. Measurement: The evaluation measures used were Pearson correlation, true positives, false positives and accuracy. We tested the framework on a benchmark data set consisting of 55 websites with varying degrees of information quality problems. Results: The proposed framework gives good results that are comparable with the non-automated information quality measuring approaches in the literature. The correlation between the results of the proposed automated framework and the ground truth is 0.68 on average with p < 0.001, which is greater than that of the other automated methods proposed in the literature (average r score of 0.33).

  10. A multiparametric automatic method to monitor long-term reproducibility in digital mammography: results from a regional screening programme.

    PubMed

    Gennaro, G; Ballaminut, A; Contento, G

    2017-09-01

    This study illustrates a multiparametric automatic method for monitoring the long-term reproducibility of digital mammography systems and its application on a large scale. Twenty-five digital mammography systems employed within a regional screening programme were controlled weekly using the same type of phantom, whose images were analysed by an automatic software tool. To assess system reproducibility levels, 15 image quality indices (IQIs) were extracted and compared with the corresponding indices previously determined by a baseline procedure. The coefficients of variation (COVs) of the IQIs were used to assess the overall variability. A total of 2553 phantom images were collected from the 25 digital mammography systems from March 2013 to December 2014. Most of the systems showed excellent image quality reproducibility over the surveillance interval, with mean variability below 5%. The variability of each IQI was below 5%, with the exception of one index associated with the smallest phantom objects (0.25 mm), which was below 10%. The method applied for the reproducibility tests (multi-detail phantoms, a cloud-based automatic software tool measuring multiple image quality indices, and statistical process control) proved to be effective and applicable on a large scale and to any type of digital mammography system. • Reproducibility of mammography image quality should be monitored by appropriate quality controls. • Use of automatic software tools allows image quality evaluation by multiple indices. • System reproducibility can be assessed by comparing current index values with baseline data. • Overall system reproducibility of modern digital mammography systems is excellent. • The method proposed and applied is cost-effective and easily scalable.
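
    The statistical core of such monitoring can be sketched as below: the coefficient of variation of an index over the weekly measurements and its mean deviation from baseline; the 5% flag level follows the variability reported above, while the form of the check itself is an assumption.

```python
import numpy as np

def iqi_reproducibility(weekly_values, baseline, cov_limit=0.05):
    """Summarise long-term reproducibility of one image quality index (IQI).

    weekly_values: sequence of weekly phantom measurements of the index,
    baseline: value established at commissioning,
    cov_limit: variability flag level (5%, matching the typical level above).
    Returns (coefficient of variation, mean relative deviation from baseline, ok flag).
    """
    values = np.asarray(weekly_values, dtype=float)
    cov = values.std(ddof=1) / values.mean()
    rel_dev = float(np.mean((values - baseline) / baseline))
    return float(cov), rel_dev, cov <= cov_limit

# Hypothetical ~90 weeks of measurements of one index around a baseline of 100
weekly = 100 + np.random.default_rng(0).normal(scale=2.0, size=90)
print(iqi_reproducibility(weekly, baseline=100.0))
```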

  11. Validation of a Visual-Spatial Secondary Task to Assess Automaticity in Laparoscopic Skills.

    PubMed

    Castillo, Richard; Alvarado, Juan; Moreno, Pablo; Billeke, Pablo; Martínez, Carlos; Varas, Julián; Jarufe, Nicolás

    2017-12-26

    Our objective was to assess the reliability and validity of a visual-spatial secondary task (VSST) as a method to measure automaticity in a basic simulated laparoscopic skill model. In motor skill acquisition, expertise is defined by automaticity. This stage is characterized by the highest level of performance achieved with fewer cognitive and attentional resources, allowing experts to perform multiple tasks. Conventional validated parameters such as operative time, objective assessment skill scales (OSATS) and movement economy are insufficient to distinguish whether an individual has reached the more advanced learning phases, such as automaticity. There is literature on using a VSST as an attention indicator that correlates with the level of automaticity. Novices who had completed and passed the Fundamentals of Laparoscopic Surgery course, and laparoscopy experts, were enrolled in an experimental study and measured under dual-task conditions. Each participant performed the test giving priority to the primary task while simultaneously responding to a VSST. The primary task consisted of 4 interrupted laparoscopic stitches (ILS) on a bench model. The VSST was a screen that showed different patterns that the surgeon had to recognize, pressing a pedal while doing the stitches (PsychoPsy software, Python, MacOS). Novices were overtrained on ILS until they reached at least 100 repetitions and were then retested. Participants were video recorded and then assessed by 2 blinded evaluators who measured operative time and OSATS. These scores were considered indicators of quality for the primary task. The VSST performance was measured by the detectability index (DI), which is a ratio between correct and wrong detections. A reliable evaluation was defined as two measures of DI with less than 10% difference, maintaining the cutoff scores for performance on the primary task (operative time <110 s and OSATS >17 points). Novices (n = 11) achieved a reliable measure of the test after 2 (2-5) repetitions on

  12. Socioeconomic Impact Assessment of the Los Angeles Automatic Vehicle Monitoring (AVM) Demonstration

    DOT National Transportation Integrated Search

    1982-09-01

    This report presents a socioeconomic impact assessment of the Automatic Vehicle Monitoring (AVM) Demonstration in Los Angeles. An AVM system uses location, communication, and data processing subsystems to monitor the locations of appropriately equipp...

  13. Assessment of ambient air quality in Eskişehir, Turkey.

    PubMed

    Ozden, O; Döğeroğlu, T; Kara, S

    2008-07-01

    This paper presents an assessment of the air quality of the city of Eskişehir, located 230 km southwest of the capital of Turkey. Only five of the major air pollutants, the most studied worldwide and available for the region, were considered for the assessment. Available sulphur dioxide (SO(2)), particulate matter (PM), nitrogen dioxide (NO(2)), ozone (O(3)), and non-methane volatile organic compounds (NMVOCs) data from local emission inventory studies provided relative source contributions of the selected pollutants to the region. The contributions of these typical pollution parameters, selected for characterizing such an urban atmosphere, were compared with the data established for other cities in the nation and in other countries. Additionally, regional ambient SO(2) and PM concentrations, determined by semi-automatic monitoring at two sites, were gathered from the National Ambient Air Monitoring Network (NAAMN). Regional data for ambient NO(2) (a precursor of ozone, as are VOCs) and ozone concentrations, obtained through the application of the passive sampling method, were provided by the still ongoing local air quality monitoring studies conducted at six different sites, representative of the traffic-dense, coal/natural-gas-burning residential, or industrial/rural localities of the city. Passively sampled ozone data at a single rural site were also verified against the data from a continuous automatic ozone monitoring system located at that site. The effects of variations in seasonal activities, the newly established railway system, and the switch to natural gas on temporal changes in air quality were all considered in the assessment. Based on the comparisons with the national [AQCR (Air Quality Control Regulation). Ministry of Environment (MOE), Ankara. Official Newspaper 19269; 1986.] and a number of international [WHO (World Health Organization). Guidelines for Air Quality. Geneva; 2000. Downloaded in January 2006, website: http://www.who.int/peh/; EU (European Union

  14. Image quality assessment for video stream recognition systems

    NASA Astrophysics Data System (ADS)

    Chernov, Timofey S.; Razumnuy, Nikita P.; Kozharinov, Alexander S.; Nikolaev, Dmitry P.; Arlazarov, Vladimir V.

    2018-04-01

    Recognition and machine vision systems have long been widely used in many disciplines to automate various processes of life and industry. Input images of optical recognition systems can be subjected to a large number of different distortions, especially in uncontrolled or natural shooting conditions, which leads to unpredictable results of recognition systems, making it impossible to assess their reliability. For this reason, it is necessary to perform quality control of the input data of recognition systems, which is facilitated by modern progress in the field of image quality evaluation. In this paper, we investigate the approach to designing optical recognition systems with built-in input image quality estimation modules and feedback, for which the necessary definitions are introduced and a model for describing such systems is constructed. The efficiency of this approach is illustrated by the example of solving the problem of selecting the best frames for recognition in a video stream for a system with limited resources. Experimental results are presented for the system for identity documents recognition, showing a significant increase in the accuracy and speed of the system under simulated conditions of automatic camera focusing, leading to blurring of frames.

  15. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models have been in high demand by many public and private organisations, and steadily growing capabilities in both quality and quantity are further increasing this demand. The quality evaluation of these 3D models is a relevant issue from both the scientific and the practical points of view. In this paper, we present a method for the quality evaluation of 3D building models that are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation has been performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and their validity from the evaluation point of view is also assessed.

  16. Network design and quality checks in automatic orientation of close-range photogrammetric blocks.

    PubMed

    Dall'Asta, Elisa; Thoeni, Klaus; Santise, Marina; Forlani, Gianfranco; Giacomini, Anna; Roncella, Riccardo

    2015-04-03

    Due to the recent improvements of automatic measurement procedures in photogrammetry, multi-view 3D reconstruction technologies are becoming a favourite survey tool. Rapidly widening structure-from-motion (SfM) software packages offer significantly easier image processing workflows than traditional photogrammetry packages. However, while most orientation and surface reconstruction strategies will almost always succeed in any given task, estimating the quality of the result is, to some extent, still an open issue. An assessment of the precision and reliability of block orientation is necessary and should be included in every processing pipeline. Such a need was clearly felt from the results of close-range photogrammetric surveys of in situ full-scale and laboratory-scale experiments. In order to study the impact of the block control and the camera network design on the block orientation accuracy, a series of Monte Carlo simulations was performed. Two image block configurations were investigated: a single pseudo-normal strip and a circular highly-convergent block. The influence of surveying and data processing choices, such as the number and accuracy of the ground control points, autofocus and camera calibration was investigated. The research highlights the most significant aspects and processes to be taken into account for adequate in situ and laboratory surveys, when modern SfM software packages are used, and evaluates their effect on the quality of the results of the surface reconstruction.

  17. SIMULATING LOCAL DENSE AREAS USING PMMA TO ASSESS AUTOMATIC EXPOSURE CONTROL IN DIGITAL MAMMOGRAPHY.

    PubMed

    Bouwman, R W; Binst, J; Dance, D R; Young, K C; Broeders, M J M; den Heeten, G J; Veldkamp, W J H; Bosmans, H; van Engen, R E

    2016-06-01

    Current digital mammography (DM) X-ray systems are equipped with advanced automatic exposure control (AEC) systems, which determine the exposure factors depending on breast composition. In the supplement of the European guidelines for quality assurance in breast cancer screening and diagnosis, a phantom-based test is included to evaluate the AEC response to local dense areas in terms of signal-to-noise ratio (SNR). This study evaluates the proposed test in terms of SNR and dose for four DM systems. The glandular fraction represented by the local dense area was assessed by analytic calculations. It was found that the proposed test simulates adipose to fully glandular breast compositions in attenuation. The doses associated with the phantoms were found to match well with the patient dose distribution. In conclusion, after some small adaptations, the test is valuable for the assessment of the AEC performance in terms of both SNR and dose. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Assessment of voice, speech, and related quality of life in advanced head and neck cancer patients 10-years+ after chemoradiotherapy.

    PubMed

    Kraaijenga, S A C; Oskam, I M; van Son, R J J H; Hamming-Vrieze, O; Hilgers, F J M; van den Brekel, M W M; van der Molen, L

    2016-04-01

    Assessment of long-term objective and subjective voice, speech, articulation, and quality of life in patients with head and neck cancer (HNC) treated with concurrent chemoradiotherapy (CRT) for advanced, stage IV disease. Twenty-two disease-free survivors, treated with cisplatin-based CRT for inoperable HNC (1999-2004), were evaluated at 10-years post-treatment. A standard Dutch text was recorded. Perceptual analysis of voice, speech, and articulation was conducted by two expert listeners (SLPs). Also an experimental expert system based on automatic speech recognition was used. Patients' perception of voice and speech and related quality of life was assessed with the Voice Handicap Index (VHI) and Speech Handicap Index (SHI) questionnaires. At a median follow-up of 11-years, perceptual evaluation showed abnormal scores in up to 64% of cases, depending on the outcome parameter analyzed. Automatic assessment of voice and speech parameters correlated moderate to strong with perceptual outcome scores. Patient-reported problems with voice (VHI>15) and speech (SHI>6) in daily life were present in 68% and 77% of patients, respectively. Patients treated with IMRT showed significantly less impairment compared to those treated with conventional radiotherapy. More than 10-years after organ-preservation treatment, voice and speech problems are common in this patient cohort, as assessed with perceptual evaluation, automatic speech recognition, and with validated structured questionnaires. There were fewer complaints in patients treated with IMRT than with conventional radiotherapy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. A Regression-Based Family of Measures for Full-Reference Image Quality Assessment

    NASA Astrophysics Data System (ADS)

    Oszust, Mariusz

    2016-12-01

    The advances in the development of imaging devices have resulted in the need for automatic quality evaluation of displayed visual content in a way that is consistent with human visual perception. In this paper, an approach to full-reference image quality assessment (IQA) is proposed in which several IQA measures, representing different approaches to modelling human visual perception, are efficiently combined in order to produce an objective quality evaluation of examined images that is highly correlated with the evaluation provided by human subjects. The optimisation problem of selecting several IQA measures for creating a regression-based hybrid IQA measure, or multimeasure, is defined and solved using a genetic algorithm. Experimental evaluation on the four largest IQA benchmarks reveals that the multimeasures obtained using the proposed approach outperform state-of-the-art full-reference IQA techniques, including other recently developed fusion approaches.
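
    The regression-fusion idea can be sketched as below, using ordinary linear regression on a few elementary measure scores as a stand-in; the genetic-algorithm selection of component measures used in the paper is not reproduced here, and the toy data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import spearmanr

# Toy data: rows are images, columns are scores from individual IQA measures
# (e.g. PSNR, SSIM, ...); `mos` stands in for subjective mean opinion scores.
rng = np.random.default_rng(0)
measure_scores = rng.random((200, 4))
mos = 2.0 * measure_scores[:, 0] + 1.0 * measure_scores[:, 2] + 0.1 * rng.standard_normal(200)

train, test = slice(0, 150), slice(150, 200)
multimeasure = LinearRegression().fit(measure_scores[train], mos[train])
predicted = multimeasure.predict(measure_scores[test])

# Rank correlation with subjective scores is the usual figure of merit for IQA
print("SROCC on held-out images:", spearmanr(predicted, mos[test]).correlation)
```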

  20. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    NASA Astrophysics Data System (ADS)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method, supervised classification, was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers were trained using the designed features and evaluated with leave-one-out cross-validation: linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC). Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
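
    As a minimal illustration of the first, box-whisker-based idea (not the authors' implementation; the feature values and the conventional 1.5×IQR fence are assumptions), a segmentation can be flagged when a quality feature falls outside the whisker fences:

```python
# Flag suspected segmentation failures with the box-whisker (IQR) rule.
import numpy as np

def box_whisker_outliers(feature_values, k=1.5):
    x = np.asarray(feature_values, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return np.flatnonzero((x < lo) | (x > hi))   # indices of suspected failures

scores = [0.91, 0.88, 0.90, 0.35, 0.89, 0.93, 0.87]  # e.g. one feature per subject
print(box_whisker_outliers(scores))  # -> [3]
```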

  1. Automatic Classification of Station Quality by Image Based Pattern Recognition of Ppsd Plots

    NASA Astrophysics Data System (ADS)

    Weber, B.; Herrnkind, S.

    2017-12-01

    The number of seismic stations is growing, and it has become common practice to share station waveform data in real time with the main data centers such as IRIS, GEOFON, ORFEUS and RESIF. This has made analyzing station performance increasingly important for automatic real-time processing and station selection. The value of a station depends on different factors such as the quality and quantity of the data, the location of the site, the general station density in the surrounding area and, finally, the type of application it can be used for. The approach described by McNamara and Boaz (2006) became standard in the last decade. It incorporates a probability density function (PDF) to display the distribution of seismic power spectral density (PSD). The low noise model (LNM) and high noise model (HNM) introduced by Peterson (1993) are also displayed in the PPSD plots introduced by McNamara and Boaz, allowing an estimation of the station quality. Here we describe how we established an automatic station quality classification module using image-based pattern recognition on PPSD plots. The plots were split into 4 bands: short-period characteristics (0.1-0.8 s), body wave characteristics (0.8-5 s), microseismic characteristics (5-12 s) and long-period characteristics (12-100 s). The module sqeval connects to a SeedLink server, checks available stations, and requests PPSD plots through the Mustang service from IRIS, from PQLX/SQLX, or from GIS (gempa Image Server), a module to generate different kinds of images such as trace plots, map plots, helicorder plots or PPSD plots. It compares the image-based quality patterns for the different period bands with the retrieved PPSD plot. The quality of a station is divided into 5 classes for each of the 4 bands. Classes A, B, C, D define regular quality between LNM and HNM, while the fifth class represents out-of-order stations with gain problems, missing data etc. Over all period bands about 100 different patterns are required to classify most of the stations available on the

  2. Automatic MeSH term assignment and quality assessment.

    PubMed Central

    Kim, W.; Aronson, A. R.; Wilbur, W. J.

    2001-01-01

    For computational purposes documents or other objects are most often represented by a collection of individual attributes that may be strings or numbers. Such attributes are often called features and success in solving a given problem can depend critically on the nature of the features selected to represent documents. Feature selection has received considerable attention in the machine learning literature. In the area of document retrieval we refer to feature selection as indexing. Indexing has not traditionally been evaluated by the same methods used in machine learning feature selection. Here we show how indexing quality may be evaluated in a machine learning setting and apply this methodology to results of the Indexing Initiative at the National Library of Medicine. PMID:11825203

  3. Combined Use of Automatic Tube Voltage Selection and Current Modulation with Iterative Reconstruction for CT Evaluation of Small Hypervascular Hepatocellular Carcinomas: Effect on Lesion Conspicuity and Image Quality

    PubMed Central

    Lv, Peijie; Liu, Jie; Zhang, Rui; Jia, Yan

    2015-01-01

    Objective To assess the lesion conspicuity and image quality in CT evaluation of small (≤ 3 cm) hepatocellular carcinomas (HCCs) using automatic tube voltage selection (ATVS) and automatic tube current modulation (ATCM) with or without iterative reconstruction. Materials and Methods One hundred and five patients with 123 HCC lesions were included. Fifty-seven patients were scanned using both ATVS and ATCM, and images were reconstructed using either filtered back-projection (FBP) (group A1) or sinogram-affirmed iterative reconstruction (SAFIRE) (group A2). Forty-eight patients were imaged using only ATCM, with a fixed tube potential of 120 kVp and FBP reconstruction (group B). Quantitative parameters (image noise in Hounsfield units and contrast-to-noise ratio of the aorta, the liver, and the hepatic tumors) and qualitative visual parameters (image noise, overall image quality, and lesion conspicuity as graded on a 5-point scale) were compared among the groups. Results Group A2, scanned with automatically chosen tube voltages of 80 kVp and 100 kVp, ranked best in lesion conspicuity and in subjective and objective image quality (p values ranging from < 0.001 to 0.004) among the three groups, except for overall image quality between group A2 and group B (p = 0.022). Group A1 showed higher image noise (p = 0.005) but similar lesion conspicuity and overall image quality as compared with group B. The radiation dose in group A was 19% lower than that in group B (p = 0.022). Conclusion CT scanning with the combined use of ATVS and ATCM and image reconstruction with the SAFIRE algorithm provides higher lesion conspicuity and better image quality for evaluating small hepatic HCCs with a reduced radiation dose. PMID:25995682
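
    As a reminder of how the quantitative parameters above are typically computed, the sketch below derives image noise and a lesion-to-liver contrast-to-noise ratio from two regions of interest; the HU samples are simulated and the ROI choice is an assumption, not taken from the study.

```python
# Image noise = standard deviation in a homogeneous ROI; CNR = |mean difference| / noise.
import numpy as np

def image_noise(roi_hu):
    return float(np.std(roi_hu))

def cnr(lesion_hu, background_hu):
    noise = image_noise(background_hu)
    return abs(float(np.mean(lesion_hu)) - float(np.mean(background_hu))) / noise

liver = np.random.default_rng(1).normal(60, 12, 200)   # background liver ROI (HU)
tumor = np.random.default_rng(2).normal(95, 12, 50)    # hypervascular lesion ROI (HU)
print(round(image_noise(liver), 1), round(cnr(tumor, liver), 2))
```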

  4. Automatic and Objective Assessment of Alternating Tapping Performance in Parkinson's Disease

    PubMed Central

    Memedi, Mevludin; Khan, Taha; Grenholm, Peter; Nyholm, Dag; Westin, Jerker

    2013-01-01

    This paper presents the development and evaluation of a method for enabling quantitative and automatic scoring of alternating tapping performance of patients with Parkinson's disease (PD). Ten healthy elderly subjects and 95 patients in different clinical stages of PD have utilized a touch-pad handheld computer to perform alternate tapping tests in their home environments. First, a neurologist used a web-based system to visually assess impairments in four tapping dimensions (‘speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’) and a global tapping severity (GTS). Second, tapping signals were processed with time series analysis and statistical methods to derive 24 quantitative parameters. Third, principal component analysis was used to reduce the dimensions of these parameters and to obtain scores for the four dimensions. Finally, a logistic regression classifier was trained using a 10-fold stratified cross-validation to map the reduced parameters to the corresponding visually assessed GTS scores. Results showed that the computed scores correlated well to visually assessed scores and were significantly different across Unified Parkinson's Disease Rating Scale scores of upper limb motor performance. In addition, they had good internal consistency, had good ability to discriminate between healthy elderly and patients in different disease stages, had good sensitivity to treatment interventions and could reflect the natural disease progression over time. In conclusion, the automatic method can be useful to objectively assess the tapping performance of PD patients and can be included in telemedicine tools for remote monitoring of tapping. PMID:24351667
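
    The processing chain above (24 tapping parameters, PCA dimension reduction, logistic regression evaluated with 10-fold stratified cross-validation) maps naturally onto a small pipeline sketch; the feature matrix and GTS labels below are random stand-ins, not the study's data.

```python
# Hedged sketch of the scoring pipeline described in the abstract; synthetic data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((105, 24))          # 105 subjects x 24 tapping parameters
y = rng.integers(0, 3, size=105)            # stand-in for visually assessed GTS levels

model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print(cross_val_score(model, X, y, cv=cv).mean())   # mean CV accuracy
```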

  5. Automatic and objective assessment of alternating tapping performance in Parkinson's disease.

    PubMed

    Memedi, Mevludin; Khan, Taha; Grenholm, Peter; Nyholm, Dag; Westin, Jerker

    2013-12-09

    This paper presents the development and evaluation of a method for enabling quantitative and automatic scoring of alternating tapping performance of patients with Parkinson's disease (PD). Ten healthy elderly subjects and 95 patients in different clinical stages of PD have utilized a touch-pad handheld computer to perform alternate tapping tests in their home environments. First, a neurologist used a web-based system to visually assess impairments in four tapping dimensions ('speed', 'accuracy', 'fatigue' and 'arrhythmia') and a global tapping severity (GTS). Second, tapping signals were processed with time series analysis and statistical methods to derive 24 quantitative parameters. Third, principal component analysis was used to reduce the dimensions of these parameters and to obtain scores for the four dimensions. Finally, a logistic regression classifier was trained using a 10-fold stratified cross-validation to map the reduced parameters to the corresponding visually assessed GTS scores. Results showed that the computed scores correlated well to visually assessed scores and were significantly different across Unified Parkinson's Disease Rating Scale scores of upper limb motor performance. In addition, they had good internal consistency, had good ability to discriminate between healthy elderly and patients in different disease stages, had good sensitivity to treatment interventions and could reflect the natural disease progression over time. In conclusion, the automatic method can be useful to objectively assess the tapping performance of PD patients and can be included in telemedicine tools for remote monitoring of tapping.

  6. Automatic portion estimation and visual refinement in mobile dietary assessment

    PubMed Central

    Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.

    2011-01-01

    As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These “portion volumes” utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach. PMID:22242198

  7. Automatic portion estimation and visual refinement in mobile dietary assessment

    NASA Astrophysics Data System (ADS)

    Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.

    2010-01-01

    As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These "portion volumes" utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach.

  8. Accuracy assessment of building point clouds automatically generated from iphone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone-generated point clouds. For the chosen example showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are calculated as (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancement, and quick, real-time change detection. However, further insights should be obtained first on the circumstances that are needed to guarantee a successful point cloud generation from smartphone images.
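
    The registration-based comparison above boils down to nearest-neighbour distances between the two clouds. The sketch below uses synthetic clouds and an assumed outlier threshold (not the authors' pipeline) to compute the mean cloud-to-cloud distance and an outlier percentage.

```python
# Cloud-to-cloud accuracy statistics after registration; synthetic example.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_stats(test_pts, reference_pts, outlier_thresh=0.5):
    tree = cKDTree(reference_pts)
    d, _ = tree.query(test_pts)                  # nearest-neighbour distances (m)
    outlier_pct = 100.0 * np.mean(d > outlier_thresh)
    return float(d.mean()), float(outlier_pct)

rng = np.random.default_rng(0)
tls = rng.random((5000, 3)) * 10.0                       # reference TLS cloud
iphone = tls[:2000] + rng.normal(0, 0.05, (2000, 3))     # noisy "iPhone" cloud
print(cloud_to_cloud_stats(iphone, tls))
```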

  9. Quality of Life Effects of Automatic External Defibrillators in the Home: Results from the Home Automatic External Defibrillator Trial (HAT)

    PubMed Central

    Mark, Daniel B.; Anstrom, Kevin J.; McNulty, Steven E.; Flaker, Greg C.; Tonkin, Andrew M.; Smith, Warren M.; Toff, William D.; Dorian, Paul; Clapp-Channing, Nancy E.; Anderson, Jill; Johnson, George; Schron, Eleanor B.; Poole, Jeanne E.; Lee, Kerry L.; Bardy, Gust H.

    2010-01-01

    Background Public access automatic external defibrillators (AEDs) can save lives, but most deaths from out-of-hospital sudden cardiac arrest occur at home. The Home Automatic External Defibrillator Trial (HAT) found no survival advantage for adding a home AED to cardiopulmonary resuscitation (CPR) training for 7001 patients with a prior anterior wall myocardial infarction. Quality of life (QOL) outcomes for both the patient and spouse/companion were secondary endpoints. Methods A subset of 1007 study patients and their spouse/companions was randomly selected for ascertainment of QOL by structured interview at baseline and 12 and 24 months following enrollment. The primary QOL measures were the Medical Outcomes Study 36-Item Short-Form (SF-36) psychological well-being (reflecting anxiety and depression) and vitality (reflecting energy and fatigue) subscales. Results For patients and spouse/companions, the psychological well-being and vitality scales did not differ significantly between those randomly assigned an AED plus CPR training and controls who received CPR training only. None of the other QOL measures collected showed a clinically and statistically significant difference between treatment groups. Patients in the AED group were more likely to report being extremely or quite a bit reassured by their treatment assignment. Spouse/companions in the AED group reported being less often nervous about the possibility of using AED/CPR treatment than those in the CPR group. Conclusions Adding access to a home AED to CPR training did not affect quality of life either for patients with a prior anterior myocardial infarction or their spouse/companion but did provide more reassurance to the patients without increasing anxiety for spouse/companions. PMID:20362722

  10. The automatic component of habit in health behavior: habit as cue-contingent automaticity.

    PubMed

    Orbell, Sheina; Verplanken, Bas

    2010-07-01

    Habit might be usefully characterized as a form of automaticity that involves the association of a cue and a response. Three studies examined habitual automaticity in regard to different aspects of the cue-response relationship characteristic of unhealthy and healthy habits. In each study, habitual automaticity was assessed by the Self-Report Habit Index (SRHI). In Study 1 SRHI scores correlated with attentional bias to smoking cues in a Stroop task. Study 2 examined the ability of a habit cue to elicit an unwanted habit response. In a prospective field study, habitual automaticity in relation to smoking when drinking alcohol in a licensed public house (pub) predicted the likelihood of cigarette-related action slips 2 months later after smoking in pubs had become illegal. In Study 3 experimental group participants formed an implementation intention to floss in response to a specified situational cue. Habitual automaticity of dental flossing was rapidly enhanced compared to controls. The studies provided three different demonstrations of the importance of cues in the automatic operation of habits. Habitual automaticity assessed by the SRHI captured aspects of a habit that go beyond mere frequency or consistency of the behavior.

  11. Validation of Computerized Automatic Calculation of the Sequential Organ Failure Assessment Score

    PubMed Central

    Harrison, Andrew M.; Pickering, Brian W.; Herasevich, Vitaly

    2013-01-01

    Purpose. To validate the use of a computer program for the automatic calculation of the sequential organ failure assessment (SOFA) score, as compared to the gold standard of manual chart review. Materials and Methods. Adult admissions (age > 18 years) to the medical ICU with a length of stay greater than 24 hours were studied in the setting of an academic tertiary referral center. A retrospective cross-sectional analysis was performed using a derivation cohort to compare automatic calculation of the SOFA score to the gold standard of manual chart review. After critical appraisal of sources of disagreement, another analysis was performed using an independent validation cohort. Then, a prospective observational analysis was performed using an implementation of this computer program in AWARE Dashboard, which is an existing real-time patient EMR system for use in the ICU. Results. Good agreement between the manual and automatic SOFA calculations was observed for both the derivation (N=94) and validation (N=268) cohorts: 0.02 ± 2.33 and 0.29 ± 1.75 points, respectively. These results were validated in AWARE (N=60). Conclusion. This EMR-based automatic tool accurately calculates SOFA scores and can facilitate ICU decisions without the need for manual data collection. This tool can also be employed in a real-time electronic environment. PMID:23936639
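
    To make the idea of automatic score computation concrete, the sketch below scores a single SOFA component (coagulation) from a platelet count; the cut-offs are the commonly cited ones and should be verified against the local scoring protocol, and the function name is illustrative rather than taken from the AWARE implementation.

```python
# One SOFA organ sub-score computed from an EMR value; verify thresholds locally.
def sofa_coagulation(platelets_k_per_uL: float) -> int:
    """Platelet count in 10^3/microliter -> SOFA coagulation sub-score (0-4)."""
    if platelets_k_per_uL < 20:
        return 4
    if platelets_k_per_uL < 50:
        return 3
    if platelets_k_per_uL < 100:
        return 2
    if platelets_k_per_uL < 150:
        return 1
    return 0

# The total SOFA score is the sum of six such organ sub-scores (respiration,
# coagulation, liver, cardiovascular, CNS, renal); the automatic tool compares
# that sum against manual chart review.
print(sofa_coagulation(85))   # -> 2
```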

  12. Comparison of Document Index Graph Using TextRank and HITS Weighting Method in Automatic Text Summarization

    NASA Astrophysics Data System (ADS)

    Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.

    2017-01-01

    Automatic summarization systems help readers grasp the core information of a long text quickly by summarizing it automatically. Many summarization systems have already been developed, but several problems remain. This work proposes a summarization method based on a document index graph. The method adapts the PageRank and HITS formulas, originally used to assess web pages, to score the words in the sentences of a text document. The expected outcome is a system that can summarize a single document by combining the document index graph with TextRank and HITS to improve the quality of the automatically generated summary.
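
    As a sketch of the graph-ranking idea only (a toy similarity matrix stands in for the document index graph; this is not the authors' implementation), centrality scores can be computed with a PageRank-style power iteration:

```python
# PageRank-style ranking over a sentence (or word) similarity graph.
import numpy as np

def textrank(similarity, damping=0.85, iters=100):
    n = similarity.shape[0]
    # Column-normalise the weights so each column sums to 1 (transition probabilities);
    # empty columns fall back to a uniform distribution.
    col_sums = similarity.sum(axis=0, keepdims=True)
    M = np.divide(similarity, col_sums, out=np.full_like(similarity, 1.0 / n),
                  where=col_sums != 0)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (M @ r)
    return r

sim = np.array([[0, .3, .1], [.3, 0, .6], [.1, .6, 0]], dtype=float)
print(textrank(sim))   # higher score = more central node for the summary
```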

  13. Quality assessment of protein model-structures based on structural and functional similarities

    PubMed Central

    2012-01-01

    Background Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. The gap between the number of protein structures deposited in the World Wide Protein Data Bank and the number of sequenced proteins constantly broadens. Computational modeling is deemed to be one of the ways to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model a structure from a sequence or from partial structural information, e.g. contact maps. Consequently, biologists can automatically generate a putative 3D structure model of any protein. However, the main issue becomes evaluation of the model quality, which is one of the most important challenges of structural biology. Results GOBA (Gene Ontology-Based Assessment) is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high-quality model is expected to be structurally similar to proteins that are functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using the standardized and hierarchical description of proteins provided by Gene Ontology, combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single model quality assessment method, the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. Conclusions The validation shows that the method is able to discriminate between good and bad model-structures. The best of tested GOBA scores achieved 0.74 and 0.8 as a mean Pearson correlation to the observed quality of models in our CASP8 and CASP9-based validation sets. GOBA also obtained the best

  14. Quality assessment of protein model-structures based on structural and functional similarities.

    PubMed

    Konopka, Bogumil M; Nebel, Jean-Christophe; Kotulska, Malgorzata

    2012-09-21

    Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. The gap between the number of protein structures deposited in the World Wide Protein Data Bank and the number of sequenced proteins constantly broadens. Computational modeling is deemed to be one of the ways to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model a structure from a sequence or from partial structural information, e.g. contact maps. Consequently, biologists can automatically generate a putative 3D structure model of any protein. However, the main issue becomes evaluation of the model quality, which is one of the most important challenges of structural biology. GOBA (Gene Ontology-Based Assessment) is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high-quality model is expected to be structurally similar to proteins that are functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using the standardized and hierarchical description of proteins provided by Gene Ontology, combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single model quality assessment method, the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. The validation shows that the method is able to discriminate between good and bad model-structures. The best of tested GOBA scores achieved 0.74 and 0.8 as a mean Pearson correlation to the observed quality of models in our CASP8 and CASP9-based validation sets. GOBA also obtained the best result for two targets of CASP8, and

  15. Graphonomics, Automaticity and Handwriting Assessment

    ERIC Educational Resources Information Center

    Tucha, Oliver; Tucha, Lara; Lange, Klaus W.

    2008-01-01

    A recent review of handwriting research in "Literacy" concluded that current curricula of handwriting education focus too much on writing style and neatness and neglect the aspect of handwriting automaticity. This conclusion is supported by evidence in the field of graphonomic research, where a range of experiments have been used to investigate…

  16. Automatic Multi-sensor Data Quality Checking and Event Detection for Environmental Sensing

    NASA Astrophysics Data System (ADS)

    LIU, Q.; Zhang, Y.; Zhao, Y.; Gao, D.; Gallaher, D. W.; Lv, Q.; Shang, L.

    2017-12-01

    With the advances in sensing technologies, large-scale environmental sensing infrastructures are pervasively deployed to continuously collect data for various research and application fields, such as air quality study and weather condition monitoring. In such infrastructures, many sensor nodes are distributed in a specific area and each individual sensor node is capable of measuring several parameters (e.g., humidity, temperature, and pressure), providing massive data for natural event detection and analysis. However, due to the dynamics of the ambient environment, sensor data can be contaminated by errors or noise. Thus, data quality is still a primary concern for scientists before drawing any reliable scientific conclusions. To help researchers identify potential data quality issues and detect meaningful natural events, this work proposes a novel algorithm to automatically identify and rank anomalous time windows from multiple sensor data streams. More specifically, (1) the algorithm adaptively learns the characteristics of normal evolving time series and (2) models the spatial-temporal relationship among multiple sensor nodes to infer the anomaly likelihood of a time series window for a particular parameter in a sensor node. Case studies using different data sets are presented and the experimental results demonstrate that the proposed algorithm can effectively identify anomalous time windows, which may result from data quality issues or natural events.
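
    A much-simplified stand-in for the window-ranking idea (a plain z-score of window means rather than the adaptive spatio-temporal model in the paper; the data are synthetic) looks like this:

```python
# Rank time windows of one parameter on one node by how unusual their mean is.
import numpy as np

def rank_anomalous_windows(series, window=24):
    series = np.asarray(series, dtype=float)
    n_win = len(series) // window
    means = series[:n_win * window].reshape(n_win, window).mean(axis=1)
    scores = np.abs((means - means.mean()) / (means.std() + 1e-9))
    return np.argsort(scores)[::-1], scores      # windows, most anomalous first

rng = np.random.default_rng(0)
temperature = rng.normal(20, 1, 24 * 10)         # ten "days" of hourly readings
temperature[24 * 6:24 * 7] += 8                  # injected anomalous day
order, scores = rank_anomalous_windows(temperature)
print(order[:3])                                 # window 6 should rank first
```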

  17. Objective Quality Assessment for Color-to-Gray Image Conversion.

    PubMed

    Ma, Kede; Zhao, Tiesong; Zeng, Kai; Wang, Zhou

    2015-12-01

    Color-to-gray (C2G) image conversion is the process of transforming a color image into a grayscale one. Despite its wide usage in real-world applications, little work has been dedicated to compare the performance of C2G conversion algorithms. Subjective evaluation is reliable but is also inconvenient and time consuming. Here, we make one of the first attempts to develop an objective quality model that automatically predicts the perceived quality of C2G converted images. Inspired by the philosophy of the structural similarity index, we propose a C2G structural similarity (C2G-SSIM) index, which evaluates the luminance, contrast, and structure similarities between the reference color image and the C2G converted image. The three components are then combined depending on image type to yield an overall quality measure. Experimental results show that the proposed C2G-SSIM index has close agreement with subjective rankings and significantly outperforms existing objective quality metrics for C2G conversion. To explore the potentials of C2G-SSIM, we further demonstrate its use in two applications: 1) automatic parameter tuning for C2G conversion algorithms and 2) adaptive fusion of C2G converted images.
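
    The three components named above can be illustrated with a global-statistics sketch; a real C2G-SSIM implementation works on local windows and pools across the image, so this is only a simplified reading of the idea. The constants follow the familiar SSIM convention and the images are synthetic.

```python
# Luminance, contrast and structure similarity between a reference luminance
# image and its color-to-gray conversion, computed from global statistics.
import numpy as np

def c2g_components(reference_luma, converted_gray, L=255.0):
    x = np.asarray(reference_luma, dtype=float).ravel()
    y = np.asarray(converted_gray, dtype=float).ravel()
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    C3 = C2 / 2.0
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = np.mean((x - mx) * (y - my))
    luminance = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)
    contrast = (2 * sx * sy + C2) / (sx ** 2 + sy ** 2 + C2)
    structure = (sxy + C3) / (sx * sy + C3)
    return luminance, contrast, structure

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(float)
gray = np.clip(ref + rng.normal(0, 5, (64, 64)), 0, 255)
print(c2g_components(ref, gray))
```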

  18. Automatic 3D Moment tensor inversions for southern California earthquakes

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Tape, C.; Friberg, P.; Tromp, J.

    2008-12-01

    We present a new source mechanism (moment-tensor and depth) catalog for about 150 recent southern California earthquakes with Mw ≥ 3.5. We carefully select the initial solutions from a few available earthquake catalogs as well as our own preliminary 3D moment tensor inversion results. We pick useful data windows by assessing the quality of fits between the data and synthetics using an automatic windowing package FLEXWIN (Maggi et al 2008). We compute the source Fréchet derivatives of moment-tensor elements and depth for a recent 3D southern California velocity model inverted based upon finite-frequency event kernels calculated by the adjoint methods and a nonlinear conjugate gradient technique with subspace preconditioning (Tape et al 2008). We then invert for the source mechanisms and event depths based upon the techniques introduced by Liu et al 2005. We assess the quality of this new catalog, as well as the other existing ones, by computing the 3D synthetics for the updated 3D southern California model. We also plan to implement the moment-tensor inversion methods to automatically determine the source mechanisms for earthquakes with Mw ≥ 3.5 in southern California.

  19. A data-driven approach for quality assessment of radiologic interpretations.

    PubMed

    Hsu, William; Han, Simon X; Arnold, Corey W; Bui, Alex At; Enzmann, Dieter R

    2016-04-01

    Given the increasing emphasis on delivering high-quality, cost-efficient healthcare, improved methodologies are needed to measure the accuracy and utility of ordered diagnostic examinations in achieving the appropriate diagnosis. Here, we present a data-driven approach for performing automated quality assessment of radiologic interpretations using other clinical information (e.g., pathology) as a reference standard for individual radiologists, subspecialty sections, imaging modalities, and entire departments. Downstream diagnostic conclusions from the electronic medical record are utilized as "truth" to which upstream diagnoses generated by radiology are compared. The described system automatically extracts and compares patient medical data to characterize concordance between clinical sources. Initial results are presented in the context of breast imaging, matching 18 101 radiologic interpretations with 301 pathology diagnoses and achieving a precision and recall of 84% and 92%, respectively. The presented data-driven method highlights the challenges of integrating multiple data sources and the application of information extraction tools to facilitate healthcare quality improvement.
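
    For readers less familiar with the reported metrics, the sketch below shows how precision and recall follow from a confusion matrix; the counts are hypothetical and chosen only to land near the reported 84% and 92%, not taken from the study.

```python
# Precision and recall from hypothetical match counts.
def precision_recall(true_pos: int, false_pos: int, false_neg: int):
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

p, r = precision_recall(true_pos=840, false_pos=160, false_neg=73)
print(f"precision={p:.2f} recall={r:.2f}")
```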

  20. Automatic first-arrival picking based on extended super-virtual interferometry with quality control procedure

    NASA Astrophysics Data System (ADS)

    An, Shengpei; Hu, Tianyue; Liu, Yimou; Peng, Gengxin; Liang, Xianghao

    2017-12-01

    Static correction is a crucial step of seismic data processing for onshore plays, which frequently have complex near-surface conditions. The effectiveness of the static correction depends on an accurate determination of first-arrival traveltimes. However, it is difficult to accurately auto-pick the first arrivals for data with low signal-to-noise ratios (SNR), especially for those measured in areas with a complex near-surface. The technique of super-virtual interferometry (SVI) has the potential to enhance the SNR of first arrivals. In this paper, we develop an extended SVI with (1) the application of reverse correlation to improve SNR enhancement at near offsets, and (2) the use of a multi-domain method to partially overcome the limitation of the current method when insufficient source-receiver combinations are available. Compared to the standard SVI, the SNR enhancement of the extended SVI can be up to 40%. In addition, we propose a quality control procedure, which is based on the statistical characteristics of multichannel recordings of first arrivals. It can auto-correct the mispicks, which might be spurious events generated by the SVI. This procedure is very robust, highly automatic, and can process large amounts of data in batches. Finally, we develop an automatic first-arrival picking method that combines the extended SVI and the quality control procedure. Both the synthetic and the field data examples demonstrate that the proposed method is able to accurately auto-pick first arrivals in seismic traces with low SNR. The quality of the stacked seismic sections obtained with this method is much better than that of sections obtained with the auto-picking method commonly employed in commercial software.

  1. Automated Assessment of the Quality of Depression Websites

    PubMed Central

    Tang, Thanh Tin; Hawking, David; Christensen, Helen

    2005-01-01

    Background Since health information on the World Wide Web is of variable quality, methods are needed to assist consumers to identify health websites containing evidence-based information. Manual assessment tools may assist consumers to evaluate the quality of sites. However, these tools are poorly validated and often impractical. There is a need to develop better consumer tools, and in particular to explore the potential of automated procedures for evaluating the quality of health information on the web. Objective This study (1) describes the development of an automated quality assessment procedure (AQA) designed to automatically rank depression websites according to their evidence-based quality; (2) evaluates the validity of the AQA relative to human rated evidence-based quality scores; and (3) compares the validity of Google PageRank and the AQA as indicators of evidence-based quality. Method The AQA was developed using a quality feedback technique and a set of training websites previously rated manually according to their concordance with statements in the Oxford University Centre for Evidence-Based Mental Health’s guidelines for treating depression. The validation phase involved 30 websites compiled from the DMOZ, Yahoo! and LookSmart Depression Directories by randomly selecting six sites from each of the Google PageRank bands of 0, 1-2, 3-4, 5-6 and 7-8. Evidence-based ratings from two independent raters (based on concordance with the Oxford guidelines) were then compared with scores derived from the automated AQA and Google algorithms. There was no overlap in the websites used in the training and validation phases of the study. Results The correlation between the AQA score and the evidence-based ratings was high and significant (r=0.85, P<.001). Addition of a quadratic component improved the fit, the combined linear and quadratic model explaining 82 percent of the variance. The correlation between Google PageRank and the evidence-based score was lower than

  2. Automatic Assessment of Complex Task Performance in Games and Simulations. CRESST Report 775

    ERIC Educational Resources Information Center

    Iseli, Markus R.; Koenig, Alan D.; Lee, John J.; Wainess, Richard

    2010-01-01

    Assessment of complex task performance is crucial to evaluating personnel in critical job functions such as Navy damage control operations aboard ships. Games and simulations can be instrumental in this process, as they can present a broad range of complex scenarios without involving harm to people or property. However, "automatic"…

  3. Automatic Human Movement Assessment With Switching Linear Dynamic System: Motion Segmentation and Motor Performance.

    PubMed

    de Souza Baptista, Roberto; Bo, Antonio P L; Hayashibe, Mitsuhiro

    2017-06-01

    Performance assessment of human movement is critical in diagnosis and motor-control rehabilitation. Recent developments in portable sensor technology enable clinicians to measure spatiotemporal aspects to aid in the neurological assessment. However, the extraction of quantitative information from such measurements is usually done manually through visual inspection. This paper presents a novel framework for automatic human movement assessment that executes segmentation and motor performance parameter extraction in time-series of measurements from a sequence of human movements. We use the elements of a Switching Linear Dynamic System model as building blocks to translate formal definitions and procedures from human movement analysis. Our approach provides a method for users with no expertise in signal processing to create models for movements using a labeled dataset and later use them for automatic assessment. We validated our framework in preliminary tests involving six healthy adult subjects, who executed common movements from functional tests and rehabilitation exercise sessions such as sit-to-stand and lateral elevation of the arms, and five elderly subjects, two of them with limited mobility, who executed the sit-to-stand movement. The proposed method worked on random motion sequences for the dual purpose of movement segmentation (accuracy of 72%-100%) and motor performance assessment (mean error of 0%-12%).

  4. Comparison of High and Low Density Airborne LIDAR Data for Forest Road Quality Assessment

    NASA Astrophysics Data System (ADS)

    Kiss, K.; Malinen, J.; Tokola, T.

    2016-06-01

    Good quality forest roads are important for forest management. Airborne laser scanning data can help automate road quality detection, thus avoiding field visits. Two different pulse density datasets have been used to assess road quality: high-density airborne laser scanning data from Kiihtelysvaara and low-density data from Tuusniemi, Finland. The field inventory mainly focused on the surface wear condition, structural condition, flatness, roadside vegetation and drying of the road. Observations were divided into poor, satisfactory and good categories based on the current Finnish quality standards used for forest roads. Digital Elevation Models were derived from the laser point cloud, and indices were calculated to determine road quality. The calculated indices assessed the topographic differences on the road surface and road sides. The topographic position index works well in flat terrain only, while the standardized elevation index described the road surface better when the differences are larger. Both indices require at least a 1 metre resolution. High-density data is necessary for analysis of the road surface, and the indices relate mostly to surface wear and flatness. Classification with the high-density data was more precise (31-92%) than with the low-density data (25-40%). However, ditch detection and classification can be carried out using the sparse dataset as well (with a success rate of 69%). The use of airborne laser scanning data can provide quality information on forest roads.
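
    The topographic position index mentioned above has a compact definition: a cell's elevation minus the mean elevation of its neighbourhood. The sketch below illustrates the idea on a synthetic, roughly 1 m resolution road-surface DEM; the kernel size is an illustrative choice, not the study's parameter.

```python
# Topographic position index (TPI) on a DEM raster.
import numpy as np
from scipy.ndimage import uniform_filter

def topographic_position_index(dem, size=3):
    dem = np.asarray(dem, dtype=float)
    neighbourhood_mean = uniform_filter(dem, size=size, mode="nearest")
    return dem - neighbourhood_mean      # >0 bump or ridge, <0 rut or ditch

rng = np.random.default_rng(0)
road_surface = rng.normal(100.0, 0.02, (50, 50))   # nearly flat road, metres
road_surface[25, :] -= 0.15                         # a rut across the surface
tpi = topographic_position_index(road_surface)
print(tpi[25].mean() < 0)                           # the rut shows up as negative TPI
```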

  5. dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs.

    PubMed

    Ma, Kede; Liu, Wentao; Liu, Tongliang; Wang, Zhou; Tao, Dacheng

    2017-05-26

    Objective assessment of image quality is fundamentally important in many image processing tasks. In this work, we focus on learning blind image quality assessment (BIQA) models which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (whose dimension equals the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIP) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation (gMAD) competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL Inferred Quality (dilIQ) index achieves an additional performance gain.

  6. Automatic sample Dewar for MX beam-line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charignon, T.; Tanchon, J.; Trollier, T.

    2014-01-29

    It is very common for crystals of large biological macromolecules to show considerable variation in the quality of their diffraction. In order to increase the number of samples that are tested for diffraction quality before any full data collections at the ESRF*, an automatic sample Dewar has been implemented. The conception and performance of the Dewar are reported in this paper. The automatic sample Dewar has a 240-sample capacity with automatic loading/unloading ports. The storage Dewar is capable of working with robots and can be integrated into a fully automatic MX** beam-line. The samples are positioned in front of the loading/unloading ports with an automatic rotating plate. A view port has been implemented for data-matrix camera reading of each sample loaded in the Dewar. The Dewar is insulated with polyurethane foam that keeps liquid nitrogen consumption below 1.6 L/h, and the static insulation makes vacuum equipment and maintenance unnecessary. This Dewar will be useful for increasing the number of samples tested at synchrotrons.

  7. preAssemble: a tool for automatic sequencer trace data processing.

    PubMed

    Adzhubei, Alexei A; Laerdahl, Jon K; Vlasova, Anna V

    2006-01-17

    Trace or chromatogram files (raw data) are produced by automatic nucleic acid sequencing equipment, or sequencers. Each file contains information which can be interpreted by specialised software to reveal the sequence (base calling). This is done by the sequencer's proprietary software or by publicly available programs. Depending on the size of a sequencing project, the number of trace files can vary from just a few to thousands of files. Sequencing quality assessment on various criteria is important at the stage preceding clustering and contig assembly. Two major publicly available packages, Phred and Staden, are used by preAssemble to perform sequence quality processing. The preAssemble pre-assembly sequence processing pipeline has been developed for small- to large-scale automatic processing of DNA sequencer chromatogram (trace) data. The Staden Package Pregap4 module and the base-calling program Phred are utilized in the pipeline, which produces detailed and self-explanatory output that can be displayed with a web browser. preAssemble can be used successfully with very little previous experience; however, options for parameter tuning are provided for advanced users. preAssemble runs under UNIX and LINUX operating systems. It is available for downloading and will run as stand-alone software. It can also be accessed on the Norwegian Salmon Genome Project web site, where preAssemble jobs can be run on the project server. preAssemble is a tool for quality assessment of sequences generated by automatic sequencing equipment. preAssemble is flexible, since both interactive jobs on the preAssemble server and the stand-alone downloadable version are available. Virtually no previous experience is necessary to run a default preAssemble job; on the other hand, options for parameter tuning are provided. Consequently, preAssemble can be used as efficiently for just a few trace files as for large-scale sequence processing.

  8. Saliency image of feature building for image quality assessment

    NASA Astrophysics Data System (ADS)

    Ju, Xinuo; Sun, Jiyin; Wang, Peng

    2011-11-01

    The purpose and method of image quality assessment are quite different for automatic target recognition (ATR) than for traditional applications. Local invariant feature detectors, mainly corner detectors, blob detectors and region detectors, are widely applied for ATR. In this paper, a feature saliency model is proposed to evaluate the feasibility of ATR. The first step consists of computing first-order derivatives in the horizontal and vertical orientations and computing DoG maps at different scales. Next, feature saliency images are built from the auto-correlation matrix at each scale. Then, the feature saliency images of the different scales are amalgamated. Experiments were performed on a large test set including infrared and optical images, and the results showed that the salient regions computed by this model were consistent with the feature regions computed by most local invariant feature extraction algorithms.

  9. United3D: a protein model quality assessment program that uses two consensus based methods.

    PubMed

    Terashi, Genki; Oosawa, Makoto; Nakamura, Yuuki; Kanou, Kazuhiko; Takeda-Shitaka, Mayuko

    2012-01-01

    In protein structure prediction, such as template-based modeling and free modeling (ab initio modeling), the step that assesses the quality of protein models is very important. We have developed a model quality assessment (QA) program United3D that uses an optimized clustering method and a simple Cα atom contact-based potential. United3D automatically estimates the quality scores (Qscore) of predicted protein models that are highly correlated with the actual quality (GDT_TS). The performance of United3D was tested in the ninth Critical Assessment of protein Structure Prediction (CASP9) experiment. In CASP9, United3D showed the lowest average loss of GDT_TS (5.3) among the QA methods participated in CASP9. This result indicates that the performance of United3D to identify the high quality models from the models predicted by CASP9 servers on 116 targets was best among the QA methods that were tested in CASP9. United3D also produced high average Pearson correlation coefficients (0.93) and acceptable Kendall rank correlation coefficients (0.68) between the Qscore and GDT_TS. This performance was competitive with the other top ranked QA methods that were tested in CASP9. These results indicate that United3D is a useful tool for selecting high quality models from many candidate model structures provided by various modeling methods. United3D will improve the accuracy of protein structure prediction.

  10. Automatic information timeliness assessment of diabetes web sites by evidence based medicine.

    PubMed

    Sağlam, Rahime Belen; Taşkaya Temizel, Tuğba

    2014-11-01

    Studies in the health domain have shown that health websites provide imperfect information and give recommendations which are not up to date with the recent literature, even when their last modified dates are quite recent. In this paper, we propose a framework which assesses the timeliness of the content of health websites automatically using evidence-based medicine. Our aim is to assess the accordance of website contents with the current literature and their information timeliness, disregarding the update time stated on the websites. The proposed method is based on automatic term recognition, relevance feedback and information retrieval techniques in order to generate time-aware structured queries. We tested the framework on diabetes health web sites which were archived between 2006 and 2013 by Archive-it, using the American Diabetes Association's (ADA) guidelines. The results showed that the proposed framework achieves 65% and 77% accuracy in detecting the timeliness of the web content according to years and pre-determined time intervals, respectively. Information seekers and web site owners may benefit from the proposed framework in finding relevant and up-to-date diabetes web sites.

  11. Effects of Multisensory Environments on Stereotyped Behaviours Assessed as Maintained by Automatic Reinforcement

    ERIC Educational Resources Information Center

    Hill, Lindsay; Trusler, Karen; Furniss, Frederick; Lancioni, Giulio

    2012-01-01

    Background: The aim of the present study was to evaluate the effects of the sensory equipment provided in a multi-sensory environment (MSE) and the level of social contact provided on levels of stereotyped behaviours assessed as being maintained by automatic reinforcement. Method: Stereotyped and engaged behaviours of two young people with severe…

  12. Using Psychometric Technology in Educational Assessment: The Case of a Schema-Based Isomorphic Approach to the Automatic Generation of Quantitative Reasoning Items

    ERIC Educational Resources Information Center

    Arendasy, Martin; Sommer, Markus

    2007-01-01

    This article deals with the investigation of the psychometric quality and constructs validity of algebra word problems generated by means of a schema-based version of the automatic min-max approach. Based on review of the research literature in algebra word problem solving and automatic item generation this new approach is introduced as a…

  13. Automatic welding of stainless steel tubing

    NASA Technical Reports Server (NTRS)

    Clautice, W. E.

    1978-01-01

    The use of automatic welding for making girth welds in stainless steel tubing was investigated as well as the reduction in fabrication costs resulting from the elimination of radiographic inspection. Test methodology, materials, and techniques are discussed, and data sheets for individual tests are included. Process variables studied include welding amperes, revolutions per minute, and shielding gas flow. Strip chart recordings, as a definitive method of insuring weld quality, are studied. Test results, determined by both radiographic and visual inspection, are presented and indicate that once optimum welding procedures for specific sizes of tubing are established, and the welding machine operations are certified, then the automatic tube welding process produces good quality welds repeatedly, with a high degree of reliability. Revised specifications for welding tubing using the automatic process and weld visual inspection requirements at the Kennedy Space Center are enumerated.

  14. Automatic testing and assessment of neuroanatomy using a digital brain atlas: method and development of computer- and mobile-based applications.

    PubMed

    Nowinski, Wieslaw L; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G; Marchenko, Yevgen; Volkau, Ihar

    2009-10-01

    Preparation of tests and student's assessment by the instructor are time consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to Terminologia Anatomica. Because the cerebral models are fully segmented and labeled, our approach enables automatic and random atlas-derived generation of questions to test location and naming of cerebral structures. This is done in four steps: test individualization by the instructor, test taking by the students at their convenience, automatic student assessment by the application, and communication of the individual assessment to the instructor. A computer-based application with an interactive 3D atlas and a preliminary mobile-based application were developed to realize this approach. The application works in two test modes: instructor and student. In the instructor mode, the instructor customizes the test by setting the scope of testing and student performance criteria, which takes a few seconds. In the student mode, the student is tested and automatically assessed. Self-testing is also feasible at any time and pace. Our approach is automatic both with respect to test generation and student assessment. It is also objective, rapid, and customizable. We believe that this approach is novel from computer-based, mobile-based, and atlas-assisted standpoints.

  15. Automatic color preference correction for color reproduction

    NASA Astrophysics Data System (ADS)

    Tsukada, Masato; Funayama, Chisato; Tajima, Johji

    2000-12-01

    The reproduction of natural objects in color images has attracted a great deal of attention. Reproducing more pleasing colors for natural objects is one way to improve perceived image quality. We developed an automatic color correction method to maintain preferred color reproduction for three significant categories: facial skin color, green grass and blue sky. In this method, a representative color in an object area to be corrected is automatically extracted from an input image, and a set of color correction parameters is selected depending on the representative color. The improvement in image quality for reproductions of natural images was more than 93 percent in subjective experiments. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.

  16. Automatic anterior chamber angle assessment for HD-OCT images.

    PubMed

    Tian, Jing; Marziliano, Pina; Baskaran, Mani; Wong, Hong-Tym; Aung, Tin

    2011-11-01

    Angle-closure glaucoma is a major blinding eye disease and could be detected by measuring the anterior chamber angle in the human eyes. High-definition OCT (Cirrus HD-OCT) is an emerging noninvasive, high-speed, and high-resolution imaging modality for the anterior segment of the eye. Here, we propose a novel algorithm which automatically detects a new landmark, Schwalbe's line, and measures the anterior chamber angle in the HD-OCT images. The distortion caused by refraction is corrected by dewarping the HD-OCT images, and three biometric measurements are defined to quantitatively assess the anterior chamber angle. The proposed algorithm was tested on 40 HD-OCT images of the eye and provided accurate measurements in about 1 second.
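
    As a hedged geometric sketch of the final measurement step only: once the angle recess (apex) and two landmark points (for example one on Schwalbe's line and one on the iris) have been located in the dewarped image, the anterior chamber angle can be reported as the angle between the two rays from the apex. The point coordinates below are made up, and this is not the authors' biometric definition.

```python
# Angle between two rays sharing an apex, in degrees.
import numpy as np

def chamber_angle_deg(apex, point_a, point_b):
    u = np.asarray(point_a, float) - np.asarray(apex, float)
    v = np.asarray(point_b, float) - np.asarray(apex, float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

print(round(chamber_angle_deg(apex=(120, 300), point_a=(220, 260),
                              point_b=(215, 330)), 1))   # hypothetical pixel coords
```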

  17. The Northeast Stream Quality Assessment

    USGS Publications Warehouse

    Van Metre, Peter C.; Riva-Murray, Karen; Coles, James F.

    2016-04-22

    In 2016, the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) is assessing stream quality in the northeastern United States. The goal of the Northeast Stream Quality Assessment (NESQA) is to assess the quality of streams in the region by characterizing multiple water-quality factors that are stressors to aquatic life and evaluating the relation between these stressors and biological communities. The focus of NESQA in 2016 will be on the effects of urbanization and agriculture on stream quality in all or parts of eight states: Connecticut, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont.Findings will provide the public and policymakers with information about the most critical factors affecting stream quality, thus providing insights about possible approaches to protect the health of streams in the region. The NESQA study will be the fourth regional study conducted as part of NAWQA and will be of similar design and scope to the first three, in the Midwest in 2013, the Southeast in 2014, and the Pacific Northwest in 2015 (http://txpub.usgs.gov/RSQA/).

  18. Automatic Welding of Stainless Steel Tubing

    NASA Technical Reports Server (NTRS)

    Clautice, W. E.

    1978-01-01

    To determine if the use of automatic welding would allow reduction of the radiographic inspection requirement, and thereby reduce fabrication costs, a series of welding tests was performed. In these tests an automatic welder was used on stainless steel tubing of 1/2, 3/4, and 1/2 inch diameter size. The optimum parameters were investigated to determine how much variation from optimum in machine settings could be tolerated and still result in a good quality weld. The process variables studied were the welding amperes, the revolutions per minute as a function of the circumferential weld travel speed, and the shielding gas flow. The investigation showed that close control of process variables in conjunction with a thorough visual inspection of welds can be relied upon as an acceptable quality assurance procedure, thus permitting the radiographic inspection to be reduced by a large percentage when using the automatic process.

  19. Quality assessment of urban environment

    NASA Astrophysics Data System (ADS)

    Ovsiannikova, T. Y.; Nikolaenko, M. N.

    2015-01-01

    This paper addresses the applicability of quality management principles to construction products. The authors propose extending the boundaries of quality management in construction by transferring its principles to urban systems, higher-level economic systems whose qualitative characteristics are substantially defined by the quality of construction products. Buildings and structures form the spatial and material basis of cities and the most important component of the living sphere - the urban environment. The authors justify the need to assess urban environment quality as an important factor of social welfare and quality of life in urban areas, and suggest a definition of the term "urban environment". The methodology for assessing urban environment quality is based on an integrated approach that includes a systems analysis of all factors and the application of both quantitative assessment methods (calculation of particular and integrated indicators) and qualitative methods (expert estimates and surveys). The authors propose a system of indicators characterizing the quality of the urban environment; these indicators fall into four classes, and the methodology for defining them is shown. The paper presents the results of urban environment quality assessment for several Siberian regions and a comparative analysis of these results.

  20. Concepts of Quality in Student Assessment.

    ERIC Educational Resources Information Center

    Harlen, Wynne

    This paper gives an overview of the methods of moderation, or quality assurance and quality control, as they may be more widely known, that are used to enhance the quality of student assessment. The discussion is based on the educational systems of the United Kingdom but is applicable to assessment in other countries. Quality in assessment is seen…

  1. Semi-automatic assessment of skin capillary density: proof of principle and validation.

    PubMed

    Gronenschild, E H B M; Muris, D M J; Schram, M T; Karaca, U; Stehouwer, C D A; Houben, A J H M

    2013-11-01

    Skin capillary density and recruitment have been proven to be relevant measures of microvascular function. Unfortunately, the assessment of skin capillary density from movie files is very time-consuming, since this is done manually. This impedes the use of this technique in large-scale studies. We aimed to develop a (semi-) automated assessment of skin capillary density. CapiAna (Capillary Analysis) is a newly developed semi-automatic image analysis application. The technique involves four steps: 1) movement correction, 2) selection of the frame range and positioning of the region of interest (ROI), 3) automatic detection of capillaries, and 4) manual correction of detected capillaries. To gain insight into the performance of the technique, skin capillary density was measured in twenty participants (ten women; mean age 56.2 [42-72] years). To investigate the agreement between CapiAna and the classic manual counting procedure, we used weighted Deming regression and Bland-Altman analyses. In addition, intra- and inter-observer coefficients of variation (CVs), and differences in analysis time were assessed. We found a good agreement between CapiAna and the classic manual method, with a Pearson's correlation coefficient (r) of 0.95 (P<0.001) and a Deming regression coefficient of 1.01 (95% CI: 0.91; 1.10). In addition, we found no significant differences between the two methods, with an intercept of the Deming regression of 1.75 (-6.04; 9.54), while the Bland-Altman analysis showed a mean difference (bias) of 2.0 (-13.5; 18.4) capillaries/mm^2. The intra- and inter-observer CVs of CapiAna were 2.5% and 5.6% respectively, while for the classic manual counting procedure these were 3.2% and 7.2%, respectively. Finally, the analysis time for CapiAna ranged between 25 and 35 min versus 80 and 95 min for the manual counting procedure. We have developed a semi-automatic image analysis application (CapiAna) for the assessment of skin capillary density, which agrees well with the
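
    The agreement statistics mentioned above can be reproduced in a few lines; the sketch below computes the Bland-Altman bias and 95% limits of agreement for two measurement series, using made-up capillary-density values purely for illustration.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two measurement methods."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return {"bias": bias,
            "lower_loa": bias - 1.96 * sd,
            "upper_loa": bias + 1.96 * sd}

# Hypothetical capillary densities (capillaries/mm^2) from CapiAna and manual counting.
capiana = [52, 61, 48, 70, 55, 63]
manual  = [50, 64, 47, 68, 57, 60]
print(bland_altman(capiana, manual))
```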

  2. The California stream quality assessment

    USGS Publications Warehouse

    Van Metre, Peter C.; Egler, Amanda L.; May, Jason T.

    2017-03-06

    In 2017, the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) project is assessing stream quality in coastal California, United States. The USGS California Stream Quality Assessment (CSQA) will sample streams over most of the Central California Foothills and Coastal Mountains ecoregion (modified from Griffith and others, 2016), where rapid urban growth and intensive agriculture in the larger river valleys are raising concerns that stream health is being degraded. Findings will provide the public and policy-makers with information regarding which human and natural factors are the most critical in affecting stream quality and, thus, provide insights about possible approaches to protect the health of streams in the region.

  3. Mobile sailing robot for automatic estimation of fish density and monitoring water quality

    PubMed Central

    2013-01-01

    Introduction The paper presents the methodology and the algorithm developed to analyze sonar images focused on fish detection in small water bodies and measurement of their parameters: volume, depth and GPS location. The final results are stored in a table and can be exported to any numerical environment for further analysis. Material and method The measurement method for estimating the number of fish using the automatic robot is based on a sequential calculation of the number of occurrences of fish on the set trajectory. The data analysis from the sonar concerned automatic recognition of fish using the methods of image analysis and processing. Results An image analysis algorithm, a mobile robot with its control in the 2.4 GHz band, and full cryptographic communication with the data archiving station were developed as part of this study. For the three model fish ponds where verification of fish catches was carried out (548, 171 and 226 individuals), the measurement error of the described method did not exceed 8%. Summary The robot, together with the developed software, supports remote operation in a variety of harsh weather and environmental conditions, is fully automated, and can be remotely controlled via the Internet. The designed system enables spatial location of fish (GPS coordinates and depth). The purpose of the robot is a non-invasive measurement of the number of fish in water reservoirs and a measurement of the quality of drinking water consumed by humans, especially in situations where local sources of pollution could have a significant impact on the quality of water collected for water treatment and where access to these places is difficult. The robot, systematically used and equipped with the appropriate sensors, can be part of an early warning system against pollution of water used by humans (drinking water, natural swimming pools) which can be dangerous for their health. PMID:23815984

  4. Mobile sailing robot for automatic estimation of fish density and monitoring water quality.

    PubMed

    Koprowski, Robert; Wróbel, Zygmunt; Kleszcz, Agnieszka; Wilczyński, Sławomir; Woźnica, Andrzej; Łozowski, Bartosz; Pilarczyk, Maciej; Karczewski, Jerzy; Migula, Paweł

    2013-07-01

    The paper presents the methodology and the algorithm developed to analyze sonar images focused on fish detection in small water bodies and measurement of their parameters: volume, depth and GPS location. The final results are stored in a table and can be exported to any numerical environment for further analysis. The measurement method for estimating the number of fish using the automatic robot is based on a sequential calculation of the number of occurrences of fish on the set trajectory. The data analysis from the sonar concerned automatic recognition of fish using the methods of image analysis and processing. An image analysis algorithm, a mobile robot with its control in the 2.4 GHz band, and full cryptographic communication with the data archiving station were developed as part of this study. For the three model fish ponds where verification of fish catches was carried out (548, 171 and 226 individuals), the measurement error of the described method did not exceed 8%. The robot, together with the developed software, supports remote operation in a variety of harsh weather and environmental conditions, is fully automated, and can be remotely controlled via the Internet. The designed system enables spatial location of fish (GPS coordinates and depth). The purpose of the robot is a non-invasive measurement of the number of fish in water reservoirs and a measurement of the quality of drinking water consumed by humans, especially in situations where local sources of pollution could have a significant impact on the quality of water collected for water treatment and where access to these places is difficult. The robot, systematically used and equipped with the appropriate sensors, can be part of an early warning system against pollution of water used by humans (drinking water, natural swimming pools) which can be dangerous for their health.

  5. Mimicry and automatic imitation are not correlated

    PubMed Central

    van Den Bossche, Sofie; Cracco, Emiel; Bardi, Lara; Rigoni, Davide; Brass, Marcel

    2017-01-01

    It is widely known that individuals have a tendency to imitate each other. However, different psychological disciplines assess imitation in different manners. While social psychologists assess mimicry by means of action observation, cognitive psychologists assess automatic imitation with reaction time based measures on a trial-by-trial basis. Although these methods differ in crucial methodological aspects, both phenomena are assumed to rely on similar underlying mechanisms. This raises the fundamental question whether mimicry and automatic imitation are actually correlated. In the present research we assessed both phenomena and did not find a meaningful correlation. Moreover, personality traits such as empathy, autism traits, and traits related to self- versus other-focus did not correlate with mimicry or automatic imitation either. Theoretical implications are discussed. PMID:28877197

  6. Improving CCTA-based lesions' hemodynamic significance assessment by accounting for partial volume modeling in automatic coronary lumen segmentation.

    PubMed

    Freiman, Moti; Nickisch, Hannes; Prevrhal, Sven; Schmitt, Holger; Vembar, Mani; Maurovich-Horvat, Pál; Donnelly, Patrick; Goshen, Liran

    2017-03-01

    The goal of this study was to assess the potential added benefit of accounting for partial volume effects (PVE) in an automatic coronary lumen segmentation algorithm that is used to determine the hemodynamic significance of a coronary artery stenosis from coronary computed tomography angiography (CCTA). Two sets of data were used in our work: (a) multivendor CCTA datasets of 18 subjects from the MICCAI 2012 challenge with automatically generated centerlines and 3 reference segmentations of 78 coronary segments and (b) additional CCTA datasets of 97 subjects with 132 coronary lesions that had invasive reference standard FFR measurements. We extracted the coronary artery centerlines for the 97 datasets by an automated software program followed by manual correction if required. An automatic machine-learning-based algorithm segmented the coronary tree with and without accounting for the PVE. We obtained CCTA-based FFR measurements using a flow simulation in the coronary trees that were generated by the automatic algorithm with and without accounting for PVE. We assessed the potential added value of PVE integration as a part of the automatic coronary lumen segmentation algorithm by means of segmentation accuracy using the MICCAI 2012 challenge framework and by means of flow simulation overall accuracy, sensitivity, specificity, negative and positive predictive values, and the receiver operating characteristic (ROC) area under the curve. We also evaluated the potential benefit of accounting for PVE in automatic segmentation for flow simulation for lesions that were diagnosed as obstructive based on CCTA which could have indicated a need for an invasive exam and revascularization. Our segmentation algorithm improves the maximal surface distance error by ~39% compared to a previously published method on the 18 datasets from the MICCAI 2012 challenge with comparable Dice and mean surface distance. Results with and without accounting for PVE were comparable. In contrast
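
    For readers unfamiliar with the reported flow-simulation accuracy metrics, the sketch below computes sensitivity, specificity, predictive values, and a rank-based ROC AUC from hypothetical CT-FFR values thresholded at 0.80; the numbers and threshold are illustrative assumptions, not data from the study.

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV and NPV from binary predictions."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_pred & y_true)
    tn = np.sum(~y_pred & ~y_true)
    fp = np.sum(y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

def roc_auc(y_true, scores):
    """ROC area under the curve via the rank-sum (Mann-Whitney U) identity."""
    y_true, scores = np.asarray(y_true, bool), np.asarray(scores, float)
    ranks = scores.argsort().argsort() + 1        # 1-based ranks (no ties here)
    n_pos, n_neg = y_true.sum(), (~y_true).sum()
    return (ranks[y_true].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical example: lesions with invasive FFR <= 0.8 are "significant".
truth  = [1, 0, 1, 1, 0, 0, 1, 0]
ct_ffr = [0.72, 0.88, 0.79, 0.81, 0.90, 0.85, 0.70, 0.83]
print(diagnostic_metrics(truth, [v <= 0.80 for v in ct_ffr]))
print(roc_auc(truth, [-v for v in ct_ffr]))   # lower CT-FFR -> higher risk score
```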

  7. Image quality and radiation reduction of 320-row area detector CT coronary angiography with optimal tube voltage selection and an automatic exposure control system: comparison with body mass index-adapted protocol.

    PubMed

    Lim, Jiyeon; Park, Eun-Ah; Lee, Whal; Shim, Hackjoon; Chung, Jin Wook

    2015-06-01

    To assess the image quality and radiation exposure of 320-row area detector computed tomography (320-ADCT) coronary angiography with optimal tube voltage selection with the guidance of an automatic exposure control system in comparison with a body mass index (BMI)-adapted protocol. Twenty-two patients (study group) underwent 320-ADCT coronary angiography using an automatic exposure control system with the target standard deviation value of 33 as the image quality index and the lowest possible tube voltage. For comparison, a sex- and BMI-matched group (control group, n = 22) using a BMI-adapted protocol was established. Images of both groups were reconstructed by an iterative reconstruction algorithm. For objective evaluation of the image quality, image noise, vessel density, signal to noise ratio (SNR), and contrast to noise ratio (CNR) were measured. Two blinded readers then subjectively graded the image quality using a four-point scale (1: nondiagnostic to 4: excellent). Radiation exposure was also measured. Although the study group tended to show higher image noise (14.1 ± 3.6 vs. 9.3 ± 2.2 HU, P = 0.111) and higher vessel density (665.5 ± 161 vs. 498 ± 143 HU, P = 0.430) than the control group, the differences were not significant. There was no significant difference between the two groups for SNR (52.5 ± 19.2 vs. 60.6 ± 21.8, P = 0.729), CNR (57.0 ± 19.8 vs. 67.8 ± 23.3, P = 0.531), or subjective image quality scores (3.47 ± 0.55 vs. 3.59 ± 0.56, P = 0.960). However, radiation exposure was significantly reduced by 42 % in the study group (1.9 ± 0.8 vs. 3.6 ± 0.4 mSv, P = 0.003). Optimal tube voltage selection with the guidance of an automatic exposure control system in 320-ADCT coronary angiography allows substantial radiation reduction without significant impairment of image quality, compared to the results obtained using a BMI-based protocol.
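
    The objective image-quality indices used above (noise, SNR, CNR) follow standard ROI-based definitions; a minimal sketch with simulated ROI samples is shown below, assuming noise is taken as the standard deviation of a homogeneous background ROI.

```python
import numpy as np

def image_quality_metrics(vessel_roi, background_roi):
    """Objective CT image-quality indices from two regions of interest (HU)."""
    vessel = np.asarray(vessel_roi, float)
    bg = np.asarray(background_roi, float)
    noise = bg.std(ddof=1)                      # image noise: SD of a homogeneous ROI
    snr = vessel.mean() / noise                 # signal-to-noise ratio
    cnr = (vessel.mean() - bg.mean()) / noise   # contrast-to-noise ratio
    return {"noise": noise, "snr": snr, "cnr": cnr}

# Hypothetical ROI samples (Hounsfield units) from a coronary lumen and adjacent fat.
rng = np.random.default_rng(0)
vessel = rng.normal(650, 14, size=200)
fat = rng.normal(-80, 14, size=200)
print(image_quality_metrics(vessel, fat))
```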

  8. A Comparison between Two Automatic Assessment Approaches for Programming: An Empirical Study on MOOCs

    ERIC Educational Resources Information Center

    Bey, Anis; Jermann, Patrick; Dillenbourg, Pierre

    2018-01-01

    Computer-graders have been in regular use in the context of MOOCs (Massive Open Online Courses). The automatic grading of programs presents an opportunity to assess and provide tailored feedback to large classes, while featuring at the same time a number of benefits like: immediate feedback, unlimited submissions, as well as low cost of feedback.…

  9. From assessment to improvement of elderly care in general practice using decision support to increase adherence to ACOVE quality indicators: study protocol for randomized control trial

    PubMed Central

    2014-01-01

    Background Previous efforts such as Assessing Care of Vulnerable Elders (ACOVE) provide quality indicators for assessing the care of elderly patients, but thus far little has been done to leverage this knowledge to improve care for these patients. We describe a clinical decision support system to improve general practitioner (GP) adherence to ACOVE quality indicators and a protocol for investigating impact on GPs’ adherence to the rules. Design We propose two randomized controlled trials among a group of Dutch GP teams on adherence to ACOVE quality indicators. In both trials a clinical decision support system provides un-intrusive feedback appearing as a color-coded, dynamically updated, list of items needing attention. The first trial pertains to real-time automatically verifiable rules. The second trial concerns non-automatically verifiable rules (adherence cannot be established by the clinical decision support system itself, but the GPs report whether they will adhere to the rules). In both trials we will randomize teams of GPs caring for the same patients into two groups, A and B. For the automatically verifiable rules, group A GPs receive support only for a specific inter-related subset of rules, and group B GPs receive support only for the remainder of the rules. For non-automatically verifiable rules, group A GPs receive feedback framed as actions with positive consequences, and group B GPs receive feedback framed as inaction with negative consequences. GPs indicate whether they adhere to non-automatically verifiable rules. In both trials, the main outcome measure is mean adherence, automatically derived or self-reported, to the rules. Discussion We relied on active end-user involvement in selecting the rules to support, and on a model for providing feedback displayed as color-coded real-time messages concerning the patient visiting the GP at that time, without interrupting the GP’s workflow with pop-ups. While these aspects are believed to increase

  10. Adding Automatic Evaluation to Interactive Virtual Labs

    ERIC Educational Resources Information Center

    Farias, Gonzalo; Muñoz de la Peña, David; Gómez-Estern, Fabio; De la Torre, Luis; Sánchez, Carlos; Dormido, Sebastián

    2016-01-01

    Automatic evaluation is a challenging field that has been addressed by the academic community in order to reduce the assessment workload. In this work we present a new element for the authoring tool Easy Java Simulations (EJS). This element, which is named automatic evaluation element (AEE), provides automatic evaluation to virtual and remote…

  11. Automatic Control of the Concrete Mixture Homogeneity in Cycling Mixers

    NASA Astrophysics Data System (ADS)

    Tikhonov, Anatoly Fedorovich; Drozdov, Anatoly

    2018-03-01

    The article describes the factors affecting concrete mixture quality related to the moisture content of aggregates, since the effectiveness of concrete mixture production is largely determined by the availability of quality management tools at all stages of the technological process. It is established that unaccounted moisture in the aggregates adversely affects the homogeneity of the concrete mixture and, accordingly, the strength of building structures. A new control method and an automatic control system for concrete mixture homogeneity during the mixing of components are proposed, in which the kneading-and-mixing machinery is governed by an automatic control system with operational control of homogeneity. Theoretical underpinnings of homogeneity control are presented, which relate homogeneity to changes in the frequency of vibrodynamic vibrations of the mixer body. The structure of the technical means of the automatic control system for regulating the supply of water is determined depending on the change in concrete mixture homogeneity during continuous mixing of components. The following technical means for establishing automatic control have been chosen: vibro-acoustic sensors, remote terminal units, electropneumatic control actuators, etc. To characterize the quality of automatic control, a block diagram with transfer functions is proposed that describes the operation of the automatic control system (ACS) in the transient dynamic mode.

  12. Automatic Assessment and Reduction of Noise using Edge Pattern Analysis in Non-Linear Image Enhancement

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.; Hines, Glenn D.

    2004-01-01

    Noise is the primary visibility limit in the process of non-linear image enhancement, and is no longer a statistically stable additive noise in the post-enhancement image. Therefore novel approaches are needed to both assess and reduce spatially variable noise at this stage in overall image processing. Here we will examine the use of edge pattern analysis both for automatic assessment of spatially variable noise and as a foundation for new noise reduction methods.

  13. Monitoring River Water Levels from Space: Quality Assessment of 20 Years of Satellite Altimetry Data

    NASA Astrophysics Data System (ADS)

    Bercher, Nicolas; Kosuth, Pascal

    2013-09-01

    This paper presents the results of 20 years of validation of altimetry data for the monitoring of river water levels using a standardized method. The method was initially developed by Cemagref (2006-2011, [5, 6, 3]), now Irstea, and its implementation is now pursued at LEGOS. Our starting point was: "what if someone wants to use satellite measurements of river water levels?" The obvious question that comes to mind is "what is the quality of the data?" Moreover, there is also a need - and a demand from data producers - to monitor product quality in a standardized fashion. We addressed these questions and developed a method to assess the quality of so-called "Alti-Hydro products". The method was implemented for the following Alti-Hydro products (those marked * are automatically derived from an L2 product): AVISO* (Topex/Poseidon, Jason-2), CASH project (Topex/Poseidon), HydroWeb (Topex/Poseidon, ENVISAT), River & Lake Hydrology (ERS-2, ENVISAT) and PISTACH* (Jason-2).

  14. Engineering studies related to Skylab program. [assessment of automatic gain control data

    NASA Technical Reports Server (NTRS)

    Hayne, G. S.

    1973-01-01

    The relationship between the S-193 Automatic Gain Control data and the magnitude of received signal power was studied in order to characterize performance parameters for Skylab equipment. The r-factor was used for the assessment; it is defined to be less than unity and is a function of off-nadir angle, ocean surface roughness, and receiver signal-to-noise ratio. A digital computer simulation was also used to assess the effect of additive receiver, or white, noise. The system model for the digital simulation is described, along with the intermediate frequency and video impulse response functions used, details of the input waveforms, and results to date. Specific discussion of the digital computer programs used is also provided.

  15. The microbiological quality of pasteurized milk sold by automatic vending machines.

    PubMed

    Angelidis, A S; Tsiota, S; Pexara, A; Govaris, A

    2016-06-01

    The microbiological quality of pasteurized milk samples (n = 39) collected during 13 weekly intervals from three automatic vending machines (AVM) in Greece was investigated. Microbiological counts (total aerobic (TAC), total psychrotrophic (TPC), Enterobacteriaceae (EC), and psychrotrophic aerobic bacterial spore counts (PABSC)) were obtained at the time of sampling and at the end of shelf-life (3 days) after storage of the samples at 4 or 8°C. TAC were found to be below the 10^7 CFU ml^-1 limit of pasteurized milk spoilage both during sampling as well as when milk samples were stored at either storage temperature for 3 days. Enterobacteriaceae populations were below 1 CFU ml^-1 in 69·2% of the samples tested at the time of sampling, whereas the remaining samples contained low numbers, typically less than 10 CFU ml^-1. All samples tested negative for the presence of Listeria monocytogenes. Analogous microbiological data were also obtained by sampling and testing prepackaged, retail samples of pasteurized milk from two dairy companies in Greece (n = 26). From a microbiological standpoint, the data indicate that the AVM milk samples meet the quality standards of pasteurized milk. However, the prepackaged, retail milk samples yielded better results in terms of TAC, TPC and EC, compared to the AVM samples at the end of shelf-life. Recently, Greek dairy farmers organized in cooperatives launched the sale of pasteurized milk via AVM and this study reports on the microbiological quality of this product. The data show that AVM milk is sold at proper refrigeration temperatures and meets the quality standards of pasteurized milk throughout the manufacturer's specified shelf-life. However, based on the microbiological indicators tested, the keeping quality of the tested prepackaged, retail samples of pasteurized milk at the end of shelf-life upon storage under suboptimal refrigeration temperature (8°C) was better.

  16. Grinding Parts For Automatic Welding

    NASA Technical Reports Server (NTRS)

    Burley, Richard K.; Hoult, William S.

    1989-01-01

    Rollers guide grinding tool along prospective welding path. Skatelike fixture holds rotary grinder or file for machining large-diameter rings or ring segments in preparation for welding. Operator grasps handles to push rolling fixture along part. Rollers maintain precise dimensional relationship so grinding wheel cuts precise depth. Fixture-mounted grinder machines surface to quality sufficient for automatic welding; manual welding with attendant variations and distortion not necessary. Developed to enable automatic welding of parts, manual welding of which resulted in weld bead permeated with microscopic fissures.

  17. SU-D-BRF-03: Improvement of TomoTherapy Megavoltage Topogram Image Quality for Automatic Registration During Patient Localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholey, J; White, B; Qi, S

    2014-06-01

    Purpose: To improve the quality of mega-voltage orthogonal scout images (MV topograms) for a fast and low-dose alternative technique for patient localization on the TomoTherapy HiART system. Methods: Digitally reconstructed radiographs (DRR) of anthropomorphic head and pelvis phantoms were synthesized from kVCT under TomoTherapy geometry (kV-DRR). Lateral (LAT) and anterior-posterior (AP) aligned topograms were acquired with couch speeds of 1cm/s, 2cm/s, and 3cm/s. The phantoms were rigidly translated in all spatial directions with known offsets in increments of 5mm, 10mm, and 15mm to simulate daily positioning errors. The contrast of the MV topograms was automatically adjusted based on the image intensity characteristics. A low-pass fast Fourier transform filter removed high-frequency noise and a Wiener filter reduced stochastic noise caused by scattered radiation to the detector array. An intensity-based image registration algorithm was used to register the MV topograms to a corresponding kV-DRR by minimizing the mean square error between corresponding pixel intensities. The registration accuracy was assessed by comparing the normalized cross correlation coefficients (NCC) between the registered topograms and the kV-DRR. The applied phantom offsets were determined by registering the MV topograms with the kV-DRR and recovering the spatial translation of the MV topograms. Results: The automatic registration technique provided millimeter accuracy and was robust for the deformed MV topograms for three tested couch speeds. The lowest average NCC for all AP and LAT MV topograms was 0.96 for the head phantom and 0.93 for the pelvis phantom. The offsets were recovered to within 1.6mm and 6.5mm for the processed and the original MV topograms respectively. Conclusion: Automatic registration of the processed MV topograms to a corresponding kV-DRR recovered simulated daily positioning errors that were accurate to the order of a millimeter. These results suggest the
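
    As a simplified stand-in for the registration assessment described above, the sketch below computes the normalized cross-correlation (NCC) between two images and exhaustively searches integer translations for the best match; it is a translation-only toy, not the intensity-based mean-square-error registration used in the study.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient between two same-size images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def best_shift(topogram, drr, max_offset=15):
    """Exhaustive search for the integer (row, col) shift maximizing NCC."""
    best = (0, 0, -np.inf)
    for dy in range(-max_offset, max_offset + 1):
        for dx in range(-max_offset, max_offset + 1):
            shifted = np.roll(np.roll(topogram, dy, axis=0), dx, axis=1)
            score = ncc(shifted, drr)
            if score > best[2]:
                best = (dy, dx, score)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    drr = rng.random((64, 64))
    # Simulated "topogram": a shifted, slightly noisy copy of the reference.
    topo = np.roll(drr, (3, -5), axis=(0, 1)) + 0.05 * rng.random((64, 64))
    print(best_shift(topo, drr))   # expect a shift near (-3, 5)
```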

  18. Biosignal Analysis to Assess Mental Stress in Automatic Driving of Trucks: Palmar Perspiration and Masseter Electromyography

    PubMed Central

    Zheng, Rencheng; Yamabe, Shigeyuki; Nakano, Kimihiko; Suda, Yoshihiro

    2015-01-01

    Nowadays insight into human-machine interaction is a critical topic with the large-scale development of intelligent vehicles. Biosignal analysis can provide a deeper understanding of driver behaviors that may indicate rationally practical use of the automatic technology. Therefore, this study concentrates on biosignal analysis to quantitatively evaluate mental stress of drivers during automatic driving of trucks, with vehicles set at a closed gap distance apart to reduce air resistance to save energy consumption. By application of two wearable sensor systems, a continuous measurement was realized for palmar perspiration and masseter electromyography, and a biosignal processing method was proposed to assess mental stress levels. In a driving simulator experiment, ten participants completed automatic driving with 4, 8, and 12 m gap distances from the preceding vehicle, and manual driving with about 25 m gap distance as a reference. It was found that mental stress significantly increased when the gap distances decreased, and an abrupt increase in mental stress of drivers was also observed accompanying a sudden change of the gap distance during automatic driving, which corresponded to significantly higher ride discomfort according to subjective reports. PMID:25738768

  19. Automatic Control of Silicon Melt Level

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Stickel, W. B.

    1982-01-01

    A new circuit, when combined with melt-replenishment system and melt level sensor, offers continuous closed-loop automatic control of melt-level during web growth. Installed on silicon-web furnace, circuit controls melt-level to within 0.1 mm for as long as 8 hours. Circuit affords greater area growth rate and higher web quality; automatic melt-level control also allows semiautomatic growth of web over long periods, which can greatly reduce costs.

  20. Higher Education Quality Assessment Model: Towards Achieving Educational Quality Standard

    ERIC Educational Resources Information Center

    Noaman, Amin Y.; Ragab, Abdul Hamid M.; Madbouly, Ayman I.; Khedra, Ahmed M.; Fayoumi, Ayman G.

    2017-01-01

    This paper presents a developed higher education quality assessment model (HEQAM) that can be applied for enhancement of university services. This is because there is no universal unified quality standard model that can be used to assess the quality criteria of higher education institutes. The analytical hierarchy process is used to identify the…

  1. Automatic alignment of pre- and post-interventional liver CT images for assessment of radiofrequency ablation

    NASA Astrophysics Data System (ADS)

    Rieder, Christian; Wirtz, Stefan; Strehlow, Jan; Zidowitz, Stephan; Bruners, Philipp; Isfort, Peter; Mahnken, Andreas H.; Peitgen, Heinz-Otto

    2012-02-01

    Image-guided radiofrequency ablation (RFA) is becoming a standard procedure for minimally invasive tumor treatment in clinical practice. To verify the treatment success of the therapy, reliable post-interventional assessment of the ablation zone (coagulation) is essential. Typically, pre- and post-interventional CT images have to be aligned to compare the shape, size, and position of tumor and coagulation zone. In this work, we present an automatic workflow for masking liver tissue, enabling a rigid registration algorithm to perform at least as accurately as experienced medical experts. To minimize the effect of global liver deformations, the registration is computed in a local region of interest around the pre-interventional lesion and post-interventional coagulation necrosis. A registration mask excluding lesions and neighboring organs is calculated to prevent the registration algorithm from matching both lesion shapes instead of the surrounding liver anatomy. As an initial registration step, the centers of gravity from both lesions are aligned automatically. The subsequent rigid registration method is based on the Local Cross Correlation (LCC) similarity measure and Newton-type optimization. To assess the accuracy of our method, 41 RFA cases are registered and compared with the manually aligned cases from four medical experts. Furthermore, the registration results are compared with ground truth transformations based on averaged anatomical landmark pairs. In the evaluation, we show that our method allows automatic alignment of the data sets with accuracy equal to that of medical experts, while requiring significantly less time and exhibiting less variability.
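
    The initial alignment step (matching the centers of gravity of the pre-interventional lesion and the post-interventional coagulation) can be sketched as below, assuming hypothetical binary masks; the subsequent LCC-based rigid registration is not reproduced here.

```python
import numpy as np

def center_of_gravity(mask):
    """Centroid (row, col) of a binary lesion/coagulation mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def initial_translation(pre_mask, post_mask):
    """Translation mapping the pre-interventional lesion centroid onto the
    post-interventional coagulation centroid (used as a registration start)."""
    r0, c0 = center_of_gravity(pre_mask)
    r1, c1 = center_of_gravity(post_mask)
    return r1 - r0, c1 - c0

if __name__ == "__main__":
    pre = np.zeros((100, 100), dtype=bool)
    pre[20:30, 40:50] = True      # hypothetical lesion mask
    post = np.zeros((100, 100), dtype=bool)
    post[25:35, 37:47] = True     # hypothetical coagulation mask
    print(initial_translation(pre, post))   # roughly (5.0, -3.0)
```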

  2. Computer vision for automatic inspection of agricultural produce

    NASA Astrophysics Data System (ADS)

    Molto, Enrique; Blasco, Jose; Benlloch, Jose V.

    1999-01-01

    Fruit and vegetables suffer different manipulations from the field to the final consumer. These are basically oriented towards the cleaning and selection of the product into homogeneous categories. For this reason, several research projects aimed at fast, adequate produce sorting and quality control are currently under development around the world. Moreover, it is possible to find manual and semi-automatic commercial systems capable of reasonably performing these tasks. However, in many cases their accuracy is incompatible with current European market demands, which are constantly increasing. IVIA, the Valencian Research Institute of Agriculture, located in Spain, has been involved in several European projects related to machine vision for real-time inspection of various agricultural produce. This paper will focus on the work related to two products that have different requirements: fruit and olives. In the case of fruit, the Institute has developed a vision system capable of providing an assessment of the external quality of single fruit to a robot that also receives information from other sensors. The system uses four different views of each fruit and has been tested on peaches, apples and citrus. Processing time of each image is under 500 ms using a conventional PC. The system provides information about primary and secondary color, blemishes and their extension, and stem presence and position, which allows further automatic orientation of the fruit in the final box using a robotic manipulator. Work carried out on olives was devoted to fast sorting of olives for consumption at table. A prototype has been developed to demonstrate the feasibility of a machine vision system capable of automatically sorting 2500 kg/h of olives using low-cost conventional hardware.

  3. Assessment of automatic ligand building in ARP/wARP.

    PubMed

    Evrard, Guillaume X; Langer, Gerrit G; Perrakis, Anastassis; Lamzin, Victor S

    2007-01-01

    The efficiency of the ligand-building module of ARP/wARP version 6.1 has been assessed through extensive tests on a large variety of protein-ligand complexes from the PDB, as available from the Uppsala Electron Density Server. Ligand building in ARP/wARP involves two main steps: automatic identification of the location of the ligand and the actual construction of its atomic model. The first step is most successful for large ligands. The second step, ligand construction, is more powerful with X-ray data at high resolution and ligands of small to medium size. Both steps are successful for ligands with low to moderate atomic displacement parameters. The results highlight the strengths and weaknesses of both the method of ligand building and the large-scale validation procedure and help to identify means of further improvement.

  4. Health smart home: towards an assistant tool for automatic assessment of the dependence of elders.

    PubMed

    Le, Xuan Hoa Binh; Di Mascolo, Maria; Gouin, Alexia; Noury, Norbert

    2007-01-01

    In order to help elders living alone to age in place independently and safely, it can be useful to have an assistant tool that can automatically assess their dependence and issue an alert if there is any loss of autonomy. Dependence can be assessed by the degree to which the elders perform activities of daily living. This article presents an approach enabling activity recognition for an elder living alone in a Health Smart Home equipped with noninvasive sensors.

  5. 42 CFR 460.140 - Additional quality assessment activities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    42 CFR Public Health (2010-10-01), Programs of All-Inclusive Care for the Elderly (PACE), Quality Assessment and Performance Improvement, § 460.140 Additional quality assessment activities: A PACE organization must meet external quality assessment and reporting requirements...

  6. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.

    PubMed

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
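
    A toy version of rank-based candidate selection is sketched below: candidate facts are scored by extractor uncertainty weighted by how many constraint-linked facts they would help resolve, and the top of the ranking is sent to the crowd. The scoring heuristic, data, and function names are assumptions for illustration, not the paper's algorithm.

```python
def select_for_crowdsourcing(candidate_facts, budget):
    """Pick the candidate facts whose verification is expected to help most.

    candidate_facts: list of (fact, confidence, n_dependent_facts) tuples,
    where n_dependent_facts counts facts linked to this one through
    semantic constraints (so resolving it also prunes related questions).
    """
    def expected_benefit(item):
        _, confidence, n_dependent = item
        uncertainty = 1.0 - abs(confidence - 0.5) * 2   # highest near 0.5
        return uncertainty * (1 + n_dependent)

    ranked = sorted(candidate_facts, key=expected_benefit, reverse=True)
    return [fact for fact, _, _ in ranked[:budget]]

# Hypothetical extracted facts with extractor confidence and constraint degree.
facts = [("bornIn(Einstein, Ulm)", 0.95, 2),
         ("capitalOf(Sydney, Australia)", 0.55, 4),
         ("authorOf(Tolstoy, Hamlet)", 0.50, 1),
         ("locatedIn(Paris, Germany)", 0.48, 5)]
print(select_for_crowdsourcing(facts, budget=2))
```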

  7. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    PubMed Central

    Xian, Xuefeng; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost. PMID:28588611

  8. Quality Assessment of Landsat Surface Reflectance Products Using MODIS Data

    NASA Technical Reports Server (NTRS)

    Feng, Min; Huang, Chengquan; Channan, Saurabh; Vermote, Eric; Masek, Jeffrey G.; Townshend, John R.

    2012-01-01

    Surface reflectance adjusted for atmospheric effects is a primary input for land cover change detection and for developing many higher level surface geophysical parameters. With the development of automated atmospheric correction algorithms, it is now feasible to produce large quantities of surface reflectance products using Landsat images. Validation of these products requires in situ measurements, which either do not exist or are difficult to obtain for most Landsat images. The surface reflectance products derived using data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), however, have been validated more comprehensively. Because the MODIS on the Terra platform and the Landsat 7 are only half an hour apart following the same orbit, and each of the 6 Landsat spectral bands overlaps with a MODIS band, good agreements between MODIS and Landsat surface reflectance values can be considered indicators of the reliability of the Landsat products, while disagreements may suggest potential quality problems that need to be further investigated. Here we develop a system called Landsat-MODIS Consistency Checking System (LMCCS). This system automatically matches Landsat data with MODIS observations acquired on the same date over the same locations and uses them to calculate a set of agreement metrics. To maximize its portability, Java and open-source libraries were used in developing this system, and object-oriented programming (OOP) principles were followed to make it more flexible for future expansion. As a highly automated system designed to run as a stand-alone package or as a component of other Landsat data processing systems, this system can be used to assess the quality of essentially every Landsat surface reflectance image where spatially and temporally matching MODIS data are available. The effectiveness of this system was demonstrated using it to assess preliminary surface reflectance products derived using the Global Land Survey (GLS) Landsat
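
    The kind of agreement metrics such a checking system might compute can be illustrated as below for matched Landsat/MODIS surface-reflectance pairs (bias, RMSD, mean absolute difference, correlation); the specific metric set and the sample values are assumptions, not the LMCCS implementation (which is written in Java).

```python
import numpy as np

def agreement_metrics(landsat_sr, modis_sr):
    """Simple agreement statistics between matched surface-reflectance pairs."""
    x = np.asarray(landsat_sr, float)
    y = np.asarray(modis_sr, float)
    diff = x - y
    return {"bias": diff.mean(),
            "rmsd": np.sqrt(np.mean(diff ** 2)),
            "mad": np.mean(np.abs(diff)),
            "r": np.corrcoef(x, y)[0, 1]}

# Hypothetical red-band reflectance values aggregated over matched pixels.
landsat = [0.061, 0.083, 0.120, 0.045, 0.098]
modis   = [0.058, 0.090, 0.115, 0.049, 0.102]
print(agreement_metrics(landsat, modis))
```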

  9. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
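
    A hedged sketch of the joint-entropy idea used for automatic labeling is shown below: the joint histogram of a compressed and an uncompressed image gives a joint entropy that grows as compression adds uncertainty, and a threshold on it could serve as an automatic label; the quantization stand-in for JPEG loss and the bin count are illustrative assumptions.

```python
import numpy as np

def joint_entropy(img_a, img_b, bins=64):
    """Joint entropy (bits) of two grayscale images of the same size."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.random((128, 128))
    # Crude stand-in for compression loss: quantize then add mild noise.
    compressed = np.round(original * 16) / 16 + 0.01 * rng.random((128, 128))
    print(joint_entropy(original, compressed))
    print(joint_entropy(original, original))   # lower: no added uncertainty
```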

  10. Research Notes - Openness and Evolvability - Documentation Quality Assessment

    DTIC Science & Technology

    2016-08-01

    Michael Haddy and Adam Sbrana. This set of Research Notes focusses on Documentation Quality Assessment. This work was undertaken from the late 1990s to 2007.

  11. Estimating the quality of pasturage in the municipality of Paragominas (PA) by means of automatic analysis of LANDSAT data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Dossantos, A. P.; Novo, E. M. L. D.; Duarte, V.

    1981-01-01

    The use of LANDSAT data to evaluate pasture quality in the Amazon region is demonstrated. Pasture degradation in deforested areas of a traditional tropical forest cattle-raising region was estimated. Automatic analysis using interactive multispectral analysis (IMAGE-100) shows that 24% of the deforested areas were occupied by natural vegetation regrowth, 24% by exposed soil, 15% by degraded pastures, and 46% was suitable grazing land.

  12. Combining MEDLINE and publisher data to create parallel corpora for the automatic translation of biomedical text

    PubMed Central

    2013-01-01

    Background Most of the institutional and research information in the biomedical domain is available in the form of English text. Even in countries where English is an official language, such as the United States, language can be a barrier for accessing biomedical information for non-native speakers. Recent progress in machine translation suggests that this technique could help make English texts accessible to speakers of other languages. However, the lack of adequate specialized corpora needed to train statistical models currently limits the quality of automatic translations in the biomedical domain. Results We show how a large-sized parallel corpus can automatically be obtained for the biomedical domain, using the MEDLINE database. The corpus generated in this work comprises article titles obtained from MEDLINE and abstract text automatically retrieved from journal websites, which substantially extends the corpora used in previous work. After assessing the quality of the corpus for two language pairs (English/French and English/Spanish) we use the Moses package to train a statistical machine translation model that outperforms previous models for automatic translation of biomedical text. Conclusions We have built translation data sets in the biomedical domain that can easily be extended to other languages available in MEDLINE. These sets can successfully be applied to train statistical machine translation models. While further progress should be made by incorporating out-of-domain corpora and domain-specific lexicons, we believe that this work improves the automatic translation of biomedical texts. PMID:23631733

  13. FAMA: Fast Automatic MOOG Analysis

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella

    2014-02-01

    FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) automatically and independently of any subjective approach. Based on the widely-used MOOG code, it simultaneously searches for three equilibria, excitation equilibrium, ionization balance, and the relationship between log n(FeI) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. Convergence criteria are not fixed "a priori" but instead are based on the quality of the spectra.

  14. Automatic-Control System for Safer Brazing

    NASA Technical Reports Server (NTRS)

    Stein, J. A.; Vanasse, M. A.

    1986-01-01

    Automatic-control system for radio-frequency (RF) induction brazing of metal tubing reduces probability of operator errors, increases safety, and ensures high-quality brazed joints. Unit combines functions of gas control and electric-power control. Minimizes unnecessary flow of argon gas into work area and prevents electrical shocks from RF terminals. Controller will not allow power to flow from RF generator to brazing head unless work has been firmly attached to head and has actuated micro-switch. Potential shock hazard eliminated. Flow of argon for purging and cooling must be turned on and adjusted before brazing power applied. Provision ensures power not applied prematurely, causing damaged work or poor-quality joints. Controller automatically turns off argon flow at conclusion of brazing so potentially suffocating gas does not accumulate in confined areas.

  15. Automatic graphene transfer system for improved material quality and efficiency

    PubMed Central

    Boscá, Alberto; Pedrós, Jorge; Martínez, Javier; Palacios, Tomás; Calle, Fernando

    2016-01-01

    In most applications based on chemical vapor deposition (CVD) graphene, the transfer from the growth to the target substrate is a critical step for the final device performance. Manual procedures are time consuming and depend on handling skills, whereas existing automatic roll-to-roll methods work well for flexible substrates but tend to induce mechanical damage in rigid ones. A new system that automatically transfers CVD graphene to an arbitrary target substrate has been developed. The process is based on the all-fluidic manipulation of the graphene to avoid mechanical damage, strain and contamination, and on the combination of capillary action and electrostatic repulsion between the graphene and its container to ensure a centered sample on top of the target substrate. The improved carrier mobility and yield of the automatically transferred graphene, as compared to that manually transferred, is demonstrated by the optical and electrical characterization of field-effect transistors fabricated on both materials. In particular, 70% higher mobility values, with a 30% decrease in the unintentional doping and a 10% strain reduction are achieved. The system has been developed for lab-scale transfer and proved to be scalable for industrial applications. PMID:26860260

  16. Water Quality Assessment and Management

    EPA Pesticide Factsheets

    Overview of the Clean Water Act (CWA) restoration framework, including: water quality standards, monitoring/assessment, reporting of water quality status, TMDL development, and TMDL implementation (point and nonpoint source control).

  17. Image enhancement and quality measures for dietary assessment using mobile devices

    NASA Astrophysics Data System (ADS)

    Xu, Chang; Zhu, Fengqing; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2012-03-01

    Measuring accurate dietary intake is considered to be an open research problem in the nutrition and health fields. We are developing a system, known as the mobile device food record (mdFR), to automatically identify and quantify foods and beverages consumed based on analyzing meal images captured with a mobile device. The mdFR makes use of a fiducial marker and other contextual information to calibrate the imaging system so that accurate amounts of food can be estimated from the scene. Food identification is a difficult problem since foods can dramatically vary in appearance. Such variations may arise not only from non-rigid deformations and intra-class variability in shape, texture, color and other visual properties, but also from changes in illumination and viewpoint. To address the color consistency problem, this paper describes illumination quality assessment methods implemented on a mobile device and three post color correction methods.

  18. Image Enhancement and Quality Measures for Dietary Assessment Using Mobile Devices

    PubMed Central

    Xu, Chang; Zhu, Fengqing; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2016-01-01

    Measuring accurate dietary intake is considered to be an open research problem in the nutrition and health fields. We are developing a system, known as the mobile device food record (mdFR), to automatically identify and quantify foods and beverages consumed based on analyzing meal images captured with a mobile device. The mdFR makes use of a fiducial marker and other contextual information to calibrate the imaging system so that accurate amounts of food can be estimated from the scene. Food identification is a difficult problem since foods can dramatically vary in appearance. Such variations may arise not only from non-rigid deformations and intra-class variability in shape, texture, color and other visual properties, but also from changes in illumination and viewpoint. To address the color consistency problem, this paper describes illumination quality assessment methods implemented on a mobile device and three post color correction methods. PMID:28572695
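
    As an illustration of post-capture color correction in this setting, the sketch below applies the classic gray-world white balance to a synthetic image with a warm color cast; this is a generic color-constancy method offered as an example, not necessarily one of the three correction methods evaluated in the paper.

```python
import numpy as np

def gray_world_correction(image):
    """Classic gray-world white balance: scale channels so their means match."""
    img = np.asarray(image, float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0.0, 255.0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Hypothetical meal photo with a warm (reddish) illuminant cast.
    scene = rng.random((32, 32, 3)) * np.array([255.0, 200.0, 160.0])
    balanced = gray_world_correction(scene)
    print(scene.reshape(-1, 3).mean(axis=0))
    print(balanced.reshape(-1, 3).mean(axis=0))   # channel means roughly equalized
```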

  19. Evaluation of the use of automatic exposure control and automatic tube potential selection in low-dose cerebrospinal fluid shunt head CT.

    PubMed

    Wallace, Adam N; Vyhmeister, Ross; Bagade, Swapnil; Chatterjee, Arindam; Hicks, Brandon; Ramirez-Giraldo, Juan Carlos; McKinstry, Robert C

    2015-06-01

    Cerebrospinal fluid shunts are primarily used for the treatment of hydrocephalus. Shunt complications may necessitate multiple non-contrast head CT scans resulting in potentially high levels of radiation dose starting at an early age. A new head CT protocol using automatic exposure control and automated tube potential selection has been implemented at our institution to reduce radiation exposure. The purpose of this study was to evaluate the reduction in radiation dose achieved by this protocol compared with a protocol with fixed parameters. A retrospective sample of 60 non-contrast head CT scans assessing for cerebrospinal fluid shunt malfunction was identified, 30 of which were performed with each protocol. The radiation doses of the two protocols were compared using the volume CT dose index and dose length product. The diagnostic acceptability and quality of each scan were evaluated by three independent readers. The new protocol lowered the average volume CT dose index from 15.2 to 9.2 mGy representing a 39 % reduction (P < 0.01; 95 % CI 35-44 %) and lowered the dose length product from 259.5 to 151.2 mGy·cm representing a 42 % reduction (P < 0.01; 95 % CI 34-50 %). The new protocol produced diagnostically acceptable scans with comparable image quality to the fixed parameter protocol. A pediatric shunt non-contrast head CT protocol using automatic exposure control and automated tube potential selection reduced patient radiation dose compared with a fixed parameter protocol while producing diagnostic images of comparable quality.

  20. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    NASA Astrophysics Data System (ADS)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

    The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.

  1. Assessment of WMATA's Automatic Fare Collection Equipment Performance

    DOT National Transportation Integrated Search

    1981-01-01

    The Washington Metropolitan Area Transit Authority (WMATA) has had an Automatic Fare Collection (AFC) system in operation since June 1977. The AFC system, comprised of entry/exit gates, farecard vendors, and addfare machines, initially encountered ma...

  2. Fall Risk Assessment Through Automatic Combination of Clinical Fall Risk Factors and Body-Worn Sensor Data.

    PubMed

    Greene, Barry R; Redmond, Stephen J; Caulfield, Brian

    2017-05-01

    Falls are the leading global cause of accidental death and disability in older adults and are the most common cause of injury and hospitalization. Accurate, early identification of patients at risk of falling could lead to timely intervention and a reduction in the incidence of fall-related injury and associated costs. We report a statistical method for fall risk assessment using standard clinical fall risk factors (N = 748). We also report a means of improving this method by automatically combining it with a fall risk assessment algorithm based on inertial sensor data and the timed-up-and-go test. Furthermore, we provide validation data on the sensor-based fall risk assessment method using a statistically independent dataset. Results obtained using cross-validation on a sample of 292 community dwelling older adults suggest that a combined clinical and sensor-based approach yields a classification accuracy of 76.0%, compared to either 73.6% for sensor-based assessment alone, or 68.8% for clinical risk factors alone. Increasing the cohort size by adding an additional 130 subjects from a separate recruitment wave (N = 422), and applying the same model building and validation method, resulted in a decrease in classification performance (68.5% for the combined classifier, 66.8% for sensor data alone, and 58.5% for clinical data alone). This suggests that heterogeneity between cohorts may be a major challenge when attempting to develop fall risk assessment algorithms which generalize well. Independent validation of the sensor-based fall risk assessment algorithm on an independent cohort of 22 community dwelling older adults yielded a classification accuracy of 72.7%. Results suggest that the present method compares well to previously reported sensor-based fall risk assessment methods in assessing fall risk. Implementation of objective fall risk assessment methods on a large scale has the potential to improve quality of care and lead to a reduction in associated hospital

  3. Portfolio Assessment and Quality Teaching

    ERIC Educational Resources Information Center

    Kim, Youb; Yazdian, Lisa Sensale

    2014-01-01

    Our article focuses on using portfolio assessment to craft quality teaching. Extant research literature on portfolio assessment suggests that the primary purpose of assessment is to serve learning, and portfolio assessments facilitate the process of making linkages among assessment, curriculum, and student learning (Asp, 2000; Bergeron, Wermuth,…

  4. Dosimetry and image quality assessment in a direct radiography system

    PubMed Central

    Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Paixão, Lucas; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2014-01-01

    Objective To evaluate the mean glandular dose with a solid state detector and the image quality in a direct radiography system, utilizing phantoms. Materials and Methods Irradiations were performed with automatic exposure control and polymethyl methacrylate slabs with different thicknesses to calculate glandular dose values. The image quality was evaluated by means of the structures visualized on the images of the phantoms. Results Considering the uncertainty of the measurements, the mean glandular dose results are in agreement with the values provided by the equipment and with internationally adopted reference levels. Results obtained from images of the phantoms were in agreement with the reference values. Conclusion The present study contributes to verify the equipment conformity as regards dose values and image quality. PMID:25741119

  5. Real Time Assessment of Potable Water Quality in Distribution Network based on Low Cost Multi-Sensor Array

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Jyotirmoy; Gupta, Karunesh K.; Khatri, Punit

    2018-03-01

    New concepts and techniques are replacing traditional water quality measurement systems. This paper proposes a new way of assessing potable water quality in a distribution network using a Multi Sensor Array (MSA). Extensive research suggests that the following parameters, i.e. pH, Dissolved Oxygen (D.O.), Conductivity, Oxidation Reduction Potential (ORP), Temperature and Salinity, are most suitable for detecting the overall quality of potable water. Here, the MSA is not an integrated sensor array on a single substrate, but rather a set of individual sensors simultaneously measuring different water parameters. Based on this research, an MSA has been developed, followed by a signal conditioning unit and, finally, an algorithm for easy user interfacing. A dedicated part of this paper also discusses the platform design and significant results. The objective of this proposed research is to provide a simple, efficient, cost-effective and socially acceptable means to detect and analyse water bodies regularly and automatically.
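
    A minimal sketch, assuming hypothetical sensor readings and illustrative potable-water limits (not values from the paper), of how readings from such a multi-sensor array could be screened automatically:

```python
# Minimal sketch: screening multi-sensor array readings against illustrative
# potable-water limits. The limits and field names below are assumptions for
# illustration, not values taken from the paper.
POTABLE_LIMITS = {
    "ph": (6.5, 8.5),                 # dimensionless
    "dissolved_oxygen": (5.0, 14.0),  # mg/L
    "conductivity": (0.0, 1000.0),    # uS/cm
    "orp": (200.0, 600.0),            # mV
    "temperature": (0.0, 30.0),       # degrees C
    "salinity": (0.0, 0.5),           # ppt
}

def assess_sample(reading: dict) -> dict:
    """Return a per-parameter pass/fail flag and an overall verdict."""
    flags = {}
    for name, (low, high) in POTABLE_LIMITS.items():
        value = reading.get(name)
        flags[name] = value is not None and low <= value <= high
    flags["potable"] = all(flags.values())
    return flags

# Example reading from the sensor array (values are made up).
print(assess_sample({"ph": 7.2, "dissolved_oxygen": 6.8, "conductivity": 420,
                     "orp": 350, "temperature": 22.5, "salinity": 0.1}))
```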

  6. Automated assessment of cognitive health using smart home technologies.

    PubMed

    Dawadi, Prafulla N; Cook, Diane J; Schmitter-Edgecombe, Maureen; Parsey, Carolyn

    2013-01-01

    The goal of this work is to develop intelligent systems to monitor the wellbeing of individuals in their home environments. This paper introduces a machine learning-based method to automatically predict activity quality in smart homes and automatically assess cognitive health based on activity quality. This paper describes an automated framework to extract a set of features from smart home sensor data that reflect the activity performance, or the ability of an individual to complete an activity, and that can be input to machine learning algorithms. Outputs from learning algorithms, including principal component analysis, support vector machines, and logistic regression, are used to quantify activity quality for a complex set of smart home activities and predict the cognitive health of participants. Smart home activity data was gathered from volunteer participants (n=263) who performed a complex set of activities in our smart home testbed. We compare our automated activity quality prediction and cognitive health prediction with direct observation scores and health assessment obtained from neuropsychologists. With all samples included, we obtained a statistically significant correlation (r=0.54) between direct observation scores and predicted activity quality. Similarly, using a support vector machine classifier, we obtained reasonable classification accuracy (area under the ROC curve=0.80, g-mean=0.73) in classifying participants into two different cognitive classes, dementia and cognitively healthy. The results suggest that it is possible to automatically quantify the task quality of smart home activities and perform limited assessment of the cognitive health of individuals if smart home activities are properly chosen and learning algorithms are appropriately trained.
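
    As a rough illustration of the kind of pipeline the abstract describes (PCA for dimensionality reduction feeding an SVM or logistic regression classifier), the following scikit-learn sketch uses synthetic placeholder data rather than the study's smart-home features:

```python
# Sketch of an activity-quality / cognitive-health pipeline in the spirit of
# the abstract: PCA for dimensionality reduction, an SVM (or logistic
# regression) for classification, evaluated by ROC AUC. Data are synthetic
# placeholders, not the study's smart-home features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(263, 40))      # 263 participants x 40 activity features
y = rng.integers(0, 2, size=263)    # 0 = cognitively healthy, 1 = dementia

svm_model = make_pipeline(StandardScaler(), PCA(n_components=10),
                          SVC(kernel="rbf", probability=True))
logit_model = make_pipeline(StandardScaler(), PCA(n_components=10),
                            LogisticRegression(max_iter=1000))

for name, model in [("SVM", svm_model), ("LogReg", logit_model)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean ROC AUC = {auc.mean():.2f}")
```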

  7. Automatically Scoring Short Essays for Content. CRESST Report 836

    ERIC Educational Resources Information Center

    Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R.

    2013-01-01

    The Common Core assessments emphasize short essay constructed response items over multiple choice items because they are more precise measures of understanding. However, such items are too costly and time consuming to be used in national assessments unless a way is found to score them automatically. Current automatic essay scoring techniques are…

  8. Inline roasting hyphenated with gas chromatography-mass spectrometry as an innovative approach for assessment of cocoa fermentation quality and aroma formation potential.

    PubMed

    Van Durme, Jim; Ingels, Isabel; De Winne, Ann

    2016-08-15

    Today, the cocoa industry is in great need of faster and more robust analytical techniques to objectively assess incoming cocoa quality. In this work, inline roasting hyphenated with a cooled injection system coupled to a gas chromatograph-mass spectrometer (ILR-CIS-GC-MS) has been explored for the first time to assess fermentation quality and/or overall aroma formation potential of cocoa. This innovative approach resulted in the in-situ formation of relevant cocoa aroma compounds. After comparison with data obtained by headspace solid phase micro extraction (HS-SPME-GC-MS) on conventionally roasted cocoa beans, ILR-CIS-GC-MS data on unroasted cocoa beans showed similar formation trends of important cocoa aroma markers as a function of fermentation quality. The latter approach only requires small aliquots of unroasted cocoa beans, can be automated, requires no sample preparation, needs relatively short analytical times (<1 h) and is highly reproducible. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Automatic Item Generation of Probability Word Problems

    ERIC Educational Resources Information Center

    Holling, Heinz; Bertling, Jonas P.; Zeuch, Nina

    2009-01-01

    Mathematical word problems represent a common item format for assessing student competencies. Automatic item generation (AIG) is an effective way of constructing many items with predictable difficulties, based on a set of predefined task parameters. The current study presents a framework for the automatic generation of probability word problems…

  10. Validation of automatic segmentation of ribs for NTCP modeling.

    PubMed

    Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob

    2016-03-01

    Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determine the accuracy of automatic rib segmentation in the context of normal tissue complication probability modeling (NTCP). Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
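
    The paired equivalence (TOST) and Bland-Altman analyses mentioned above can be reproduced on paired dose metrics roughly as follows; the equivalence margin and the dose values are illustrative assumptions, not the study data:

```python
# Sketch of the paired TOST equivalence test and Bland-Altman agreement
# statistics for manual vs. automatic dosimetric parameters (e.g. EUD per
# patient). The equivalence margin and data below are illustrative only.
import numpy as np
from scipy import stats

def tost_paired(x, y, margin):
    """Two one-sided tests for equivalence of paired samples within +/- margin."""
    d = np.asarray(x) - np.asarray(y)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_low = (d.mean() + margin) / se     # H0: mean difference <= -margin
    t_high = (d.mean() - margin) / se    # H0: mean difference >= +margin
    p_low = 1.0 - stats.t.cdf(t_low, df=n - 1)
    p_high = stats.t.cdf(t_high, df=n - 1)
    return max(p_low, p_high)            # equivalence if this p-value is small

def bland_altman(x, y):
    d = np.asarray(x) - np.asarray(y)
    bias = d.mean()
    loa = 1.96 * d.std(ddof=1)
    return bias, (bias - loa, bias + loa)

manual = np.array([21.3, 18.7, 25.1, 19.9, 22.4, 20.8])   # Gy, made up
auto = np.array([21.0, 18.9, 24.7, 20.2, 22.1, 20.5])     # Gy, made up
print("TOST p =", tost_paired(manual, auto, margin=1.0))
print("Bland-Altman bias, limits =", bland_altman(manual, auto))
```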

  11. New conversion factors between human and automatic readouts of the CDMAM phantom for CR systems

    NASA Astrophysics Data System (ADS)

    Hummel, Johann; Homolka, Peter; Osanna-Elliot, Angelika; Kaar, Marcus; Semtrus, Friedrich; Figl, Michael

    2016-03-01

    Mammography screening demands thorough image quality (IQ) assessment to guarantee screening success. The European protocol for the quality control of the physical and technical aspects of mammography screening (EPQCM) suggests a contrast-detail phantom such as the CDMAM phantom to evaluate IQ. For automatic evaluation, software is provided by EUREF. Because human and automatic readouts differ systematically, conversion factors were published by the official reference organisation (EUREF). As we observed a significant difference in these factors for Computed Radiography (CR) systems, we developed objective analysis software that presents the cells containing the gold disks in random order of thickness and rotation. This overcomes the problem of an inevitable learning effect in which observers know the position of the disks in advance. Applying this software, 45 CR systems were evaluated and the conversion factors between human and automatic readout were determined. The resulting conversion factors were compared with those obtained from the two methods published by EUREF. We found our conversion factors to be substantially lower than those suggested by EUREF, in particular 1.21 compared to 1.42 (EUREF EU method) and 1.62 (EUREF UK method) for 0.1 mm, and 1.40 compared to 1.73 (EUREF EU) and 1.83 (EUREF UK) for 0.25 mm disc diameter, respectively. This can result in a dose increase of up to 90% when either of these factors is used to adjust patient dose in order to fulfill image quality requirements. This suggests the need for agreement on their proper application and limits the validity of the assessment methods. Therefore, we want to stress the need for clear criteria for CR systems based on appropriate studies.

  12. Clinical evaluation of new automatic coronary-specific best cardiac phase selection algorithm for single-beat coronary CT angiography.

    PubMed

    Wang, Hui; Xu, Lei; Fan, Zhanming; Liang, Junfu; Yan, Zixu; Sun, Zhonghua

    2017-01-01

    The aim of this study was to evaluate the workflow efficiency of a new automatic coronary-specific reconstruction technique (Smart Phase, GE Healthcare-SP) for selection of the best cardiac phase with least coronary motion when compared with expert manual selection (MS) of best phase in patients with high heart rate. A total of 46 patients with heart rates above 75 bpm who underwent single beat coronary computed tomography angiography (CCTA) were enrolled in this study. CCTA of all subjects was performed on a 256-detector row CT scanner (Revolution CT, GE Healthcare, Waukesha, Wisconsin, US). With the SP technique, the acquired phase range was automatically searched in 2% phase intervals during the reconstruction process to determine the optimal phase for coronary assessment, while for routine expert MS, reconstructions were performed at 5% intervals and a best phase was manually determined. The reconstruction and review times were recorded to measure the workflow efficiency for each method. Two reviewers subjectively assessed image quality for each coronary artery in the MS and SP reconstruction volumes using a 4-point grading scale. The average HR of the enrolled patients was 91.1 ± 19.0 bpm. A total of 204 vessels were assessed. The subjective image quality using SP was comparable to that of the MS, 1.45 ± 0.85 vs 1.43 ± 0.81, respectively (p = 0.88). The average time was 246 seconds for the manual best phase selection, and 98 seconds for the SP selection, resulting in an average time saving of 148 seconds (60%) with use of the SP algorithm. The coronary specific automatic cardiac best phase selection technique (Smart Phase) improves clinical workflow in high heart rate patients and provides image quality comparable with manual cardiac best phase selection. Reconstruction of single-beat CCTA exams with SP can benefit users with less experience in CCTA image interpretation.

  13. Performance Assessment Examples from the Quality Performance Assessment Network

    ERIC Educational Resources Information Center

    Kuriacose, Christina

    2017-01-01

    In this brief article, Christina Kuriacose provides four sample performance assessments. Spanning grade levels, these assessments are strong examples of teacher-developed performance assessments from schools within the Center for Collaborative Education's Quality Performance Assessment network. These performance tasks demonstrate the pedagogical…

  14. Automatic detection of artifacts in converted S3D video

    NASA Astrophysics Data System (ADS)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
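
    A rough sketch of the edge-sharpness-mismatch idea: compare local gradient magnitude at disparity-matched edge pixels in the two views and flag large ratios. The simplifications (no occlusion handling, a precomputed per-pixel disparity map) are assumptions, not the authors' full algorithm:

```python
# Rough sketch of edge-sharpness mismatch detection between stereo views:
# compare gradient magnitude at disparity-matched edge pixels and flag pixels
# whose sharpness ratio between views is large. This ignores occlusions and
# assumes a precomputed disparity map; it is not the paper's full algorithm.
import numpy as np
from scipy import ndimage

def sharpness_mismatch(left, right, disparity, edge_thresh=30.0, ratio_thresh=2.0):
    """left, right: 2D grayscale arrays; disparity: per-pixel horizontal shift."""
    grad_l = np.hypot(ndimage.sobel(left, axis=0), ndimage.sobel(left, axis=1))
    grad_r = np.hypot(ndimage.sobel(right, axis=0), ndimage.sobel(right, axis=1))
    h, w = left.shape
    mismatch = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            if grad_l[y, x] < edge_thresh:
                continue                      # only consider strong edges
            xr = int(round(x - disparity[y, x]))
            if not 0 <= xr < w:
                continue
            ratio = (grad_l[y, x] + 1e-6) / (grad_r[y, xr] + 1e-6)
            mismatch[y, x] = ratio > ratio_thresh or ratio < 1.0 / ratio_thresh
    return mismatch
```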

  15. Using Automatic Item Generation to Meet the Increasing Item Demands of High-Stakes Educational and Occupational Assessment

    ERIC Educational Resources Information Center

    Arendasy, Martin E.; Sommer, Markus

    2012-01-01

    The use of new test administration technologies such as computerized adaptive testing in high-stakes educational and occupational assessments demands large item pools. Classic item construction processes and previous approaches to automatic item generation faced the problems of a considerable loss of items after the item calibration phase. In this…

  16. Automatic colorimetric calibration of human wounds

    PubMed Central

    2010-01-01

    Background Recently, digital photography in medicine has come to be considered an acceptable tool in many clinical domains, e.g. wound care. Although ever higher resolutions are available, reproducibility is still poor and visual comparison of images remains difficult. This is even more the case for measurements performed on such images (colour, area, etc.). This problem is often neglected and images are freely compared and exchanged without further thought. Methods The first experiment checked whether camera settings or lighting conditions could negatively affect the quality of colorimetric calibration. Digital images plus a calibration chart were exposed to a variety of conditions. Precision and accuracy of colours after calibration were quantitatively assessed with a probability distribution for perceptual colour differences (dE_ab). The second experiment was designed to assess the impact of the automatic calibration procedure (i.e. chart detection) on real-world measurements. 40 different images of real wounds were acquired and a region of interest was selected in each image. 3 rotated versions of each image were automatically calibrated and colour differences were calculated. Results 1st experiment: colour differences between the measurements and real spectrophotometric measurements reveal median dE_ab values of 6.40 for the proper patches of calibrated normal images and 17.75 for uncalibrated images, respectively, demonstrating an important improvement in accuracy after calibration. The reproducibility, visualized by the probability distribution of the dE_ab errors between 2 measurements of the patches of the images, has a median of 3.43 dE_ab for all calibrated images and 23.26 dE_ab for all uncalibrated images. If we restrict ourselves to the proper patches of normal calibrated images, the median is only 2.58 dE_ab. Wilcoxon rank-sum testing (p < 0.05) between uncalibrated normal images and calibrated normal images with proper squares were equal to 0, demonstrating a highly

  17. Automatic detection of DNA double strand breaks after irradiation using a γH2AX assay.

    PubMed

    Hohmann, Tim; Kessler, Jacqueline; Grabiec, Urszula; Bache, Matthias; Vordermark, Dyrk; Dehghani, Faramarz

    2018-05-01

    Radiation therapy is among the most common approaches for cancer treatment and leads, amongst other effects, to DNA damage such as double strand breaks (DSB). DSB can be used as a marker for the effect of radiation on cells. For visualization and assessment of the extent of DNA damage, the γH2AX foci assay is frequently used. The analysis of the γH2AX foci assay remains complicated, as the number of γH2AX foci has to be counted. The quantification is mostly done manually, being time-consuming and leading to person-dependent variations. Therefore, we present a method to automatically analyze the number of foci inside nuclei, facilitating and quickening the analysis of DSBs with high reliability in fluorescent images. First, nuclei were detected in the fluorescent images. Afterwards, each nucleus was analyzed independently with a local thresholding algorithm. This approach accounted for different levels of noise and allowed detection of the foci inside the respective nucleus using a Hough transformation searching for circles. The presented algorithm was able to correctly classify most foci in cases of "high" and "average" image quality (sensitivity>0.8) with a low rate of false positive detections (positive predictive value (PPV)>0.98). In cases of "low" image quality the approach had a decreased sensitivity (0.7-0.9), depending on the manual control counter. The PPV remained high (PPV>0.91). Compared to other automatic approaches the presented algorithm had a higher sensitivity and PPV. The automatic foci detection algorithm used was capable of detecting foci with high sensitivity and PPV. Thus it can be used for automatic analysis of images of varying quality.
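
    A simplified scikit-image sketch of the per-nucleus analysis: a global threshold finds nuclei, a local threshold segments candidate foci inside each nucleus, and connected components stand in for the Hough circle search used in the paper; all parameters are illustrative assumptions:

```python
# Sketch of per-nucleus foci counting: global Otsu threshold to find nuclei,
# then a local threshold inside each nucleus to segment foci. Connected
# components stand in for the Hough circle search used in the paper, and all
# parameters (block size, minimum focus area) are illustrative assumptions.
import numpy as np
from skimage.filters import threshold_otsu, threshold_local
from skimage.measure import label, regionprops

def count_foci(nuclei_channel, foci_channel, block_size=51, min_area=4):
    """Return a list with the number of detected foci per nucleus."""
    nuclei_mask = nuclei_channel > threshold_otsu(nuclei_channel)
    counts = []
    for nucleus in regionprops(label(nuclei_mask)):
        minr, minc, maxr, maxc = nucleus.bbox
        crop = foci_channel[minr:maxr, minc:maxc]
        local_thr = threshold_local(crop, block_size=block_size)
        foci_mask = (crop > local_thr) & nucleus.image   # restrict to the nucleus
        n_foci = sum(1 for f in regionprops(label(foci_mask)) if f.area >= min_area)
        counts.append(n_foci)
    return counts
```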

  18. Blind image quality assessment based on aesthetic and statistical quality-aware features

    NASA Astrophysics Data System (ADS)

    Jenadeleh, Mohsen; Masaeli, Mohammad Masood; Moghaddam, Mohsen Ebrahimi

    2017-07-01

    The main goal of image quality assessment (IQA) methods is the emulation of human perceptual image quality judgments. Therefore, the correlation between the objective scores of these methods and human perceptual scores is considered their performance metric. Human judgment of image quality implicitly includes many factors when assessing perceptual image quality, such as aesthetics, semantics, context, and various types of visual distortions. The main idea of this paper is to use a host of features that are commonly employed in image aesthetics assessment in order to improve the accuracy of blind image quality assessment (BIQA) methods. We propose an approach that enriches the features of BIQA methods by integrating a host of image aesthetics features with the features of natural image statistics derived from multiple domains. The proposed features have been used for augmenting five different state-of-the-art BIQA methods, which use natural scene statistics features. Experiments were performed on seven benchmark image quality databases. The experimental results showed significant improvement of the accuracy of the methods.

  19. Validation of an image-based technique to assess the perceptual quality of clinical chest radiographs with an observer study

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Choudhury, Kingshuk R.; McAdams, H. Page; Foos, David H.; Samei, Ehsan

    2014-03-01

    We previously proposed a novel image-based quality assessment technique to assess the perceptual quality of clinical chest radiographs. In this paper, an observer study was designed and conducted to systematically validate this technique. Ten metrics were involved in the observer study, i.e., lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. For each metric, three tasks were successively presented to the observers. In each task, six ROI images were randomly presented in a row and observers were asked to rank the images only based on a designated quality and disregard the other qualities. A range slider on the top of the images was used for observers to indicate the acceptable range based on the corresponding perceptual attribute. Five board-certified radiologists from Duke participated in this observer study on a DICOM calibrated diagnostic display workstation and under low ambient lighting conditions. The observer data were analyzed in terms of the correlations between the observer ranking orders and the algorithmic ranking orders. Based on the collected acceptable ranges, quality consistency ranges were statistically derived. The observer study showed that, for each metric, the averaged ranking orders of the participating observers were strongly correlated with the algorithmic orders. For the lung grey level, the observer ranking orders completely accorded with the algorithmic ranking orders. The quality consistency ranges derived from this observer study were close to those derived from our previous study. The observer study indicates that the proposed image-based quality assessment technique provides a robust reflection of the perceptual image quality of clinical chest radiographs. The derived quality consistency ranges can be used to automatically predict the acceptability of a clinical chest radiograph.

  20. Automatic method of analysis of OCT images in the assessment of the tooth enamel surface after orthodontic treatment with fixed braces

    PubMed Central

    2014-01-01

    Introduction Fixed orthodontic appliances, despite years of research and development, still raise a lot of controversy because of their potentially destructive influence on enamel. Therefore, it is necessary to quantitatively assess the condition, and in particular the thickness, of tooth enamel in order to select the appropriate orthodontic bonding and debonding methodology, as well as to assess the quality of enamel after treatment and the clean-up procedure in order to choose the most advantageous course of treatment. One of the assessment methods is optical coherence tomography (OCT), where the measurement of enamel thickness and the 3D reconstruction of image sequences can be performed fully automatically. Material and method OCT images of 180 teeth were obtained from the Topcon 3D OCT-2000 camera. The images were obtained in vitro by sequentially performing 7 stages of treatment on all the teeth: before any interference into enamel, polishing with orthodontic paste, etching and application of a bonding system, orthodontic bracket bonding, orthodontic bracket removal, cleaning off adhesive residue. A dedicated method for the analysis and processing of images involving median filtering, mathematical morphology, binarization, polynomial approximation and the active contour method has been proposed. Results The obtained results enable automatic measurement of tooth enamel thickness in 5 seconds using the Core i5 CPU M460 @ 2.5GHz 4GB RAM. For one patient, the proposed method of analysis confirms enamel thickness loss of 80 μm (from 730 ± 165 μm to 650 ± 129 μm) after polishing with paste, enamel thickness loss of 435 μm (from 730 ± 165 μm to 295 ± 55 μm) after etching and bonding resin application, growth of a layer having a thickness of 265 μm (from 295 ± 55 μm to 560 ± 98 μm after etching), which is the adhesive system. After removing an orthodontic bracket, the adhesive residue was 105 μm and after cleaning it off, the enamel thickness was

  1. [Quality assessment in anesthesia].

    PubMed

    Kupperwasser, B

    1996-01-01

    Quality assessment (assurance/improvement) is the set of methods used to measure and improve the delivered care and the department's performance against pre-established criteria or standards. The four stages of the self-maintained quality assessment cycle are: problem identification, problem analysis, problem correction and evaluation of corrective actions. Quality assessment is a measurable entity for which it is necessary to define and calibrate measurement parameters (indicators) from available data gathered from the hospital anaesthesia environment. Problem identification comes from the accumulation of indicators. There are four types of quality indicators: structure, process, outcome and sentinel indicators. The latter signal a quality defect, are independent of outcomes, are easier to analyse by statistical methods and are closely related to processes and the main targets of quality improvement. The three types of methods to analyse the problems (indicators) are: peer review, quantitative methods and risk management techniques. Peer review is performed by qualified anaesthesiologists. To improve its validity, the review process should be made explicit and conclusions based on standards of practice and literature references. The quantitative methods are statistical analyses applied to the collected data and presented in a graphic format (histogram, Pareto diagram, control charts). The risk management techniques include: a) critical incident analysis, establishing an objective relationship between a 'critical' event and the associated human behaviours; b) system accident analysis, based on the fact that accidents continue to occur despite safety systems and sophisticated technologies, which examines all the process components leading to the unpredictable outcome and not just the human factors; c) cause-effect diagrams, which facilitate problem analysis by reducing its causes to four fundamental components (persons, regulations, equipment, process). Definition and implementation

  2. Image quality assessment using deep convolutional networks

    NASA Astrophysics Data System (ADS)

    Li, Yezhou; Ye, Xiang; Li, Yong

    2017-12-01

    This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related to the features used in human subject assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of any arbitrary size as input, spatial pyramid pooling (SPP) is introduced to connect the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the image quality on images taken by different sensors of varying sizes.
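
    A minimal PyTorch sketch of the described architecture, with a spatial pyramid pooling layer between the convolutional features and the fully-connected head so that arbitrarily sized images map to a fixed-length vector; layer widths and pyramid levels are assumptions, not the paper's configuration:

```python
# Minimal PyTorch sketch of a no-reference IQA network with spatial pyramid
# pooling (SPP) between the last convolutional layer and the fully connected
# head, so that images of arbitrary size can be scored. Layer widths and the
# pyramid levels are illustrative choices, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPP(nn.Module):
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):
        # Pool the feature map to fixed grids and concatenate the results.
        pooled = [F.adaptive_max_pool2d(x, (k, k)).flatten(1) for k in self.levels]
        return torch.cat(pooled, dim=1)

class IQANet(nn.Module):
    def __init__(self, channels=64, levels=(1, 2, 4)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.spp = SPP(levels)
        self.head = nn.Linear(channels * sum(k * k for k in levels), 1)

    def forward(self, x):
        return self.head(self.spp(self.features(x))).squeeze(1)

model = IQANet()
scores = model(torch.randn(2, 3, 300, 420))   # arbitrary input size
print(scores.shape)                           # torch.Size([2])
```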

  3. Operational CryoSat Product Quality Assessment

    NASA Astrophysics Data System (ADS)

    Mannan, Rubinder; Webb, Erica; Hall, Amanda; Bouzinac, Catherine

    2013-12-01

    The performance and quality of the CryoSat data products are routinely assessed by the Instrument Data quality Evaluation and Analysis Service (IDEAS). This information is then conveyed to the scientific and user community in order to allow them to utilise CryoSat data with confidence. This paper presents details of the Quality Control (QC) activities performed for CryoSat products under the IDEAS contract. Details of the different QC procedures and tools deployed by IDEAS to assess the quality of operational data are presented. The latest updates to the Instrument Processing Facility (IPF) for the Fast Delivery Marine (FDM) products and the future update to Baseline-C are discussed.

  4. Water-quality assessment of south-central Texas : comparison of water quality in surface-water samples collected manually and by automated samplers

    USGS Publications Warehouse

    Ging, Patricia B.

    1999-01-01

    Surface-water sampling protocols of the U.S. Geological Survey National Water-Quality Assessment (NAWQA) Program specify samples for most properties and constituents to be collected manually in equal-width increments across a stream channel and composited for analysis. Single-point sampling with an automated sampler (autosampler) during storms was proposed in the upper part of the South-Central Texas NAWQA study unit, raising the question of whether property and constituent concentrations from automatically collected samples differ significantly from those in samples collected manually. Statistical (Wilcoxon signed-rank test) analyses of 3 to 16 paired concentrations for each of 26 properties and constituents from water samples collected using both methods at eight sites in the upper part of the study unit indicated that there were no significant differences in concentrations for dissolved constituents, other than calcium and organic carbon.
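
    The paired comparison described can be carried out with the Wilcoxon signed-rank test in SciPy; the concentrations below are made-up placeholders, not NAWQA data:

```python
# Sketch of the paired Wilcoxon signed-rank comparison between manually and
# automatically collected samples for one constituent. Concentrations are
# made-up placeholders, not NAWQA data.
from scipy.stats import wilcoxon

manual_mgL = [48.1, 52.3, 50.7, 47.9, 51.2, 49.5, 50.1, 48.8]       # e.g. calcium
autosampler_mgL = [47.6, 52.9, 50.2, 48.4, 51.0, 49.1, 50.6, 48.5]

stat, p_value = wilcoxon(manual_mgL, autosampler_mgL)
print(f"W = {stat:.1f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("No significant difference between sampling methods for this constituent.")
```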

  5. Automated Assessment of Cognitive Health Using Smart Home Technologies

    PubMed Central

    Dawadi, Prafulla N.; Cook, Diane J.; Schmitter-Edgecombe, Maureen; Parsey, Carolyn

    2014-01-01

    BACKGROUND The goal of this work is to develop intelligent systems to monitor the well-being of individuals in their home environments. OBJECTIVE This paper introduces a machine learning-based method to automatically predict activity quality in smart homes and automatically assess cognitive health based on activity quality. METHODS This paper describes an automated framework to extract a set of features from smart home sensor data that reflect the activity performance, or the ability of an individual to complete an activity, and that can be input to machine learning algorithms. Outputs from learning algorithms, including principal component analysis, support vector machines, and logistic regression, are used to quantify activity quality for a complex set of smart home activities and predict the cognitive health of participants. RESULTS Smart home activity data was gathered from volunteer participants (n=263) who performed a complex set of activities in our smart home testbed. We compare our automated activity quality prediction and cognitive health prediction with direct observation scores and health assessment obtained from neuropsychologists. With all samples included, we obtained a statistically significant correlation (r=0.54) between direct observation scores and predicted activity quality. Similarly, using a support vector machine classifier, we obtained reasonable classification accuracy (area under the ROC curve = 0.80, g-mean = 0.73) in classifying participants into two different cognitive classes, dementia and cognitively healthy. CONCLUSIONS The results suggest that it is possible to automatically quantify the task quality of smart home activities and perform limited assessment of the cognitive health of individuals if smart home activities are properly chosen and learning algorithms are appropriately trained. PMID:23949177

  6. Towards Quality Assessment in an EFL Programme

    ERIC Educational Resources Information Center

    Ali, Holi Ibrahim Holi; Al Ajmi, Ahmed Ali Saleh

    2013-01-01

    Assessment is central in education and the teaching-learning process. This study attempts to explore the perspectives and views about quality assessment among teachers of English as a Foreign Language (EFL), and to find ways of promoting quality assessment. Quantitative methodology was used to collect data. To answer the study questions, a…

  7. Data Quality Verification at STScI - Automated Assessment and Your Data

    NASA Astrophysics Data System (ADS)

    Dempsey, R.; Swade, D.; Scott, J.; Hamilton, F.; Holm, A.

    1996-12-01

    As satellite-based observatories improve their ability to deliver wider varieties and more complex types of scientific data, so too does the process of analyzing and reducing these data. It becomes correspondingly imperative that Guest Observers or Archival Researchers have access to an accurate, consistent, and easily understandable summary of the quality of their data. Previously, at the STScI, an astronomer would display and examine the quality and scientific usefulness of every single observation obtained with HST. Recently, this process has undergone a major reorganization at the Institute. A major part of the new process is that the majority of data are assessed automatically with little or no human intervention. As part of routine processing in the OSS--PODPS Unified System (OPUS), the Observatory Monitoring System (OMS) observation logs, the science processing trailer file (also known as the TRL file), and the science data headers are inspected by an automated tool, AUTO_DQ. AUTO_DQ then determines if any anomalous events occurred during the observation or through processing and calibration of the data that affect the procedural quality of the data. The results are placed directly into the Procedural Data Quality (PDQ) file as a string of predefined data quality keywords and comments. These in turn are used by the Contact Scientist (CS) to check the scientific usefulness of the observations. In this manner, the telemetry stream is checked for known problems such as losses of lock, re-centerings, or degraded guiding, while missing data or calibration errors are also easily flagged. If the problem is serious, the data are then queued for manual inspection by an astronomer. The success of every target acquisition is verified manually. If serious failures are confirmed, the PI and the scheduling staff are notified so that options concerning rescheduling the observations can be explored.

  8. Fully automatic measurements of axial vertebral rotation for assessment of spinal deformity in idiopathic scoliosis

    NASA Astrophysics Data System (ADS)

    Forsberg, Daniel; Lundström, Claes; Andersson, Mats; Vavruch, Ludvig; Tropp, Hans; Knutsson, Hans

    2013-03-01

    Reliable measurements of spinal deformities in idiopathic scoliosis are vital, since they are used for assessing the degree of scoliosis, deciding upon treatment and monitoring the progression of the disease. However, commonly used two-dimensional methods (e.g. the Cobb angle) do not fully capture the three-dimensional deformity at hand in scoliosis, of which axial vertebral rotation (AVR) is considered to be of great importance. There are manual methods for measuring the AVR, but they are often time-consuming and associated with high intra- and inter-observer variability. In this paper, we present a fully automatic method for estimating the AVR in images from computed tomography. The proposed method is evaluated on four scoliotic patients with 17 vertebrae each and compared with manual measurements performed by three observers using the standard method by Aaro-Dahlborn. The comparison shows that the difference in measured AVR between automatic and manual measurements is on the same level as the inter-observer difference. This is further supported by a high intraclass correlation coefficient (0.971-0.979), obtained when comparing the automatic measurements with the manual measurements of each observer. Hence, the provided results and the computational performance, only requiring approximately 10 to 15 s for processing an entire volume, demonstrate the potential clinical value of the proposed method.

  9. NATIONAL WATER-QUALITY ASSESSMENT (NAWQA) PROGRAM

    EPA Science Inventory

    The National Water-Quality Assessment (NAWQA) Program is designed to describe the status and trends in the quality of the Nation's ground- and surface-water resources and to provide a sound understanding of the natural and human factors that affect the quality of these resources. ...

  10. Learning Receptive Fields and Quality Lookups for Blind Quality Assessment of Stereoscopic Images.

    PubMed

    Shao, Feng; Lin, Weisi; Wang, Shanshan; Jiang, Gangyi; Yu, Mei; Dai, Qionghai

    2016-03-01

    Blind quality assessment of 3D images encounters more challenges than its 2D counterpart. In this paper, we propose a blind quality assessment for stereoscopic images by learning the characteristics of receptive fields (RFs) from the perspective of dictionary learning, and constructing quality lookups to replace human opinion scores without performance loss. The important feature of the proposed method is that we do not need a large set of samples of distorted stereoscopic images and the corresponding human opinion scores to learn a regression model. To be more specific, in the training phase, we learn local RFs (LRFs) and global RFs (GRFs) from the reference and distorted stereoscopic images, respectively, and construct their corresponding local quality lookups (LQLs) and global quality lookups (GQLs). In the testing phase, blind quality pooling can be easily achieved by searching optimal GRF and LRF indexes from the learnt LQLs and GQLs, and the quality score is obtained by combining the LRF and GRF indexes together. Experimental results on three publicly available 3D image quality assessment databases demonstrate that, in comparison with the existing methods, the devised algorithm achieves highly consistent alignment with subjective assessment.

  11. Application of newly developed Fluoro-QC software for image quality evaluation in cardiac X-ray systems.

    PubMed

    Oliveira, M; Lopez, G; Geambastiani, P; Ubeda, C

    2018-05-01

    A quality assurance (QA) program is a valuable tool for the continuous production of optimal quality images. The aim of this paper is to assess newly developed automatic computer software for image quality (IQ) evaluation in fluoroscopy X-ray systems. Test object images were acquired using one fluoroscopy system, Siemens Axiom Artis model (Siemens AG, Medical Solutions Erlangen, Germany). The software was developed as an ImageJ plugin. Two image quality parameters were assessed: high-contrast spatial resolution (HCSR) and signal-to-noise ratio (SNR). The times required for the manual and automatic image quality assessment procedures were compared. The paired t-test was used to assess the data. p values of less than 0.05 were considered significant. The Fluoro-QC software generated faster IQ evaluation results (mean = 0.31 ± 0.08 min) than the manual procedure (mean = 4.68 ± 0.09 min). The mean difference between techniques was 4.36 min. Discrepancies were identified in the region of interest (ROI) areas drawn manually, with evidence of user dependence. The new software presented the results of two tests (HCSR = 3.06, SNR = 5.17) and also collected information from the DICOM header. Significant differences were not identified between manual and automatic measures of SNR (p value = 0.22) and HCSR (p value = 0.46). The Fluoro-QC software is a feasible, fast and free-to-use method for evaluating image quality parameters on fluoroscopy systems. Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.
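
    A sketch of how an automated tool of this kind might compute SNR from two regions of interest in a test-object image; the ROI positions and the SNR definition used here are assumptions, since the abstract does not give the plugin's exact formulas:

```python
# Sketch of an automated ROI-based SNR measurement on a fluoroscopy test-object
# image, in the spirit of the Fluoro-QC plugin. The ROI positions and the SNR
# definition (mean signal difference over background standard deviation) are
# assumptions; the abstract does not give the exact formulas used.
import numpy as np

def roi(image, y, x, size):
    return image[y:y + size, x:x + size].astype(float)

def snr(image, signal_yx, background_yx, size=32):
    signal = roi(image, *signal_yx, size)
    background = roi(image, *background_yx, size)
    return (signal.mean() - background.mean()) / background.std(ddof=1)

# Simulated test-object image: uniform noise plus a brighter square insert.
image = np.random.default_rng(1).normal(100, 5, size=(512, 512))
image[200:264, 200:264] += 40
print("SNR =", round(snr(image, (200, 200), (20, 20)), 2))
```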

  12. Standardizing Quality Assessment of Fused Remotely Sensed Images

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Moellmann, J.; Fries, K.

    2017-09-01

    The multitude of available operational remote sensing satellites led to the development of many image fusion techniques to provide high spatial, spectral and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. Qualitatively, by visual interpretation, and 2. Quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and a quantitative assessment is done using different criteria. Depending on the criteria and indices, the result varies. Therefore it is necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective image fusion quality evaluation. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process to objectively compare fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared on various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research.

  13. Developing a multidisciplinary robotic surgery quality assessment program.

    PubMed

    Gonsenhauser, Iahn; Abaza, Ronney; Mekhjian, Hagop; Moffatt-Bruce, Susan D

    2012-01-01

    The objective of this study was to test the feasibility of a novel quality-improvement (QI) program designed to incorporate multiple robotic surgical sub-specialties in one health care system. A robotic surgery quality assessment program was developed by The Ohio State University College of Medicine (OSUMC) in conjunction with The Ohio State University Medical Center Quality Improvement and Operations Department. A retrospective review of cases was performed using data interrogated from the OSUMC Information Warehouse from January 2007 through August 2009. Robotic surgery cases (n=2200) were assessed for operative times, length of stay (LOS), conversions, returns to surgery, readmissions and cancellations as potential quality indicators. An actionable and reproducible framework for the quality measurement and assessment of a multidisciplinary and interdepartmental robotic surgery program was successfully completed, demonstrating areas of opportunity for improvement. This report shows that standard quality indicators can be applied to multiple specialties within a health care system to develop a useful quality tracking and assessment tool in the highly specialized area of robotic surgery. © 2012 National Association for Healthcare Quality.

  14. Affordable, automatic quantitative fall risk assessment based on clinical balance scales and Kinect data.

    PubMed

    Colagiorgio, P; Romano, F; Sardi, F; Moraschini, M; Sozzi, A; Bejor, M; Ricevuti, G; Buizza, A; Ramat, S

    2014-01-01

    The problem of a correct fall risk assessment is becoming more and more critical with the ageing of the population. In spite of the available approaches allowing a quantitative analysis of the human movement control system's performance, the clinical assessment and diagnostic approach to fall risk assessment still relies mostly on non-quantitative exams, such as clinical scales. This work documents our current effort to develop a novel method to assess balance control abilities through a system implementing an automatic evaluation of exercises drawn from balance assessment scales. Our aim is to overcome the classical limits characterizing these scales, i.e. limited granularity and inter-/intra-examiner reliability, and to obtain objective scores and more detailed information allowing fall risk to be predicted. We used Microsoft Kinect to record subjects' movements while performing challenging exercises drawn from clinical balance scales. We then computed a set of parameters quantifying the execution of the exercises and fed them to a supervised classifier to perform a classification based on the clinical score. We obtained a good accuracy (~82%) and especially a high sensitivity (~83%).
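
    A sketch of the kind of quantitative parameters that could be extracted from a Kinect joint trajectory before classification (sway path length and RMS displacement of a trunk joint); the joint choice and features are illustrative assumptions, not the authors' parameter set:

```python
# Sketch of quantitative balance features from a Kinect skeleton stream:
# sway path length and RMS displacement of a trunk joint during an exercise.
# The joint choice and features are illustrative; the paper's exact parameter
# set is not specified in the abstract.
import numpy as np

def sway_features(trunk_xy):
    """trunk_xy: (n_frames, 2) medio-lateral / antero-posterior positions in metres."""
    trunk_xy = np.asarray(trunk_xy, dtype=float)
    centred = trunk_xy - trunk_xy.mean(axis=0)
    path_length = np.sum(np.linalg.norm(np.diff(trunk_xy, axis=0), axis=1))
    rms = np.sqrt((centred ** 2).sum(axis=1).mean())
    return {"path_length_m": path_length, "rms_displacement_m": rms}

# 10 s of 30 Hz tracking data, synthesized for illustration; in practice these
# features would be fed to a supervised classifier as described above.
rng = np.random.default_rng(2)
trajectory = np.cumsum(rng.normal(0, 0.002, size=(300, 2)), axis=0)
print(sway_features(trajectory))
```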

  15. Soil quality assessment using weighted fuzzy association rules

    USGS Publications Warehouse

    Xue, Yue-Ju; Liu, Shu-Guang; Hu, Yue-Ming; Yang, Jing-Feng

    2010-01-01

    Fuzzy association rules (FARs) can be powerful in assessing regional soil quality, a critical step prior to land planning and utilization; however, traditional FARs mined from a soil quality database, which ignore the variability in the importance of the rules, can be redundant and far from optimal. In this study, we developed a method applying different weights to traditional FARs to improve the accuracy of soil quality assessment. After the FARs for soil quality assessment were mined, redundant rules were eliminated according to whether or not they were significant, reducing the complexity of the soil quality assessment models and improving the comprehensibility of the FARs. The global weights, each representing the importance of a FAR in soil quality assessment, were then introduced and refined using a gradient descent optimization method. This method was applied to the assessment of soil resources conditions in Guangdong Province, China. The new approach had an accuracy of 87% when 15 rules were mined, as compared with 76% from the traditional approach. The accuracy increased to 96% when 32 rules were mined, in contrast to 88% from the traditional approach. These results demonstrated an improved comprehensibility of FARs and a high accuracy of the proposed method.

  16. pySeismicDQA: open source post experiment data quality assessment and processing

    NASA Astrophysics Data System (ADS)

    Polkowski, Marcin

    2017-04-01

    pySeismicDQA (Seismic Data Quality Assessment) is a Python-based, open-source set of tools dedicated to data processing after passive seismic experiments. The primary goal of this toolset is the unification of data types and formats from different dataloggers, which is necessary for further processing. This process requires additional data checks for errors, equipment malfunction, data format errors, abnormal noise levels, etc. In all such cases the user needs to decide (manually or by an automatic threshold) whether the data are removed from the output dataset. Additionally, the output dataset can be visualized in the form of a website with data availability charts and waveform visualization linked to an external earthquake catalog. Data processing can be extended with simple STA/LTA event detection. pySeismicDQA is designed and tested for two passive seismic experiments in central Europe: PASSEQ 2006-2008 and "13 BB Star" (2013-2016). National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
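
    The STA/LTA event detection mentioned can be implemented as the ratio of two moving averages of the rectified trace; the window lengths and trigger threshold below are assumptions, not pySeismicDQA's settings:

```python
# Sketch of simple STA/LTA event detection: the ratio of a short-term to a
# long-term moving average of the rectified trace, with a trigger threshold.
# Window lengths and threshold are illustrative assumptions.
import numpy as np

def sta_lta(trace, fs, sta_s=1.0, lta_s=30.0):
    x = np.abs(np.asarray(trace, dtype=float))
    sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
    sta = np.convolve(x, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(x, np.ones(lta_n) / lta_n, mode="same")
    return sta / np.maximum(lta, 1e-12)

fs = 100.0                                           # Hz
rng = np.random.default_rng(3)
trace = rng.normal(0, 1, 60 * int(fs))               # one minute of noise
trace[3000:3200] += 8 * rng.normal(0, 1, 200)        # synthetic "event" at t = 30 s
triggers = np.flatnonzero(sta_lta(trace, fs) > 3.0)
print("first trigger at t =", triggers[0] / fs, "s")
```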

  17. MARZ: Manual and automatic redshifting software

    NASA Astrophysics Data System (ADS)

    Hinton, S. R.; Davis, Tamara M.; Lidman, C.; Glazebrook, K.; Lewis, G. F.

    2016-04-01

    The Australian Dark Energy Survey (OzDES) is a 100-night spectroscopic survey underway on the Anglo-Australian Telescope using the fibre-fed 2-degree-field (2dF) spectrograph. We have developed a new redshifting application, MARZ, with greater usability, flexibility, and the capacity to analyse a wider range of object types than the RUNZ software package previously used for redshifting spectra from 2dF. MARZ is an open-source, client-based, Javascript web-application which provides an intuitive interface and powerful automatic matching capabilities on spectra generated from the AAOmega spectrograph to produce high quality spectroscopic redshift measurements. The software can be run interactively or via the command line, and is easily adaptable to other instruments and pipelines if conforming to the current FITS file standard is not possible. Behind the scenes, a modified version of the AUTOZ cross-correlation algorithm is used to match input spectra against a variety of stellar and galaxy templates, and automatic matching performance for OzDES spectra has increased from 54% (RUNZ) to 91% (MARZ). Spectra not matched correctly by the automatic algorithm can be easily redshifted manually by cycling through the automatic results, manual template comparison, or marking spectral features.
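
    The core idea behind cross-correlation redshifting, matching on a log-wavelength grid so that a redshift becomes a constant lag, can be sketched as follows; this toy example is not the modified AUTOZ algorithm used by MARZ:

```python
# Toy sketch of template cross-correlation redshifting: on a log-wavelength
# grid a redshift becomes a constant shift, so the lag of the correlation peak
# gives log10(1 + z). This illustrates the idea only; it is not the modified
# AUTOZ algorithm used by MARZ.
import numpy as np

def estimate_redshift(obs_wave, obs_flux, tmpl_wave, tmpl_flux, dloglam=1e-4):
    loglam = np.arange(np.log10(3500.0), np.log10(9000.0), dloglam)
    grid = 10 ** loglam
    obs = np.interp(grid, obs_wave, obs_flux, left=0.0, right=0.0)
    tmpl = np.interp(grid, tmpl_wave, tmpl_flux, left=0.0, right=0.0)
    obs -= obs.mean()
    tmpl -= tmpl.mean()
    corr = np.correlate(obs, tmpl, mode="full")
    lag = corr.argmax() - (len(tmpl) - 1)    # samples the template is shifted redward
    return 10 ** (lag * dloglam) - 1.0

# Fake rest-frame template with a single emission line, observed at z = 0.30.
tmpl_wave = np.linspace(3000, 8000, 5000)
tmpl_flux = np.exp(-0.5 * ((tmpl_wave - 5007.0) / 3.0) ** 2)
z_true = 0.30
obs_wave = tmpl_wave * (1 + z_true)
print("estimated z =", round(estimate_redshift(obs_wave, tmpl_flux, tmpl_wave, tmpl_flux), 3))
```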

  18. Institutional Consequences of Quality Assessment

    ERIC Educational Resources Information Center

    Joao Rosa, Maria; Tavares, Diana; Amaral, Alberto

    2006-01-01

    This paper analyses the opinions of Portuguese university rectors and academics on the quality assessment system and its consequences at the institutional level. The results obtained show that university staff (rectors and academics, with more of the former than the latter) held optimistic views of the positive consequences of quality assessment…

  19. Maximising Confidence in Assessment Decision-Making: A Springboard to Quality in Assessment.

    ERIC Educational Resources Information Center

    Clayton, Berwyn; Booth, Robin; Roy, Sue

    The introduction of training packages has focused attention on the quality of assessment in the Australian vocational education and training (VET) sector. For the process of mutual recognition under the Australian Recognition Framework (ARF) to work effectively, there needs to be confidence in assessment decisions made…

  20. Groundwater quality data from the National Water-Quality Assessment Project, May 2012 through December 2013

    USGS Publications Warehouse

    Arnold, Terri L.; Desimone, Leslie A.; Bexfield, Laura M.; Lindsey, Bruce D.; Barlow, Jeannie R.; Kulongoski, Justin T.; Musgrove, MaryLynn; Kingsbury, James A.; Belitz, Kenneth

    2016-06-20

    Groundwater-quality data were collected from 748 wells as part of the National Water-Quality Assessment Project of the U.S. Geological Survey National Water-Quality Program from May 2012 through December 2013. The data were collected from four types of well networks: principal aquifer study networks, which assess the quality of groundwater used for public water supply; land-use study networks, which assess land-use effects on shallow groundwater quality; major aquifer study networks, which assess the quality of groundwater used for domestic supply; and enhanced trends networks, which evaluate the time scales during which groundwater quality changes. Groundwater samples were analyzed for a large number of water-quality indicators and constituents, including major ions, nutrients, trace elements, volatile organic compounds, pesticides, and radionuclides. These groundwater quality data are tabulated in this report. Quality-control samples also were collected; data from blank and replicate quality-control samples are included in this report.

  1. Assessing Assessment Quality: Criteria for Quality Assurance in Design of (Peer) Assessment for Learning--A Review of Research Studies

    ERIC Educational Resources Information Center

    Tillema, Harm; Leenknecht, Martijn; Segers, Mien

    2011-01-01

    The interest in "assessment for learning" (AfL) has resulted in a search for new modes of assessment that are better aligned to students' learning how to learn. However, with the introduction of new assessment tools, also questions arose with respect to the quality of its measurement. On the one hand, the appropriateness of traditional,…

  2. Assessing Quality in Home Visiting Programs

    ERIC Educational Resources Information Center

    Korfmacher, Jon; Laszewski, Audrey; Sparr, Mariel; Hammel, Jennifer

    2013-01-01

    Defining quality and designing a quality assessment measure for home visitation programs is a complex and multifaceted undertaking. This article summarizes the process used to create the Home Visitation Program Quality Rating Tool (HVPQRT) and identifies next steps for its development. The HVPQRT measures both structural and dynamic features of…

  3. Validity of radiographic assessment of the knee joint space using automatic image analysis.

    PubMed

    Komatsu, Daigo; Hasegawa, Yukiharu; Kojima, Toshihisa; Seki, Taisuke; Ikeuchi, Kazuma; Takegami, Yasuhiko; Amano, Takafumi; Higuchi, Yoshitoshi; Kasai, Takehiro; Ishiguro, Naoki

    2016-09-01

    The present study investigated whether there were differences between automatic and manual measurements of the minimum joint space width (mJSW) on knee radiographs. Knee radiographs of 324 participants in a systematic health screening were analyzed using the following three methods: manual measurement of film-based radiographs (Manual), manual measurement of digitized radiographs (Digital), and automatic measurement of digitized radiographs (Auto). The mean mJSWs on the medial and lateral sides of the knees were determined using each method, and measurement reliability was evaluated using intra-class correlation coefficients. Measurement errors were compared between normal knees and knees with radiographic osteoarthritis. All three methods demonstrated good reliability, although the reliability was slightly lower with the Manual method than with the other methods. On the medial and lateral sides of the knees, the mJSWs were the largest in the Manual method and the smallest in the Auto method. The measurement errors of each method were significantly larger for normal knees than for radiographic osteoarthritis knees. The mJSW measurements are more accurate and reliable with the Auto method than with the Manual or Digital method, especially for normal knees. Therefore, the Auto method is ideal for the assessment of the knee joint space.

  4. 42 CFR 493.1299 - Standard: Postanalytic systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Postanalytic systems quality assessment... Nonwaived Testing Postanalytic Systems § 493.1299 Standard: Postanalytic systems quality assessment. (a) The....1291. (b) The postanalytic systems quality assessment must include a review of the effectiveness of...

  5. Rating methodological quality: toward improved assessment and investigation.

    PubMed

    Moyer, Anne; Finney, John W

    2005-01-01

    Assessing methodological quality is considered essential in deciding what investigations to include in research syntheses and in detecting potential sources of bias in meta-analytic results. Quality assessment is also useful in characterizing the strengths and limitations of the research in an area of study. Although numerous instruments to measure research quality have been developed, they have lacked empirically-supported components. In addition, different summary quality scales have yielded different findings when they were used to weight treatment effect estimates for the same body of research. Suggestions for developing improved quality instruments include: distinguishing distinct domains of quality, such as internal validity, external validity, the completeness of the study report, and adherence to ethical practices; focusing on individual aspects, rather than domains of quality; and focusing on empirically-verified criteria. Other ways to facilitate the constructive use of quality assessment are to improve and standardize the reporting of research investigations, so that the quality of studies can be more equitably and thoroughly compared, and to identify optimal methods for incorporating study quality ratings into meta-analyses.

  6. Correlation of contrast-detail analysis and clinical image quality assessment in chest radiography with a human cadaver study.

    PubMed

    De Crop, An; Bacher, Klaus; Van Hoof, Tom; Smeets, Peter V; Smet, Barbara S; Vergauwen, Merel; Kiendys, Urszula; Duyck, Philippe; Verstraete, Koenraad; D'Herde, Katharina; Thierens, Hubert

    2012-01-01

    To determine the correlation between the clinical and physical image quality of chest images by using cadavers embalmed with the Thiel technique and a contrast-detail phantom. The use of human cadavers fulfilled the requirements of the institutional ethics committee. Clinical image quality was assessed by using three human cadavers embalmed with the Thiel technique, which results in excellent preservation of the flexibility and plasticity of organs and tissues. As a result, lungs can be inflated during image acquisition to simulate the pulmonary anatomy seen on a chest radiograph. Both contrast-detail phantom images and chest images of the Thiel-embalmed bodies were acquired with an amorphous silicon flat-panel detector. Tube voltage (70, 81, 90, 100, 113, 125 kVp), copper filtration (0.1, 0.2, 0.3 mm Cu), and exposure settings (200, 280, 400, 560, 800 speed class) were altered to simulate different quality levels. Four experienced radiologists assessed the image quality by using a visual grading analysis (VGA) technique based on European Quality Criteria for Chest Radiology. The phantom images were scored manually and automatically with use of dedicated software, both resulting in an inverse image quality figure (IQF). Spearman rank correlations between inverse IQFs and VGA scores were calculated. A statistically significant correlation (r = 0.80, P < .01) was observed between the VGA scores and the manually obtained inverse IQFs. Comparison of the VGA scores and the automated evaluated phantom images showed an even better correlation (r = 0.92, P < .001). The results support the value of contrast-detail phantom analysis for evaluating clinical image quality in chest radiography. © RSNA, 2011.
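
    As a rough illustration of the statistical step reported above, the sketch below computes a Spearman rank correlation between observer VGA scores and inverse IQFs using SciPy; the numeric values are invented placeholders, not the study's data.

```python
# Hypothetical sketch: correlating visual grading analysis (VGA) scores with
# inverse image quality figures (IQFs) from a contrast-detail phantom.
# The arrays below are illustrative placeholders, not the study's measurements.
import numpy as np
from scipy.stats import spearmanr

vga_scores = np.array([0.42, 0.55, 0.61, 0.70, 0.78, 0.83])        # mean VGA score per protocol
inv_iqf    = np.array([0.030, 0.034, 0.039, 0.041, 0.047, 0.052])  # inverse IQF per protocol

rho, p_value = spearmanr(vga_scores, inv_iqf)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```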

  7. Quality Assessment in Oncology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, Jeffrey M.; Das, Prajnan, E-mail: prajdas@mdanderson.org

    2012-07-01

    The movement to improve healthcare quality has led to a need for carefully designed quality indicators that accurately reflect the quality of care. Many different measures have been proposed and continue to be developed by governmental agencies and accrediting bodies. However, given the inherent differences in the delivery of care among medical specialties, the same indicators will not be valid across all of them. Specifically, oncology is a field in which it can be difficult to develop quality indicators, because the effectiveness of an oncologic intervention is often not immediately apparent, and the multidisciplinary nature of the field necessarily involves many different specialties. Existing and emerging comparative effectiveness data are helping to guide evidence-based practice, and the increasing availability of these data provides the opportunity to identify key structure and process measures that predict for quality outcomes. The increasing emphasis on quality and efficiency will continue to compel the medical profession to identify appropriate quality measures to facilitate quality improvement efforts and to guide accreditation, credentialing, and reimbursement. Given the wide-reaching implications of quality metrics, it is essential that they be developed and implemented with scientific rigor. The aims of the present report were to review the current state of quality assessment in oncology, identify existing indicators with the best evidence to support their implementation, and propose a framework for identifying and refining measures most indicative of true quality in oncologic care.

  8. 42 CFR 493.1289 - Standard: Analytic systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Analytic systems quality assessment. 493... Nonwaived Testing Analytic Systems § 493.1289 Standard: Analytic systems quality assessment. (a) The... through 493.1283. (b) The analytic systems quality assessment must include a review of the effectiveness...

  9. 42 CFR 493.1249 - Standard: Preanalytic systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Preanalytic systems quality assessment... Nonwaived Testing Preanalytic Systems § 493.1249 Standard: Preanalytic systems quality assessment. (a) The....1241 through 493.1242. (b) The preanalytic systems quality assessment must include a review of the...

  10. A quality score for coronary artery tree extraction results

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke

    2018-02-01

    Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method is able to measure the quality of the manually refined CATs with higher scores than the automatically extracted CATs. In a 100-point scale system, the average scores for automatically and manually refined CATs are 82.0 (+/-15.8) and 88.9 (+/-5.4) respectively. The proposed quality score will assist the automatic processing of the CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.

  11. Assessing quality in volcanic ash soils

    Treesearch

    Terry L. Craigg; Steven W. Howes

    2007-01-01

    Forest managers must understand how changes in soil quality resulting from project implementation affect long-term productivity and watershed health. Volcanic ash soils have unique properties that affect their quality and function; and which may warrant soil quality standards and assessment techniques that are different from other soils. We discuss the concept of soil...

  12. Orion Entry Handling Qualities Assessments

    NASA Technical Reports Server (NTRS)

    Bihari, B.; Tiggers, M.; Strahan, A.; Gonzalez, R.; Sullivan, K.; Stephens, J. P.; Hart, J.; Law, H., III; Bilimoria, K.; Bailey, R.

    2011-01-01

    The Orion Command Module (CM) is a capsule designed to bring crew back from the International Space Station (ISS), the moon and beyond. The atmospheric entry portion of the flight is designed to be flown in autopilot mode for nominal situations. However, there exists the possibility for the crew to take over manual control in off-nominal situations. In these instances, the spacecraft must meet specific handling qualities criteria. To address these criteria, two separate assessments of the Orion CM's entry Handling Qualities (HQ) were conducted at NASA's Johnson Space Center (JSC) using the Cooper-Harper scale (Cooper & Harper, 1969). These assessments were conducted in the summers of 2008 and 2010 using the Advanced NASA Technology Architecture for Exploration Studies (ANTARES) six degree of freedom, high fidelity Guidance, Navigation, and Control (GN&C) simulation. This paper will address the specifics of the handling qualities criteria, the vehicle configuration, the scenarios flown, the simulation background and setup, crew interfaces and displays, piloting techniques, ratings and crew comments, pre- and post-flight briefings, lessons learned and changes made to improve the overall system performance. The data collection tools, methods, data reduction and output reports will also be discussed. The objective of the 2008 entry HQ assessment was to evaluate the handling qualities of the CM during a lunar skip return. A lunar skip entry case was selected because it was considered the most demanding of all bank control scenarios. Even though skip entry is not planned to be flown manually, it was hypothesized that if a pilot could fly the harder skip entry case, then they could also fly a simpler loads managed or ballistic (constant bank rate command) entry scenario. In addition, with the evaluation set-up of multiple tasks within the entry case, handling qualities ratings collected in the evaluation could be used to assess other scenarios such as the constant bank angle

  13. Assessing the Quality of Teachers' Teaching Practices

    ERIC Educational Resources Information Center

    Chen, Weiyun; Mason, Stephen; Staniszewski, Christina; Upton, Ashley; Valley, Megan

    2012-01-01

    This study assessed the extent to which nine elementary physical education teachers implemented the quality of teaching practices. Thirty physical education lessons taught by the nine teachers to their students in grades K-5 were videotaped. Four investigators coded the taped lessons using the Assessing Quality Teaching Rubric (AQTR) designed and…

  14. The Midwest Stream Quality Assessment

    USGS Publications Warehouse

    ,

    2012-01-01

    In 2013, the U.S. Geological Survey (USGS) National Water-Quality Assessment Program (NAWQA) and USGS Columbia Environmental Research Center (CERC) will be collaborating with the U.S. Environmental Protection Agency (EPA) National Rivers and Streams Assessment (NRSA) to assess stream quality across the Midwestern United States. The sites selected for this study are a subset of the larger NRSA, implemented by the EPA, States and Tribes to sample flowing waters across the United States (http://water.epa.gov/type/rsl/monitoring/riverssurvey/index.cfm). The goals are to characterize water-quality stressors—contaminants, nutrients, and sediment—and ecological conditions in streams throughout the Midwest and to determine the relative effects of these stressors on aquatic organisms in the streams. Findings will contribute useful information for communities and policymakers by identifying which human and environmental factors are the most critical in controlling stream quality. This collaborative study enhances information provided to the public and policymakers and minimizes costs by leveraging and sharing data gathered under existing programs. In the spring and early summer, NAWQA will sample streams weekly for contaminants, nutrients, and sediment. During the same time period, CERC will test sediment and water samples for toxicity, deploy time-integrating samplers, and measure reproductive effects and biomarkers of contaminant exposure in fish or amphibians. NRSA will sample sites once during the summer to assess ecological and habitat conditions in the streams by collecting data on algal, macroinvertebrate, and fish communities and collecting detailed physical-habitat measurements. Study-team members from all three programs will work in collaboration with USGS Water Science Centers and State agencies on study design, execution of sampling and analysis, and reporting.

  15. Automatically Detecting Likely Edits in Clinical Notes Created Using Automatic Speech Recognition

    PubMed Central

    Lybarger, Kevin; Ostendorf, Mari; Yetisgen, Meliha

    2017-01-01

    The use of automatic speech recognition (ASR) to create clinical notes has the potential to reduce costs associated with note creation for electronic medical records, but at current system accuracy levels, post-editing by practitioners is needed to ensure note quality. Aiming to reduce the time required to edit ASR transcripts, this paper investigates novel methods for automatic detection of edit regions within the transcripts, including not only putative ASR errors but also regions that are targets for cleanup or rephrasing. We create detection models using logistic regression and conditional random field models, exploring a variety of text-based features that consider the structure of clinical notes and exploit the medical context. Different medical text resources are used to improve feature extraction. Experimental results on a large corpus of practitioner-edited clinical notes show that 67% of sentence-level edits and 45% of word-level edits can be detected with a false detection rate of 15%. PMID:29854187
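
    A minimal sketch of the logistic-regression flavour of such a sentence-level edit detector is shown below; the TF-IDF n-gram features and the tiny labelled corpus are illustrative assumptions, not the authors' feature set or data.

```python
# Toy sentence-level edit detector: TF-IDF word n-grams feeding logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical ASR sentences; label 1 = the practitioner later edited this sentence.
sentences = [
    "patient denies chest pain or shortness of breath",
    "uh the the patient was uh started on",
    "follow up in two weeks for repeat labs",
    "and and then we will we will check the",
]
labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(sentences, labels)

# Predict whether a new ASR sentence is likely to need editing.
print(model.predict(["uh uh the the medication was was"]))
```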

  16. Automatic treatment planning facilitates fast generation of high-quality treatment plans for esophageal cancer.

    PubMed

    Hansen, Christian Rønn; Nielsen, Morten; Bertelsen, Anders Smedegaard; Hazell, Irene; Holtved, Eva; Zukauskaite, Ruta; Bjerregaard, Jon Kroll; Brink, Carsten; Bernchou, Uffe

    2017-11-01

    The quality of radiotherapy planning has improved substantially in the last decade with the introduction of intensity modulated radiotherapy. The purpose of this study was to analyze the plan quality and efficacy of automatically (AU) generated VMAT plans for inoperable esophageal cancer patients. Thirty-two consecutive inoperable patients with esophageal cancer originally treated with manually (MA) generated volumetric modulated arc therapy (VMAT) plans were retrospectively replanned using an auto-planning engine. All plans were optimized with one full 6MV VMAT arc giving 60 Gy to the primary target and 50 Gy to the elective target. The planning techniques were blinded before clinical evaluation by three specialized oncologists. To supplement the clinical evaluation, the optimization time for the AU plan was recorded along with DVH parameters for all plans. Upon clinical evaluation, the AU plan was preferred for 31/32 patients, and for one patient, there was no difference in the plans. In terms of DVH parameters, similar target coverage was obtained between the two planning methods. The mean dose for the spinal cord increased by 1.8 Gy using AU (p = .002), whereas the mean lung dose decreased by 1.9 Gy (p < .001). The AU plans were more modulated as seen by the increase of 12% in mean MUs (p = .001). The median optimization time for AU plans was 117 min. The AU plans were in general preferred and showed a lower mean dose to the lungs. The automation of the planning process generated esophageal cancer treatment plans quickly and with high quality.

  17. Assessing Pre-Service Teachers' Quality Teaching Practices

    ERIC Educational Resources Information Center

    Chen, Weiyun; Hendricks, Kristin; Archibald, Kelsi

    2011-01-01

    The purpose of this study was to design and validate the Assessing Quality Teaching Rubrics (AQTR) that assesses the pre-service teachers' quality teaching practices in a live lesson or a videotaped lesson. Twenty-one lessons taught by 13 Physical Education Teacher Education (PETE) students were videotaped. The videotaped lessons were evaluated…

  18. A portable automatic cough analyser in the ambulatory assessment of cough.

    PubMed

    Krajnik, Malgorzata; Damps-Konstanska, Iwona; Gorska, Lucyna; Jassem, Ewa

    2010-03-14

    Cough is one of the main symptoms of advanced lung disease. However, the efficacy of currently available treatment remains unsatisfactory. Research into the new antitussives requires an objective assessment of cough. The aim of the study was to test the feasibility of a new automatic portable cough analyser and assess the correlation between subjective and objective evaluations of cough in 13 patients with chronic cough. The patients' individual histories, a cough symptom score and a numeric cough scale (1-10) were used as a subjective evaluation of cough and a computerized audio-timed recorder was used to measure the frequency of coughing. The pre-clinical validation has shown that an automated cough analyser is an accurate and reliable tool for the ambulatory assessment of chronic cough. In the clinical part of the experiment for the daytime, subjective cough scoring correlated with the number of all cough incidents recorded by the cough analyser (r = 0.63; p = 0.022) and the number of cough incidents per hour (r = 0.60; p = 0.03). However, there was no relation between cough score and the time spent coughing per hour (r = 0.48; p = 0.1). As assessed for the night-time period, no correlation was found between subjective cough scoring and the number of incidents per hour (r = 0.29; p = 0.34) or time spent coughing (r = 0.26; p = 0.4). An automated cough analyser seems to be a feasible tool for the ambulatory monitoring of cough. There is a moderate correlation between subjective and objective assessments of cough during the daytime, whereas the discrepancy in the evaluation of night-time coughing might suggest that subjective analysis is unreliable.

  19. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes an experiment assessing the influence of post-processing on image quality. The experiment comprises three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images that serve as input to the image processing are produced by this imaging system with the same parameters. The gathered optically sampled images, acquired with the tested imaging parameters, are processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different cores. Image quality is assessed by just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of the different imaging parameters and of post-processing on image quality can be determined. The six JND subjective assessment datasets can be validated against each other. The main conclusions are: image post-processing can improve image quality, even in the presence of lossy compression, although the improvement is smaller at higher compression ratios; and, with our post-processing method, image quality is better when the camera MTF lies within a small range.

  20. Automated Quality Assessment of Colour Fundus Images for Diabetic Retinopathy Screening in Telemedicine.

    PubMed

    Saha, Sajib Kumar; Fernando, Basura; Cuadros, Jorge; Xiao, Di; Kanagasingam, Yogesan

    2018-04-27

    Fundus images obtained in a telemedicine program are acquired at different sites by people with varying levels of experience. This results in a relatively high percentage of images later marked as unreadable by graders. Unreadable images require recapture, which is time- and cost-intensive. An automated method that determines image quality during acquisition is an effective alternative. We describe here such an automated method for assessing image quality in the context of diabetic retinopathy. The method applies machine learning techniques to assign each image to an 'accept' or 'reject' category; images in the 'reject' category require recapture. A deep convolutional neural network is trained to grade the images automatically. A large representative set of 7000 colour fundus images from EyePACS, made available by the California Healthcare Foundation, was used for the experiment. Three retinal image analysis experts categorised these images into 'accept' and 'reject' classes based on a precise definition of image quality in the context of DR. The network was trained using 3428 images. The method categorises 'accept' and 'reject' images with an accuracy of 100%, about 2% higher than a traditional machine learning method. In a clinical trial, the proposed method showed 97% agreement with the human grader. The method can easily be incorporated into the fundus image capturing system at the acquisition centre and can guide the photographer on whether a recapture is necessary.
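
    For orientation, a minimal PyTorch sketch of a binary accept/reject image-quality classifier is given below; the architecture, input size, and class ordering are assumptions for illustration and do not reproduce the authors' network.

```python
# Illustrative two-class CNN for image-quality triage ('accept' vs 'reject').
import torch
import torch.nn as nn

class QualityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # logits for ['accept', 'reject'] (assumed order)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = QualityCNN()
dummy = torch.randn(1, 3, 224, 224)   # one RGB fundus image (batch of 1, assumed size)
# Forward pass on untrained weights: shows shapes only, not a meaningful grading.
print(model(dummy).softmax(dim=1))
```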

  1. An assessment model for quality management

    NASA Astrophysics Data System (ADS)

    Völcker, Chr.; Cass, A.; Dorling, A.; Zilioli, P.; Secchi, P.

    2002-07-01

    SYNSPACE together with InterSPICE and Alenia Spazio is developing an assessment method to determine the capability of an organisation in the area of quality management. The method, sponsored by the European Space Agency (ESA), is called S9kS (SPiCE-9000 for SPACE). S9kS is based on ISO 9001:2000 with additions from the quality standards issued by the European Committee for Space Standardization (ECSS) and ISO 15504 - Process Assessments. The result is a reference model that supports the expansion of the generic process assessment framework provided by ISO 15504 to non-software areas. In order to be compliant with ISO 15504, requirements from ISO 9001 and ECSS-Q-20 and Q-20-09 have been turned into process definitions in terms of Purpose and Outcomes, supported by a list of detailed indicators such as Practices, Work Products and Work Product Characteristics. In coordination with this project, the capability dimension of ISO 15504 has been revised to be consistent with ISO 9001. As contributions from ISO 9001 and the space quality assurance standards are separable, the stripped-down version S9k offers organisations in all industries an assessment model based solely on ISO 9001, and is therefore of interest to all organisations that intend to improve their quality management system based on ISO 9001.

  2. [Cardiology quality assessment in Germany--pro and contra].

    PubMed

    von Hodenberg, E; Eder, S; Grunebaum, P; Melichercik, J

    2009-10-01

    The German National Institute for Quality in Healthcare has also developed a program of external quality assessment in the field of cardiology. Hospitals are required to collect certain data on diagnostic coronary angiography, percutaneous coronary interventions, and pacemaker implantations. If statistical abnormalities are observed, a so-called structured dialogue is initiated. The responsible physicians of the hospitals are asked to comment on possible quality deficits. Appointed members of quality commissions examine the answers and can invite the responsible physicians for interviews or also visit the hospital. However, the validity of the quality data is problematic because audits or checks of the quality assessment in place are lacking. Therefore, the results should not be misused for comparisons or rankings of hospitals. As long as the validity of the quality assessment has not been improved, the results should also not be made accessible to other parties, such as health insurers. Georg Thieme Verlag KG Stuttgart, New York.

  3. Total Quality Management of Information System for Quality Assessment of Pesantren Using Fuzzy-SERVQUAL

    NASA Astrophysics Data System (ADS)

    Faizah, Arbiati; Syafei, Wahyul Amien; Isnanto, R. Rizal

    2018-02-01

    This research proposes a model combining Total Quality Management (TQM) with a fuzzy Service Quality (SERVQUAL) method to assess service quality. TQM is implemented as quality management oriented toward customer satisfaction and involving all stakeholders. The SERVQUAL model is used to measure service quality on five dimensions: tangibles, reliability, responsiveness, assurance, and empathy. Fuzzy set theory accommodates the subjectivity and ambiguity of quality assessment. The input data consist of indicator data and quality assessment aspects, which are processed with the fuzzy method into service quality assessment questionnaires for the Pesantren and, finally, into a service quality score. The process comprises the following steps: entering dimension and questionnaire data into the database system, filling in the questionnaire through the system, computing fuzzification, defuzzification, and the gap between the quality expected and the quality received by service recipients, and calculating a rating for each dimension that indicates its refinement priority. The rating of each quality dimension is then displayed on a dashboard so that users can see the information. From the system that was built, the tangible dimension had the highest gap, -0.399, and therefore needs to be prioritized for evaluation and refinement.
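
    The gap calculation at the heart of this approach can be sketched as follows; the triangular fuzzy numbers assigned to the Likert levels and the sample responses are assumptions for illustration, not the study's scales or data.

```python
# Hedged sketch of a Fuzzy-SERVQUAL gap for one dimension.
import numpy as np

# Assumed triangular fuzzy numbers (a, b, c) for a 5-point Likert scale.
TFN = {1: (0, 0, 25), 2: (0, 25, 50), 3: (25, 50, 75), 4: (50, 75, 100), 5: (75, 100, 100)}

def defuzzify(responses):
    """Average the responses as TFNs, then take the centroid (a + b + c) / 3."""
    tri = np.array([TFN[r] for r in responses], dtype=float).mean(axis=0)
    return tri.sum() / 3.0

expectation = [5, 5, 4, 5]   # what respondents expect for the 'tangible' dimension
perception  = [3, 4, 3, 4]   # what they report receiving
gap = defuzzify(perception) - defuzzify(expectation)
print(f"tangible-dimension gap = {gap:.1f}")   # a negative gap signals refinement priority
```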

  4. Masked priming effects in aphasia: evidence of altered automatic spreading activation.

    PubMed

    Silkes, JoAnn P; Rogers, Margaret A

    2012-12-01

    Previous research has suggested that impairments of automatic spreading activation may underlie some aphasic language deficits. The current study further investigated the status of automatic spreading activation in individuals with aphasia as compared with typical adults. Participants were 21 individuals with aphasia (12 fluent, 9 nonfluent) and 31 typical adults. Reaction time data were collected on a lexical decision task with masked repetition primes, assessed at 11 different interstimulus intervals (ISIs). Masked primes were used to assess automatic spreading activation without the confound of conscious processing. The various ISIs were used to assess the time to onset and duration of priming effects. The control group showed maximal priming in the 200-ms ISI condition, with significant priming at a range of ISIs surrounding that peak. Participants with both fluent and nonfluent aphasia showed maximal priming effects in the 250-ms ISI condition and primed across a smaller range of ISIs than did the control group. Results suggest that individuals with aphasia have slowed automatic spreading activation and impaired maintenance of activation over time, regardless of fluency classification. These findings have implications for understanding aphasic language impairment and for development of aphasia treatments designed to directly address automatic language processes.

  5. FAMA: An automatic code for stellar parameter and abundance determination

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella

    2013-10-01

    Context. The large amount of spectra obtained during the epoch of extensive spectroscopic surveys of Galactic stars needs the development of automatic procedures to derive their atmospheric parameters and individual element abundances. Aims: Starting from the widely-used code MOOG by C. Sneden, we have developed a new procedure to determine atmospheric parameters and abundances in a fully automatic way. The code FAMA (Fast Automatic MOOG Analysis) is presented describing its approach to derive atmospheric stellar parameters and element abundances. The code, freely distributed, is written in Perl and can be used on different platforms. Methods: The aim of FAMA is to render the computation of the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) as automatic and as independent of any subjective approach as possible. It is based on the simultaneous search for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(Fe i) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. The convergence criteria are not fixed "a priori" but are based on the quality of the spectra. Results: In this paper we present tests performed on the solar spectrum EWs that assess the method's dependency on the initial parameters and we analyze a sample of stars observed in Galactic open and globular clusters. The current version of FAMA is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/558/A38

  6. Automatic mental associations predict future choices of undecided decision-makers.

    PubMed

    Galdi, Silvia; Arcuri, Luciano; Gawronski, Bertram

    2008-08-22

    Common wisdom holds that choice decisions are based on conscious deliberations of the available information about choice options. On the basis of recent insights about unconscious influences on information processing, we tested whether automatic mental associations of undecided individuals bias future choices in a manner such that these choices reflect the evaluations implied by earlier automatic associations. With the use of a computer-based, speeded categorization task to assess automatic mental associations (i.e., associations that are activated unintentionally, difficult to control, and not necessarily endorsed at a conscious level) and self-report measures to assess consciously endorsed beliefs and choice preferences, automatic associations of undecided participants predicted changes in consciously reported beliefs and future choices over a period of 1 week. Conversely, for decided participants, consciously reported beliefs predicted changes in automatic associations and future choices over the same period. These results indicate that decision-makers sometimes have already made up their mind at an unconscious level, even when they consciously indicate that they are still undecided.

  7. Automatic identification of cochlear implant electrode arrays for post-operative assessment

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Schuman, Theodore A.; Wright, Charles G.; Labadie, Robert F.; Dawant, Benoit M.

    2011-03-01

    Cochlear implantation is a procedure performed to treat profound hearing loss. Accurately determining the postoperative position of the implant in vivo would permit studying the correlations between implant position and hearing restoration. To solve this problem, we present an approach based on parametric Gradient Vector Flow snakes to segment the electrode array in post-operative CT. By combining this with existing methods for localizing intra-cochlear anatomy, we have developed a system that permits accurate assessment of the implant position in vivo. The system is validated using a set of seven temporal bone specimens. The algorithms were run on pre- and post-operative CTs of the specimens, and the results were compared to histological images. It was found that the position of the arrays observed in the histological images is in excellent agreement with the position of their automatically generated 3D reconstructions in the CT scans.

  8. Challenges and Opportunities: One Stop Processing of Automatic Large-Scale Base Map Production Using Airborne LIDAR Data Within GIS Environment. Case Study: Makassar City, Indonesia

    NASA Astrophysics Data System (ADS)

    Widyaningrum, E.; Gorte, B. G. H.

    2017-05-01

    LiDAR data acquisition is recognized as one of the fastest ways to provide basis data for large-scale topographic base maps worldwide. Automatic LiDAR processing is considered one possible scheme for accelerating large-scale topographic base map provision by the Geospatial Information Agency in Indonesia. As a progressive, advanced technology, Geographic Information Systems (GIS) open up possibilities for automatic processing and analysis of geospatial data. Considering further needs for spatial data sharing and integration, one-stop processing of LiDAR data in a GIS environment is considered a powerful and efficient approach for base map provision. The quality of the automated topographic base map is assessed and analysed in terms of its completeness, correctness, and overall quality, using a confusion matrix.

  9. Automatic assessment of average diaphragm motion trajectory from 4DCT images through machine learning.

    PubMed

    Li, Guang; Wei, Jie; Huang, Hailiang; Gaebler, Carl Philipp; Yuan, Amy; Deasy, Joseph O

    2015-12-01

    To automatically estimate the average diaphragm motion trajectory (ADMT) based on four-dimensional computed tomography (4DCT), facilitating clinical assessment of respiratory motion and motion variation and retrospective motion studies. We have developed an effective motion extraction approach and a machine-learning-based algorithm to estimate the ADMT. Eleven patients with 22 sets of 4DCT images (4DCT1 at simulation and 4DCT2 at treatment) were studied. After automatically segmenting the lungs, the differential volume-per-slice (dVPS) curves of the left and right lungs were calculated as a function of slice number for each phase with respect to full exhalation. After a 5-slice moving average was applied, the discrete cosine transform (DCT) was used to analyze the dVPS curves in the frequency domain. The dimensionality of the spectral data was reduced by keeping several of the lowest-frequency coefficients (f_v), which account for most of the spectral energy (sum of f_v^2). Multiple linear regression (MLR) was then applied to determine the weights of these frequencies by fitting the ground truth, the measured ADMT, which is represented by three pivot points of the diaphragm on each side. Leave-one-out cross validation was employed to analyze the statistical performance of the predictions in three image sets: 4DCT1, 4DCT2, and 4DCT1 + 4DCT2. The seven lowest frequencies in the DCT domain were found to be sufficient to approximate the patient dVPS curves (R = 91%-96% in the MLR fit). The mean error in the predicted ADMT using the leave-one-out method was 0.3 ± 1.9 mm for the left-side diaphragm and 0.0 ± 1.4 mm for the right-side diaphragm. The prediction error is lower for 4DCT2 than for 4DCT1, and is lowest for 4DCT1 and 4DCT2 combined. This frequency-analysis-based machine learning technique predicts the ADMT automatically with an acceptable error (0.2 ± 1.6 mm). This volumetric approach is not affected by the presence of the lung tumors
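
    A schematic sketch of this pipeline (moving average, DCT, low-frequency truncation, multiple linear regression) is shown below on synthetic stand-in data; the array sizes and values are assumptions, not patient measurements.

```python
# Sketch: smooth each dVPS curve, take its DCT, keep the lowest-frequency
# coefficients, and regress them against the measured diaphragm motion.
import numpy as np
from scipy.fft import dct
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_cases, n_slices, n_keep = 22, 120, 7

def features(dvps_curve):
    smoothed = np.convolve(dvps_curve, np.ones(5) / 5, mode="same")  # 5-slice moving average
    return dct(smoothed, norm="ortho")[:n_keep]                      # lowest DCT frequencies

dvps = rng.normal(size=(n_cases, n_slices)).cumsum(axis=1)   # stand-in dVPS curves
admt = rng.normal(20, 5, size=n_cases)                       # stand-in measured motion (mm)

X = np.array([features(curve) for curve in dvps])
model = LinearRegression().fit(X, admt)                      # multiple linear regression
print("R^2 on the stand-in data:", round(model.score(X, admt), 2))
```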

  10. Affective Evaluations of Exercising: The Role of Automatic-Reflective Evaluation Discrepancy.

    PubMed

    Brand, Ralf; Antoniewicz, Franziska

    2016-12-01

    Sometimes our automatic evaluations do not correspond well with those we can reflect on and articulate. We present a novel approach to the assessment of automatic and reflective affective evaluations of exercising. Based on the assumptions of the associative-propositional processes in evaluation model, we measured participants' automatic evaluations of exercise and then shared this information with them, asked them to reflect on it and rate eventual discrepancy between their reflective evaluation and the assessment of their automatic evaluation. We found that mismatch between self-reported ideal exercise frequency and actual exercise frequency over the previous 14 weeks could be regressed on the discrepancy between a relatively negative automatic and a more positive reflective evaluation. This study illustrates the potential of a dual-process approach to the measurement of evaluative responses and suggests that mistrusting one's negative spontaneous reaction to exercise and asserting a very positive reflective evaluation instead leads to the adoption of inflated exercise goals.

  11. Automatically-computed prehospital severity scores are equivalent to scores based on medic documentation.

    PubMed

    Reisner, Andrew T; Chen, Liangyou; McKenna, Thomas M; Reifman, Jaques

    2008-10-01

    Prehospital severity scores can be used in routine prehospital care, mass casualty care, and military triage. If computers could reliably calculate clinical scores, new clinical and research methodologies would be possible. One obstacle is that vital signs measured automatically can be unreliable. We hypothesized that Signal Quality Indices (SQI's), computer algorithms that differentiate between reliable and unreliable monitored physiologic data, could improve the predictive power of computer-calculated scores. In a retrospective analysis of trauma casualties transported by air ambulance, we computed the Triage Revised Trauma Score (RTS) from archived travel monitor data. We compared the areas-under-the-curve (AUC's) of receiver operating characteristic curves for prediction of mortality and red blood cell transfusion for 187 subjects with comparable quantities of good-quality and poor-quality data. Vital signs deemed reliable by SQI's led to significantly more discriminatory severity scores than vital signs deemed unreliable. We also compared automatically-computed RTS (using the SQI's) versus RTS computed from vital signs documented by medics. For the subjects in whom the SQI algorithms identified 15 consecutive seconds of reliable vital signs data (n = 350), the automatically-computed scores' AUC's were the same as the medic-based scores' AUC's. Using the Prehospital Index in place of RTS led to very similar results, corroborating our findings. SQI algorithms improve automatically-computed severity scores, and automatically-computed scores using SQI's are equivalent to medic-based scores.
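
    For context, the Triage RTS is a simple sum of coded Glasgow Coma Scale, systolic blood pressure, and respiratory rate bands. The sketch below shows that calculation gated by a placeholder signal-reliability flag; the SQI algorithms themselves are not reproduced here, and the gate is an assumption for illustration.

```python
# Triage Revised Trauma Score (RTS): sum of coded GCS, SBP and RR values (0-4 each).
def code_gcs(gcs):  return 4 if gcs >= 13 else 3 if gcs >= 9 else 2 if gcs >= 6 else 1 if gcs >= 4 else 0
def code_sbp(sbp):  return 4 if sbp > 89 else 3 if sbp >= 76 else 2 if sbp >= 50 else 1 if sbp >= 1 else 0
def code_rr(rr):    return 4 if 10 <= rr <= 29 else 3 if rr > 29 else 2 if rr >= 6 else 1 if rr >= 1 else 0

def triage_rts(gcs, sbp, rr, signal_reliable=True):
    if not signal_reliable:          # placeholder for an SQI gate on monitored vital signs
        return None                  # do not score unreliable monitor data
    return code_gcs(gcs) + code_sbp(sbp) + code_rr(rr)   # 0 (worst) .. 12 (best)

print(triage_rts(gcs=14, sbp=95, rr=22))                             # -> 12
print(triage_rts(gcs=14, sbp=95, rr=22, signal_reliable=False))      # -> None
```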

  12. flowAI: automatic and interactive anomaly discerning tools for flow cytometry data.

    PubMed

    Monaco, Gianni; Chen, Hao; Poidinger, Michael; Chen, Jinmiao; de Magalhães, João Pedro; Larbi, Anis

    2016-08-15

    Flow cytometry (FCM) is widely used in both clinical and basic research to characterize cell phenotypes and functions. The latest FCM instruments analyze up to 20 markers of individual cells, producing high-dimensional data. This requires the use of the latest clustering and dimensionality reduction techniques to automatically segregate cell sub-populations in an unbiased manner. However, automated analyses may lead to false discoveries due to inter-sample differences in quality and properties. We present an R package, flowAI, containing two methods to clean FCM files from unwanted events: (i) an automatic method that adopts algorithms for the detection of anomalies and (ii) an interactive method with a graphical user interface implemented into an R shiny application. The general approach behind the two methods consists of three key steps to check and remove suspected anomalies that derive from (i) abrupt changes in the flow rate, (ii) instability of signal acquisition and (iii) outliers in the lower limit and margin events in the upper limit of the dynamic range. For each file analyzed our software generates a summary of the quality assessment from the aforementioned steps. The software presented is an intuitive solution seeking to improve the results not only of manual but also and in particular of automatic analysis on FCM data. R source code available through Bioconductor: http://bioconductor.org/packages/flowAI/ CONTACTS: mongianni1@gmail.com or Anis_Larbi@immunol.a-star.edu.sg Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
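
    flowAI itself is an R/Bioconductor package; purely to illustrate the first cleaning step described above (abrupt changes in the flow rate), the Python sketch below flags time bins whose event rate deviates strongly from the run's median. The bin width, threshold, and synthetic event times are assumptions, not the package's algorithm.

```python
# Flag acquisition windows whose event rate is anomalous (median/MAD rule).
import numpy as np

def flag_flow_rate_anomalies(event_times_s, bin_width_s=1.0, max_dev=3.0):
    """Return indices of time bins whose event count is > max_dev MADs from the median."""
    bins = np.arange(0, event_times_s.max() + bin_width_s, bin_width_s)
    counts, _ = np.histogram(event_times_s, bins=bins)
    median = np.median(counts)
    mad = np.median(np.abs(counts - median)) or 1.0
    return np.where(np.abs(counts - median) / mad > max_dev)[0]

rng = np.random.default_rng(1)
times = np.sort(rng.uniform(0, 60, 60_000))              # steady acquisition ...
times = np.concatenate([times, np.full(5_000, 30.5)])    # ... with a burst near 30 s
print(flag_flow_rate_anomalies(np.sort(times)))          # prints the bin containing the burst
```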

  13. Automatically Assessing Lexical Sophistication: Indices, Tools, Findings, and Application

    ERIC Educational Resources Information Center

    Kyle, Kristopher; Crossley, Scott A.

    2015-01-01

    This study explores the construct of lexical sophistication and its applications for measuring second language lexical and speaking proficiency. In doing so, the study introduces the Tool for the Automatic Analysis of LExical Sophistication (TAALES), which calculates text scores for 135 classic and newly developed lexical indices related to word…
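
    As a toy illustration of the kind of index such a tool computes, the sketch below scores a text by the mean log frequency of its words against a made-up reference frequency list; TAALES itself computes 135 far more elaborate indices, and the list here is purely an assumption.

```python
# One frequency-based lexical sophistication index (lower = rarer words).
import math, re

REFERENCE_FREQ = {"the": 50_000, "good": 3_000, "result": 800, "ubiquitous": 12, "meticulous": 9}

def mean_log_frequency(text, default_freq=1):
    words = re.findall(r"[a-z]+", text.lower())
    logs = [math.log(REFERENCE_FREQ.get(w, default_freq)) for w in words]
    return sum(logs) / len(logs) if logs else float("nan")

# Lower scores indicate rarer vocabulary, i.e. higher lexical sophistication.
print(mean_log_frequency("the result is good"))
print(mean_log_frequency("the ubiquitous result is meticulous"))
```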

  14. The Development of a Web-Based Assessment System to Identify Students' Misconception Automatically on Linear Kinematics with a Four-Tier Instrument Test

    ERIC Educational Resources Information Center

    Pujayanto, Pujayanto; Budiharti, Rini; Adhitama, Egy; Nuraini, Niken Rizky Amalia; Putri, Hanung Vernanda

    2018-01-01

    This research proposes the development of a web-based assessment system to identify students' misconception. The system, named WAS (web-based assessment system), can identify students' misconception profile on linear kinematics automatically after the student has finished the test. The test instrument was developed and validated. Items were…

  15. Automated content and quality assessment of full-motion-video for the generation of meta data

    NASA Astrophysics Data System (ADS)

    Harguess, Josh

    2015-05-01

    Virtually all of the video data (and full-motion-video (FMV)) that is currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems and unmanned aerial vehicles (UAVs) and similar sources is often of low quality or in other ways corrupted so that it is not worth storing or analyzing. In order to make progress in the problem of automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress to address three types of scenes which are typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produces video that is generally not usable to an analyst or automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit unwanted scenarios as described above. Results are shown on representative real-world video data.
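
    A hedged sketch of how dense optical flow can be used to tag such segments is given below, using OpenCV's Farneback method on two synthetic frames; the thresholds and tag names are assumptions, not the authors' pipeline.

```python
# Tag a frame pair as static, usable, or fast camera motion from mean flow magnitude.
import cv2
import numpy as np

def motion_tag(prev_gray, next_gray, low=0.5, high=5.0):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2).mean()
    if magnitude < low:
        return "little-or-no-motion"
    if magnitude > high:
        return "fast-camera-motion"
    return "usable"

rng = np.random.default_rng(2)
frame = cv2.GaussianBlur(rng.integers(0, 255, (240, 320), dtype=np.uint8), (9, 9), 0)
shifted = np.roll(frame, 12, axis=1)      # crude stand-in for a fast horizontal pan
print(motion_tag(frame, frame))           # -> little-or-no-motion
print(motion_tag(frame, shifted))         # typically flagged as fast-camera-motion
```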

  16. Quality Assessment of Internationalised Studies: Theory and Practice

    ERIC Educational Resources Information Center

    Juknyte-Petreikiene, Inga

    2013-01-01

    The article reviews forms of higher education internationalisation at an institutional level. The relevance of theoretical background of internationalised study quality assessment is highlighted and definitions of internationalised studies quality are presented. Existing methods of assessment of higher education internationalisation are criticised…

  17. Water quality assessment with hierarchical cluster analysis based on Mahalanobis distance.

    PubMed

    Du, Xiangjun; Shao, Fengjing; Wu, Shunyao; Zhang, Hanlin; Xu, Si

    2017-07-01

    Water quality assessment is crucial for assessment of marine eutrophication, prediction of harmful algal blooms, and environment protection. Previous studies have developed many numeric modeling methods and data-driven approaches for water quality assessment. Cluster analysis, an approach widely used for grouping data, has also been employed. However, there are complex correlations between water quality variables, which play important roles in water quality assessment but have always been overlooked. In this paper, we analyze correlations between water quality variables and propose an alternative method for water quality assessment with hierarchical cluster analysis based on Mahalanobis distance. Further, we cluster water quality data collected from coastal waters of the Bohai Sea and North Yellow Sea of China, and apply the clustering results to evaluate their water quality. To evaluate the validity, we also cluster the water quality data with cluster analysis based on Euclidean distance, which is widely adopted in previous studies. The results show that our method is more suitable for water quality assessment with many correlated water quality variables. To our knowledge, this is the first attempt to apply the Mahalanobis distance for coastal water quality assessment.
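
    A minimal sketch of hierarchical clustering under a Mahalanobis distance with SciPy is shown below; the synthetic matrix of samples by correlated water-quality variables is an assumption standing in for real monitoring data.

```python
# Hierarchical clustering of water-quality samples with a Mahalanobis distance.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
# 40 monitoring stations x 5 correlated water-quality variables (stand-in data).
base = rng.normal(size=(40, 5))
samples = base @ np.array([[1.0, 0.6, 0.3, 0.0, 0.0],
                           [0.0, 1.0, 0.5, 0.2, 0.0],
                           [0.0, 0.0, 1.0, 0.4, 0.1],
                           [0.0, 0.0, 0.0, 1.0, 0.3],
                           [0.0, 0.0, 0.0, 0.0, 1.0]])

# pdist estimates the inverse covariance from the data when VI is not supplied,
# so correlated variables are down-weighted rather than double-counted.
distances = pdist(samples, metric="mahalanobis")
tree = linkage(distances, method="average")
labels = fcluster(tree, t=3, criterion="maxclust")   # cut the dendrogram into 3 groups
print(labels)
```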

  18. Application of a newly developed software program for image quality assessment in cone-beam computed tomography.

    PubMed

    de Oliveira, Marcus Vinicius Linhares; Santos, António Carvalho; Paulo, Graciano; Campos, Paulo Sergio Flores; Santos, Joana

    2017-06-01

    The purpose of this study was to apply a newly developed free software program, at low cost and with minimal time, to evaluate the quality of dental and maxillofacial cone-beam computed tomography (CBCT) images. A polymethyl methacrylate (PMMA) phantom, CQP-IFBA, was scanned in 3 CBCT units with 7 protocols. A macro program was developed, using the free software ImageJ, to automatically evaluate the image quality parameters. The image quality evaluation was based on 8 parameters: uniformity, the signal-to-noise ratio (SNR), noise, the contrast-to-noise ratio (CNR), spatial resolution, the artifact index, geometric accuracy, and low-contrast resolution. The image uniformity and noise depended on the protocol that was applied. Regarding the CNR, high-density structures were more sensitive to the effect of scanning parameters. There were no significant differences between SNR and CNR in centered and peripheral objects. The geometric accuracy assessment showed that all the distance measurements were lower than the real values. Low-contrast resolution was influenced by the scanning parameters, and the 1-mm rod present in the phantom was not depicted in any of the 3 CBCT units. Smaller voxel sizes presented higher spatial resolution. There were no significant differences among the protocols regarding artifact presence. This software package provided a fast, low-cost, and feasible method for the evaluation of image quality parameters in CBCT.
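
    Two of the listed metrics (SNR and CNR) reduce to simple region-of-interest statistics; the sketch below re-expresses them with NumPy on a synthetic slice. The ROI positions and intensity values are assumptions, not the CQP-IFBA phantom layout.

```python
# SNR and CNR from ROI statistics on a synthetic CBCT-like slice.
import numpy as np

def snr(roi):
    return roi.mean() / roi.std(ddof=1)

def cnr(roi_object, roi_background):
    return abs(roi_object.mean() - roi_background.mean()) / roi_background.std(ddof=1)

rng = np.random.default_rng(4)
slice_img = rng.normal(100, 5, size=(256, 256))   # uniform PMMA-like region plus noise
slice_img[100:140, 100:140] += 60                 # a high-density insert

background = slice_img[10:50, 10:50]
insert = slice_img[105:135, 105:135]
print(f"SNR = {snr(background):.1f}, CNR = {cnr(insert, background):.1f}")
```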

  19. Assessing the effects of automatically delivered stimulation on the use of simple exercise tools by students with multiple disabilities.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Oliva, Doretta; Campodonico, Francesca; Groeneweg, Jop

    2003-01-01

    We assessed the effects of automatically delivered stimulation on the activity level and mood (indices of happiness) of three students with multiple disabilities during their use of a stepper and a stationary bicycle. The stimulation involved a pool of favorite stimulus events that were delivered automatically, through an electronic control system, while the students were active in using the aforementioned exercise tools. Data showed that stimulation had an overall positive impact, but this was not evident on both measures (i.e., level of activity and indices of happiness) or with both exercise tools across students. These findings are discussed in relation to the outcome of an earlier study in the area by the same authors and in terms of practical implications for daily contexts.

  20. Informatics: essential infrastructure for quality assessment and improvement in nursing.

    PubMed Central

    Henry, S B

    1995-01-01

    In recent decades there have been major advances in the creation and implementation of information technologies and in the development of measures of health care quality. The premise of this article is that informatics provides essential infrastructure for quality assessment and improvement in nursing. In this context, the term quality assessment and improvement comprises both short-term processes such as continuous quality improvement (CQI) and long-term outcomes management. This premise is supported by 1) presentation of a historical perspective on quality assessment and improvement; 2) delineation of the types of data required for quality assessment and improvement; and 3) description of the current and potential uses of information technology in the acquisition, storage, transformation, and presentation of quality data, information, and knowledge. PMID:7614118

  1. Set Up of an Automatic Water Quality Sampling System in Irrigation Agriculture

    PubMed Central

    Heinz, Emanuel; Kraft, Philipp; Buchen, Caroline; Frede, Hans-Georg; Aquino, Eugenio; Breuer, Lutz

    2014-01-01

    We have developed a high-resolution automatic sampling system for continuous in situ measurements of stable water isotopic composition and nitrogen solutes along with hydrological information. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring Down Spectrometry System (WS-CRDS) for stable water isotope analysis (δ2H and δ18O), a reagentless hyperspectral UV photometer (ProPS) for monitoring nitrate content and various water level sensors for hydrometric information. The automatic sampling system consists of different sampling stations equipped with pumps, a switch cabinet for valve and pump control and a computer operating the system. The complete system is operated via internet-based control software, allowing supervision from nearly anywhere. The system is currently set up at the International Rice Research Institute (Los Baños, The Philippines) in a diversified rice growing system to continuously monitor water and nutrient fluxes. Here we present the system's technical set-up and provide initial proof-of-concept with results for the isotopic composition of different water sources and nitrate values from the 2012 dry season. PMID:24366178

  2. Automatic Imitation

    ERIC Educational Resources Information Center

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  3. Quality assessment tools add value.

    PubMed

    Paul, L

    1996-10-01

    The rapid evolution of the health care marketplace can be expected to continue as we move closer to the 21st Century. Externally-imposed pressures for cost reduction will increasingly be accompanied by pressure within health care organizations as risk-sharing reimbursement arrangements become more commonplace. Competitive advantage will be available to those organizations that can demonstrate objective value as defined by the cost-quality equation. The tools an organization chooses to perform quality assessment will be an important factor in its ability to demonstrate such value. Traditional quality assurance will in all likelihood continue, but the extent to which quality improvement activities are adopted by the culture of an organization may determine its ability to provide objective evidence of better health status outcomes.

  4. Automatic Assessment of Acquisition and Transmission Losses in Indian Remote Sensing Satellite Data

    NASA Astrophysics Data System (ADS)

    Roy, D.; Purna Kumari, B.; Manju Sarma, M.; Aparna, N.; Gopal Krishna, B.

    2016-06-01

    The quality of Remote Sensing data is an important parameter that defines the extent of its usability in various applications. The data from Remote Sensing satellites are received as raw data frames at the ground station. These data may be corrupted by losses due to interference during data transmission and acquisition, and due to sensor anomalies. It is therefore important to assess the quality of the raw data before product generation for early anomaly detection, faster corrective action, and minimization of product rejection. Manual screening of raw images is a time-consuming process and not very accurate. In this paper, an automated process for identification and quantification of losses in raw data, such as pixel drop-out, line loss, and data loss due to sensor anomalies, is discussed. Quality assessment of raw scenes based on these losses is also explained. This process is introduced in the data pre-processing stage and gives crucial data-quality information to users at the time of browsing data for product ordering. It has also improved the product generation workflow by enabling faster and more accurate quality estimation.
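
    A hedged sketch of two such checks, counting dropped pixels and fully lost lines in a raw frame, is shown below; the fill value, thresholds, and synthetic frame are assumptions rather than the operational pipeline.

```python
# Count dropped pixels (assumed fill value 0) and fully lost lines in a raw frame.
import numpy as np

def raw_frame_quality(frame, fill_value=0):
    dropped = np.count_nonzero(frame == fill_value)
    lost_lines = int(np.sum(np.all(frame == fill_value, axis=1)))
    pct_loss = 100.0 * dropped / frame.size
    return {"pixel_dropout_pct": round(pct_loss, 2), "lost_lines": lost_lines}

rng = np.random.default_rng(5)
frame = rng.integers(1, 1024, size=(512, 512), dtype=np.uint16)  # stand-in 10-bit raw scene
frame[200:203, :] = 0        # simulate three lost lines
frame[300, 50:80] = 0        # simulate a short pixel drop-out burst
print(raw_frame_quality(frame))   # -> {'pixel_dropout_pct': 0.6, 'lost_lines': 3}
```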

  5. Automatic optical inspection system design for golf ball

    NASA Astrophysics Data System (ADS)

    Wu, Hsien-Huang; Su, Jyun-Wei; Chen, Chih-Lin

    2016-09-01

    With the growing popularity of golf around the world, the quantities of related products are increasing year by year. To drive innovation and quality improvement while reducing production cost, automation of manufacturing has become a necessary and important issue. This paper reflects this trend toward production automation. It uses AOI (Automated Optical Inspection) technology to develop a system that can automatically detect defects on golf balls. The current manual quality inspection is not only error-prone but also very manpower-demanding. Considering the competition in this industry in the near future, the development of related AOI equipment must be undertaken as soon as possible. Because of the strongly reflective surface of the ball, as well as its dimples and subtle flaws, it is very difficult to capture images of adequate quality for automatic inspection. Based on the surface properties and shape of the ball, the lighting environment and structure for image capture have been properly designed. Area-scan cameras are used to acquire images with good contrast between defects and background so that automatic defect detection on the golf ball can be achieved. The results show that more than 97% of the NG (defective) balls are detected, while the system maintains a false-alarm rate below 10%. Balls that the system determines to be NG are inspected again by a human inspector. As a result, the manpower spent on inspection has been reduced by roughly 90%.

  6. Water-Quality Assessment of the High Plains Aquifer, 1999-2004

    USGS Publications Warehouse

    McMahon, Peter B.; Dennehy, Kevin F.; Bruce, Breton W.; Gurdak, Jason J.; Qi, Sharon L.

    2007-01-01

    Water quality of the High Plains aquifer was assessed for the period 1999-2004 as part of the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program. This effort represents the first systematic regional assessment of water quality in this nationally important aquifer. A stratified, nested group of studies was designed to assess linkages between the quality of water recharging the aquifer, the effect of transport through the hydrologic system on water quality, and the quality of the resource used for human consumption and agricultural applications. The stratified, nested design facilitated upscaling of monitoring results to unmonitored areas of the aquifer as well as upscaling of process understanding from local to regional scales.

  7. Masked Priming Effects in Aphasia: Evidence for Altered Automatic Spreading Activation

    PubMed Central

    Silkes, JoAnn P.; Rogers, Margaret A.

    2015-01-01

    Purpose Previous research has suggested that impairments of automatic spreading activation may underlie some aphasic language deficits. This study further investigated the status of automatic spreading activation in individuals with aphasia as compared with typical adults. Method Participants were 21 individuals with aphasia (12 fluent, 9 non-fluent) and 31 typical adults. Reaction time data were collected on a lexical decision task with masked repetition primes, assessed at 11 different interstimulus intervals (ISIs). Masked primes were used to assess automatic spreading activation without the confound of conscious processing. The various ISIs were used to assess the time to onset, and duration, of priming effects. Results The control group showed maximal priming in the 200 ms ISI condition, with significant priming at a range of ISIs surrounding that peak. Participants with both fluent and non-fluent aphasia showed maximal priming effects in the 250 ms ISI condition, and primed across a smaller range of ISIs than the control group. Conclusions Results suggest that individuals with aphasia have slowed automatic spreading activation, and impaired maintenance of activation over time, regardless of fluency classification. These findings have implications for understanding aphasic language impairment and for the development of aphasia treatments designed to directly address automatic language processes. PMID:22411281

  8. A review of image quality assessment methods with application to computational photography

    NASA Astrophysics Data System (ADS)

    Maître, Henri

    2015-12-01

    Image quality assessment has been of major importance for several domains of the image industry, for instance restoration or communication and coding. New application fields are opening today with the increase of embedded computing power in the camera and the emergence of computational photography: automatic tuning, image selection, image fusion, image database building, etc. We review the literature on image quality evaluation. We pay attention to the very different underlying hypotheses and results of the existing methods for approaching the problem. We explain why they differ and for which applications they may be beneficial. We also underline their limits, especially for a possible use in the novel domain of computational photography. Having been developed to address different objectives, they propose answers on different aspects, which makes them sometimes complementary. However, they all remain limited in their capability to challenge the human expert, the said or unsaid ultimate goal. We consider the methods which are based on retrieving the parameters of a signal, mostly in spectral analysis; then we explore the more global methods that qualify image quality in terms of noticeable defects or degradation, as popular in the compression domain; in a third field, the image acquisition process is considered as a channel between the source and the receiver, allowing the use of information-theoretic tools and the qualification of the system in terms of entropy and information capacity. However, these different approaches hardly attack the most difficult part of the task, which is to measure the quality of a photograph in terms of aesthetic properties. To help address this problem, at the intersection of philosophy, biology and psychology, we propose a brief review of the literature on the problem of qualifying beauty, present the attempts to adapt these concepts to visual patterns, and initiate a reflection on what could be done in the field of photography.

  9. Methodological Quality Assessment of Meta-Analyses of Hyperthyroidism Treatment.

    PubMed

    Qin, Yahong; Yao, Liang; Shao, Feifei; Yang, Kehu; Tian, Limin

    2018-01-01

    Hyperthyroidism is a common condition that is associated with increased morbidity and mortality. A number of meta-analyses (MAs) have assessed the therapeutic measures for hyperthyroidism, including antithyroid drugs, surgery, and radioiodine; however, their methodological quality has not been evaluated. This study evaluated the methodological quality and summarized the evidence obtained from MAs of hyperthyroidism treatments for radioiodine, antithyroid drugs, and surgery. We searched the PubMed, EMBASE, Cochrane Library, Web of Science, and Chinese Biomedical Literature Database databases. Two investigators independently assessed the meta-analysis titles and abstracts for inclusion. Methodological quality was assessed using the validated AMSTAR (Assessing the Methodological Quality of Systematic Reviews) tool. A total of 26 MAs fulfilled the inclusion criteria. Based on the AMSTAR scores, the average methodological quality was 8.31, with large variability ranging from 4 to 11. The methodological quality of English meta-analyses was better than that of Chinese meta-analyses. Cochrane reviews had better methodological quality than non-Cochrane reviews due to better study selection and data extraction, the inclusion of unpublished studies, and better reporting of study characteristics. The authors did not report conflicts of interest in 53.8% of meta-analyses, and 19.2% did not report the harmful effects of treatment. Publication bias was not assessed in 38.5% of meta-analyses, and 19.2% did not report the follow-up time. Large-scale assessment of the methodological quality of meta-analyses of hyperthyroidism treatment highlighted methodological strengths and weaknesses. Consideration of scientific quality when formulating conclusions should be made explicit. Future meta-analyses should improve the reporting of conflicts of interest. © Georg Thieme Verlag KG Stuttgart · New York.

  10. SNPflow: A Lightweight Application for the Processing, Storing and Automatic Quality Checking of Genotyping Assays

    PubMed Central

    Schönherr, Sebastian; Neuner, Mathias; Forer, Lukas; Specht, Günther; Kloss-Brandstätter, Anita; Kronenberg, Florian; Coassin, Stefan

    2013-01-01

    Single nucleotide polymorphisms (SNPs) play a prominent role in modern genetics. Current genotyping technologies such as Sequenom iPLEX, ABI TaqMan and KBioscience KASPar made the genotyping of huge SNP sets in large populations straightforward and allow the generation of hundreds of thousands of genotypes even in medium sized labs. While data generation is straightforward, the subsequent data conversion, storage and quality control steps are time-consuming, error-prone and require extensive bioinformatic support. In order to ease this tedious process, we developed SNPflow. SNPflow is a lightweight, intuitive and easily deployable application, which processes genotype data from Sequenom MassARRAY (iPLEX) and ABI 7900HT (TaqMan, KASPar) systems and is extendible to other genotyping methods as well. SNPflow automatically converts the raw output files to ready-to-use genotype lists, calculates all standard quality control values such as call rate, expected and real amount of replicates, minor allele frequency, absolute number of discordant replicates, discordance rate and the p-value of the HWE test, checks the plausibility of the observed genotype frequencies by comparing them to HapMap/1000-Genomes, provides a module for the processing of SNPs, which allow sex determination for DNA quality control purposes and, finally, stores all data in a relational database. SNPflow runs on all common operating systems and comes as both stand-alone version and multi-user version for laboratory-wide use. The software, a user manual, screenshots and a screencast illustrating the main features are available at http://genepi-snpflow.i-med.ac.at. PMID:23527209
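
    For illustration, the per-SNP quality-control values listed above can be reproduced in a few lines. This is a generic sketch, not SNPflow code; it assumes biallelic genotype calls coded as 'AA', 'AB', 'BB', with None for no-calls, and uses the standard one-degree-of-freedom chi-square test for Hardy-Weinberg equilibrium.

        from collections import Counter
        from scipy.stats import chi2

        def snp_qc(calls):
            """Call rate, minor allele frequency and HWE p-value for one biallelic SNP."""
            counts = Counter(c for c in calls if c is not None)
            n = sum(counts.values())
            call_rate = n / len(calls)
            p = (2 * counts["AA"] + counts["AB"]) / (2.0 * n)      # frequency of allele A
            maf = min(p, 1 - p)
            observed = [counts["AA"], counts["AB"], counts["BB"]]
            expected = [p * p * n, 2 * p * (1 - p) * n, (1 - p) * (1 - p) * n]
            stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)
            hwe_p = chi2.sf(stat, df=1)                            # 1 df for a biallelic marker
            return call_rate, maf, hwe_p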

  11. Semi-automatic detection of Gd-DTPA-saline filled capsules for colonic transit time assessment in MRI

    NASA Astrophysics Data System (ADS)

    Harrer, Christian; Kirchhoff, Sonja; Keil, Andreas; Kirchhoff, Chlodwig; Mussack, Thomas; Lienemann, Andreas; Reiser, Maximilian; Navab, Nassir

    2008-03-01

    Functional gastrointestinal disorders result in a significant number of consultations in primary care facilities. Chronic constipation and diarrhea are regarded as two of the most common diseases affecting between 2% and 27% of the population in western countries 1-3. Defecatory disorders are most commonly due to dysfunction of the pelvic floor or the anal sphincter. Although an exact differentiation of these pathologies is essential for adequate therapy, diagnosis is still only based on a clinical evaluation1. Regarding quantification of constipation only the ingestion of radio-opaque markers or radioactive isotopes and the consecutive assessment of colonic transit time using X-ray or scintigraphy, respectively, has been feasible in clinical settings 4-8. However, these approaches have several drawbacks such as involving rather inconvenient, time consuming examinations and exposing the patient to ionizing radiation. Therefore, conventional assessment of colonic transit time has not been widely used. Most recently a new technique for the assessment of colonic transit time using MRI and MR-contrast media filled capsules has been introduced 9. However, due to numerous examination dates per patient and corresponding datasets with many images, the evaluation of the image data is relatively time-consuming. The aim of our study was to develop a computer tool to facilitate the detection of the capsules in MRI datasets and thus to shorten the evaluation time. We present a semi-automatic tool which provides an intensity, size 10, and shape-based 11,12 detection of ingested Gd-DTPA-saline filled capsules. After an automatic pre-classification, radiologists may easily correct the results using the application-specific user interface, therefore decreasing the evaluation time significantly.

  12. Automatic Temporal Tracking of Supra-Glacial Lakes

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Lv, Q.; Gallaher, D. W.; Fanning, D.

    2010-12-01

    During recent years, supra-glacial lakes in Greenland have attracted extensive global attention as they potentially play an important role in glacier movement, sea level rise, and climate change. Previous work focused on classification methods and individual cloud-free satellite images, which have limited capabilities for tracking changes of lakes over time. The challenges of tracking supra-glacial lakes automatically include (1) the massive amount of satellite images with diverse qualities and frequent cloud coverage, and (2) the diversity and dynamics of the large number of supra-glacial lakes on the Greenland ice sheet. In this study, we develop an innovative method to automatically track supra-glacial lakes temporally using Moderate Resolution Imaging Spectroradiometer (MODIS) time-series data. The method works for both cloudy and cloud-free data and is unsupervised, i.e., no manual identification is required. After selecting the highest-quality image within each time interval, our method automatically detects supra-glacial lakes in individual images, using adaptive thresholding to handle diverse image qualities. We then track lakes across the time series of images as lakes appear, change in size, and disappear. Using multi-year MODIS data from the melting season, we demonstrate that this new method can detect and track supra-glacial lakes in both space and time with 95% accuracy. An accompanying figure (not reproduced here) illustrates a partially cloudy MODIS scene centered on the Jakobshavn Isbrae glacier in west Greenland, with lakes appearing as dark spots and detected lakes marked in red; a detailed analysis of the temporal variation of detected lakes will be presented.
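
    A minimal sketch of the adaptive-thresholding step (not the authors' code): it flags pixels of a single MODIS band that are darker than their local neighbourhood and keeps connected components above a minimum size, assuming lakes appear as dark spots; cloud masking and temporal tracking are omitted.

        import numpy as np
        from scipy import ndimage

        def detect_dark_lakes(band, block=25, offset=0.05, min_pixels=4):
            """Binary mask of lake candidates: locally dark, connected regions."""
            local_mean = ndimage.uniform_filter(band.astype(float), size=block)
            candidate = band < (local_mean - offset)        # adaptive (local) threshold
            labels, n = ndimage.label(candidate)
            sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
            keep_ids = np.where(np.asarray(sizes) >= min_pixels)[0] + 1
            return np.isin(labels, keep_ids)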

  13. Threshold automatic selection hybrid phase unwrapping algorithm for digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Zhou, Meiling; Min, Junwei; Yao, Baoli; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan

    2015-01-01

    The conventional quality-guided (QG) phase unwrapping algorithm is difficult to apply to digital holographic microscopy because of its long execution time. In this paper, we present a threshold automatic selection hybrid phase unwrapping algorithm that combines the existing QG algorithm with the flood-fill (FF) algorithm to solve this problem. The original wrapped phase map is divided into high- and low-quality sub-maps by selecting a threshold automatically, and the FF and QG unwrapping algorithms are then used to unwrap the phase of the two levels, respectively. The feasibility of the proposed method is demonstrated by experimental results, and the execution speed is shown to be much faster than that of the original QG unwrapping algorithm.
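
    A minimal sketch of the first step described above, assuming the quality map is a simple inverse of the local wrapped-phase gradient magnitude and the threshold is selected automatically with Otsu's method; the subsequent FF and QG unwrapping passes are not reproduced here.

        import numpy as np
        from skimage.filters import threshold_otsu

        def split_by_quality(wrapped_phase):
            """Partition a wrapped phase map into high- and low-quality sub-maps."""
            dy, dx = np.gradient(wrapped_phase)
            quality = 1.0 / (1.0 + np.hypot(dx, dy))   # crude quality map: smooth phase -> high quality
            t = threshold_otsu(quality)                # automatic threshold selection
            high = quality >= t
            return high, ~high, t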

  14. Academics' Perceptions on the Purposes of Quality Assessment

    ERIC Educational Resources Information Center

    Rosa, Maria J.; Sarrico, Claudia S.; Amaral, Alberto

    2012-01-01

    The accountability versus improvement debate is an old one. Although being traditionally considered dichotomous purposes of higher education quality assessment, some authors defend the need of balancing both in quality assessment systems. This article goes a step further and contends that not only they should be balanced but also that other…

  15. Automatically Grading Customer Confidence in a Formal Specification.

    ERIC Educational Resources Information Center

    Shukur, Zarina; Burke, Edmund; Foxley, Eric

    1999-01-01

    Describes an automatic grading system for a formal methods computer science course that is able to evaluate a formal specification written in the Z language. Quality is measured by considering first, specification correctness (syntax, semantics, and satisfaction of customer requirements), and second, specification maintainability (comparison of…

  16. A framework for assessing Health Economic Evaluation (HEE) quality appraisal instruments.

    PubMed

    Langer, Astrid

    2012-08-16

    Health economic evaluations support the health care decision-making process by providing information on costs and consequences of health interventions. The quality of such studies is assessed by health economic evaluation (HEE) quality appraisal instruments. At present, there is no instrument for measuring and improving the quality of such HEE quality appraisal instruments. Therefore, the objectives of this study are to establish a framework for assessing the quality of HEE quality appraisal instruments to support and improve their quality, and to apply this framework to those HEE quality appraisal instruments which have been subject to more scrutiny than others, in order to test the framework and to demonstrate the shortcomings of existing HEE quality appraisal instruments. To develop the quality assessment framework for HEE quality appraisal instruments, the experiences of using appraisal tools for clinical guidelines are used. Based on a deductive iterative process, clinical guideline appraisal instruments identified through literature search are reviewed, consolidated, and adapted to produce the final quality assessment framework for HEE quality appraisal instruments. The final quality assessment framework for HEE quality appraisal instruments consists of 36 items organized within 7 dimensions, each of which captures a specific domain of quality. Applying the quality assessment framework to four existing HEE quality appraisal instruments, it is found that these four quality appraisal instruments are of variable quality. The framework described in this study should be regarded as a starting point for appraising the quality of HEE quality appraisal instruments. This framework can be used by HEE quality appraisal instrument producers to support and improve the quality and acceptance of existing and future HEE quality appraisal instruments. By applying this framework, users of HEE quality appraisal instruments can become aware of methodological deficiencies

  17. Automatic stereotyping against people with schizophrenia, schizoaffective and affective disorders

    PubMed Central

    Rüsch, Nicolas; Corrigan, Patrick W.; Todd, Andrew R.; Bodenhausen, Galen V.

    2010-01-01

    Similar to members of the public, people with mental illness may exhibit general negative automatic prejudice against their own group. However, it is unclear whether more specific negative stereotypes are automatically activated among diagnosed individuals and how such automatic stereotyping may be related to self-reported attitudes and emotional reactions. We therefore studied automatically activated reactions toward mental illness among 85 people with schizophrenia, schizoaffective or affective disorders as well as among 50 members of the general public, using a Lexical Decision Task to measure automatic stereotyping. Deliberately endorsed attitudes and emotional reactions were assessed by self-report. Independent of diagnosis, people with mental illness showed less negative automatic stereotyping than did members of the public. Among members of the public, stronger automatic stereotyping was associated with more self-reported shame about a potential mental illness and more anger toward stigmatized individuals. Reduced automatic stereotyping in the diagnosed group suggests that people with mental illness might not entirely internalize societal stigma. Among members of the public, automatic stereotyping predicted negative emotional reactions to people with mental illness. Initiatives to reduce the impact of public stigma and internalized stigma should take automatic stereotyping and related emotional aspects of stigma into account. PMID:20843560

  18. Objective assessment of MPEG-2 video quality

    NASA Astrophysics Data System (ADS)

    Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano

    2002-07-01

    The increasing use of video compression standards in broadcast television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams by using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations of actual scoring curves for real test videos.
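
    The feature-to-score mapping can be illustrated with a small feedforward regressor. Note that this is only a stand-in sketch: the paper uses circular back-propagation networks and real features extracted from the compressed stream, whereas the feature matrix and scores below are random placeholders.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X_train = rng.random((200, 3))     # placeholder objective features (e.g. bit rate, quantiser scale, motion)
        y_train = rng.random(200)          # placeholder subjective quality scores in [0, 1]

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X_train, y_train)
        predicted_quality = model.predict(rng.random((5, 3)))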

  19. CT-based patient modeling for head and neck hyperthermia treatment planning: manual versus automatic normal-tissue-segmentation.

    PubMed

    Verhaart, René F; Fortunati, Valerio; Verduijn, Gerda M; van Walsum, Theo; Veenland, Jifke F; Paulides, Margarethus M

    2014-04-01

    Clinical trials have shown that hyperthermia, as adjuvant to radiotherapy and/or chemotherapy, improves treatment of patients with locally advanced or recurrent head and neck (H&N) carcinoma. Hyperthermia treatment planning (HTP) guided H&N hyperthermia is being investigated, which requires patient specific 3D patient models derived from Computed Tomography (CT)-images. To decide whether a recently developed automatic-segmentation algorithm can be introduced in the clinic, we compared the impact of manual- and automatic normal-tissue-segmentation variations on HTP quality. CT images of seven patients were segmented automatically and manually by four observers, to study inter-observer and intra-observer geometrical variation. To determine the impact of this variation on HTP quality, HTP was performed using the automatic and manual segmentation of each observer, for each patient. This impact was compared to other sources of patient model uncertainties, i.e. varying gridsizes and dielectric tissue properties. Despite geometrical variations, manual and automatic generated 3D patient models resulted in an equal, i.e. 1%, variation in HTP quality. This variation was minor with respect to the total of other sources of patient model uncertainties, i.e. 11.7%. Automatically generated 3D patient models can be introduced in the clinic for H&N HTP. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Trabecular bone analysis in CT and X-ray images of the proximal femur for the assessment of local bone quality.

    PubMed

    Fritscher, Karl; Grunerbl, Agnes; Hanni, Markus; Suhm, Norbert; Hengg, Clemens; Schubert, Rainer

    2009-10-01

    Currently, conventional X-ray and CT images, as well as invasive methods performed during the surgical intervention, are used to judge the local quality of a fractured proximal femur. However, these approaches are either dependent on the surgeon's experience or cannot assist diagnostic and planning tasks preoperatively. Therefore, in this work a method for the individual analysis of local bone quality in the proximal femur, based on model-based analysis of CT and X-ray images of femur specimens, is proposed. A combined representation of the shape and spatial intensity distribution of an object, together with different statistical approaches for dimensionality reduction, is used to create a statistical appearance model in order to assess local bone quality in CT and X-ray images. The developed algorithms are tested and evaluated on 28 femur specimens. It is shown that the tools and algorithms presented herein are highly adequate for automatically and objectively predicting bone mineral density values as well as a biomechanical parameter of the bone that can be measured intraoperatively.

  1. Health on the Net Foundation: assessing the quality of health web pages all over the world.

    PubMed

    Boyer, Célia; Gaudinat, Arnaud; Baujard, Vincent; Geissbühler, Antoine

    2007-01-01

    The Internet provides a great amount of information and has become one of the communication media which is most widely used [1]. However, the problem is no longer finding information but assessing the credibility of the publishers as well as the relevance and accuracy of the documents retrieved from the web. This problem is particularly relevant in the medical area which has a direct impact on the well-being of citizens. In this paper, we assume that the quality of web pages can be controlled, even when a huge amount of documents has to be reviewed. But this must be supported by both specific automatic tools and human expertise. In this context, we present various initiatives of the Health on the Net Foundation informing the citizens about the reliability of the medical content on the web.

  2. Development and Validation of Assessing Quality Teaching Rubrics

    ERIC Educational Resources Information Center

    Chen, Weiyun; Mason, Steve; Hammond-Bennett, Austin; Zlamout, Sandy

    2014-01-01

    Purpose: This study aimed at examining the psychometric properties of the Assessing Quality Teaching Rubric (AQTR) that was designed to assess in-service teachers' quality levels of teaching practices in daily lessons. Methods: 45 physical education lessons taught by nine physical education teachers to students in grades K-5 were videotaped. They…

  3. Quality Assessment in the Blog Space

    ERIC Educational Resources Information Center

    Schaal, Markus; Fidan, Guven; Muller, Roland M.; Dagli, Orhan

    2010-01-01

    Purpose: The purpose of this paper is the presentation of a new method for blog quality assessment. The method uses the temporal sequence of link creation events between blogs as an implicit source for the collective tacit knowledge of blog authors about blog quality. Design/methodology/approach: The blog data are processed by the novel method for…

  4. Sensible organizations: technology and methodology for automatically measuring organizational behavior.

    PubMed

    Olguin Olguin, Daniel; Waber, Benjamin N; Kim, Taemie; Mohan, Akshay; Ara, Koji; Pentland, Alex

    2009-02-01

    We present the design, implementation, and deployment of a wearable computing platform for measuring and analyzing human behavior in organizational settings. We propose the use of wearable electronic badges capable of automatically measuring the amount of face-to-face interaction, conversational time, physical proximity to other people, and physical activity levels in order to capture individual and collective patterns of behavior. Our goal is to be able to understand how patterns of behavior shape individuals and organizations. By using on-body sensors in large groups of people for extended periods of time in naturalistic settings, we have been able to identify, measure, and quantify social interactions, group behavior, and organizational dynamics. We deployed this wearable computing platform in a group of 22 employees working in a real organization over a period of one month. Using these automatic measurements, we were able to predict employees' self-assessments of job satisfaction and their own perceptions of group interaction quality by combining data collected with our platform and e-mail communication data. In particular, the total amount of communication was predictive of both of these assessments, and betweenness in the social network exhibited a high negative correlation with group interaction satisfaction. We also found that physical proximity and e-mail exchange had a negative correlation of r = -0.55 (p < 0.01), which has far-reaching implications for past and future research on social networks.

  5. Key Elements for Judging the Quality of a Risk Assessment

    PubMed Central

    Fenner-Crisp, Penelope A.; Dellarco, Vicki L.

    2016-01-01

    Background: Many reports have been published that contain recommendations for improving the quality, transparency, and usefulness of decision making for risk assessments prepared by agencies of the U.S. federal government. A substantial measure of consensus has emerged regarding the characteristics that high-quality assessments should possess. Objective: The goal was to summarize the key characteristics of a high-quality assessment as identified in the consensus-building process and to integrate them into a guide for use by decision makers, risk assessors, peer reviewers and other interested stakeholders to determine if an assessment meets the criteria for high quality. Discussion: Most of the features cited in the guide are applicable to any type of assessment, whether it encompasses one, two, or all four phases of the risk-assessment paradigm; whether it is qualitative or quantitative; and whether it is screening level or highly sophisticated and complex. Other features are tailored to specific elements of an assessment. Just as agencies at all levels of government are responsible for determining the effectiveness of their programs, so too should they determine the effectiveness of their assessments used in support of their regulatory decisions. Furthermore, if a nongovernmental entity wishes to have its assessments considered in the governmental regulatory decision-making process, then these assessments should be judged in the same rigorous manner and be held to similar standards. Conclusions: The key characteristics of a high-quality assessment can be summarized and integrated into a guide for judging whether an assessment possesses the desired features of high quality, transparency, and usefulness. Citation: Fenner-Crisp PA, Dellarco VL. 2016. Key elements for judging the quality of a risk assessment. Environ Health Perspect 124:1127–1135; http://dx.doi.org/10.1289/ehp.1510483 PMID:26862984

  6. Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images

    NASA Astrophysics Data System (ADS)

    Kamble, V. M.; Bhurchandi, K.

    2018-03-01

    Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients and then obtain a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of ±4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield the noise estimate and image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
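
    The initial wavelet-median estimate described above is essentially the classical MAD rule applied to the diagonal detail coefficients. A minimal sketch follows; the curve-fitting refinement and the DMOS mapping from the paper are not reproduced.

        import numpy as np
        import pywt

        def estimate_noise_sigma(gray):
            """Robust estimate of the Gaussian noise standard deviation in a grayscale image."""
            _, (_, _, diag) = pywt.dwt2(gray.astype(float), 'db1')   # one-level 2-D DWT, diagonal detail band
            return np.median(np.abs(diag)) / 0.6745                  # MAD-based sigma estimate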

  7. MRIQC: Advancing the automatic prediction of image quality in MRI from unseen sites

    PubMed Central

    2017-01-01

    Quality control of MRI is essential for excluding problematic acquisitions and avoiding bias in subsequent image processing and analysis. Visual inspection is subjective and impractical for large-scale datasets. Although automated quality assessments have been demonstrated on single-site datasets, it is unclear whether solutions can generalize to unseen data acquired at new sites. Here, we introduce the MRI Quality Control tool (MRIQC), a tool for extracting quality measures and fitting a binary (accept/exclude) classifier. Our tool can be run both locally and as a free online service via the OpenNeuro.org portal. The classifier is trained on a publicly available, multi-site dataset (17 sites, N = 1102). We perform model selection evaluating different normalization and feature exclusion approaches aimed at maximizing across-site generalization, and estimate an accuracy of 76%±13% on new sites using leave-one-site-out cross-validation. We confirm that result on a held-out dataset (2 sites, N = 265), also obtaining a 76% accuracy. Even though the performance of the trained classifier is statistically above chance, we show that it is susceptible to site effects and unable to account for artifacts specific to new sites. MRIQC performs with high accuracy in intra-site prediction, but performance on unseen sites leaves room for improvement, which might require more labeled data and new approaches to the between-site variability. Overcoming these limitations is crucial for a more objective quality assessment of neuroimaging data, and to enable the analysis of extremely large and multi-site samples. PMID:28945803
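
    The leave-one-site-out evaluation reported above can be sketched with scikit-learn's LeaveOneGroupOut splitter. The classifier, feature matrix and labels below are placeholders, not MRIQC's actual model or image quality metrics.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.random((300, 14))                 # one row of quality measures per scan (placeholder)
        y = rng.integers(0, 2, size=300)          # accept/exclude labels (placeholder)
        sites = rng.integers(0, 10, size=300)     # acquisition site of each scan, used as the group

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, groups=sites, cv=LeaveOneGroupOut())
        print(f"accuracy on held-out sites: {scores.mean():.2f} +/- {scores.std():.2f}")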

  8. What Information Does Your EHR Contain? Automatic Generation of a Clinical Metadata Warehouse (CMDW) to Support Identification and Data Access Within Distributed Clinical Research Networks.

    PubMed

    Bruland, Philipp; Doods, Justin; Storck, Michael; Dugas, Martin

    2017-01-01

    Data dictionaries provide structural meta-information about data definitions in health information technology (HIT) systems. In this regard, reusing healthcare data for secondary purposes offers several advantages (e.g. reduced documentation times or increased data quality). Prerequisites for data reuse are data quality, availability and identical meaning. In diverse projects, research data warehouses serve as core components between heterogeneous clinical databases and various research applications. Given the complexity (high number of data elements) and dynamics (regular updates) of electronic health record (EHR) data structures, we propose a clinical metadata warehouse (CMDW) based on a metadata registry standard. Metadata of two large hospitals were automatically inserted into two CMDWs containing 16,230 forms and 310,519 data elements. Automatic updates of metadata are possible, as are semantic annotations. A CMDW allows metadata discovery, data quality assessment and similarity analyses. Common data models for distributed research networks can be established based on similarity analyses.

  9. Evaluation of Automatic Vehicle Location accuracy

    DOT National Transportation Integrated Search

    1999-01-01

    This study assesses the accuracy of the Automatic Vehicle Location (AVL) data provided for the buses of the Ann Arbor Transportation Authority with Global Positioning System (GPS) technology. In a sample of eighty-nine bus trips two kinds of accuracy...

  10. Published methodological quality of randomized controlled trials does not reflect the actual quality assessed in protocols

    PubMed Central

    Mhaskar, Rahul; Djulbegovic, Benjamin; Magazin, Anja; Soares, Heloisa P.; Kumar, Ambuj

    2011-01-01

    Objectives To assess whether reported methodological quality of randomized controlled trials (RCTs) reflect the actual methodological quality, and to evaluate the association of effect size (ES) and sample size with methodological quality. Study design Systematic review Setting Retrospective analysis of all consecutive phase III RCTs published by 8 National Cancer Institute Cooperative Groups until year 2006. Data were extracted from protocols (actual quality) and publications (reported quality) for each study. Results 429 RCTs met the inclusion criteria. Overall reporting of methodological quality was poor and did not reflect the actual high methodological quality of RCTs. The results showed no association between sample size and actual methodological quality of a trial. Poor reporting of allocation concealment and blinding exaggerated the ES by 6% (ratio of hazard ratio [RHR]: 0.94, 95%CI: 0.88, 0.99) and 24% (RHR: 1.24, 95%CI: 1.05, 1.43), respectively. However, actual quality assessment showed no association between ES and methodological quality. Conclusion The largest study to-date shows poor quality of reporting does not reflect the actual high methodological quality. Assessment of the impact of quality on the ES based on reported quality can produce misleading results. PMID:22424985

  11. System for Automatic Generation of Examination Papers in Discrete Mathematics

    ERIC Educational Resources Information Center

    Fridenfalk, Mikael

    2013-01-01

    A system was developed for automatic generation of problems and solutions for examinations in a university distance course in discrete mathematics and tested in a pilot experiment involving 200 students. Considering the success of such systems in the past, particularly including automatic assessment, it should not take long before such systems are…

  12. Automatic tube potential selection with tube current modulation in coronary CT angiography: Can it achieve consistent image quality among various individuals?

    PubMed

    Wang, Xiao-Ping; Zhu, Xiao-Mei; Zhu, Yin-Su; Liu, Wang-Yan; Yang, Xiao-Han; Huang, Wei-Wei; Xu, Yi; Tang, Li-Jun

    2018-07-01

    The present study included a total of 111 consecutive patients who had undergone coronary computed tomography (CT) angiography, using a first-generation dual-source CT with automatic tube potential selection and tube current modulation. Body weight (BW) and body mass index (BMI) were recorded prior to CT examinations. Image noise and attenuation of the proximal ascending aorta (AA) and descending aorta (DA) at the middle level of the left ventricle were measured. Correlations between BW, BMI and objective image quality were evaluated using linear regression. In addition, two subgroups based on BMI (BMI ≤25 and >25 kg/m²) were analyzed. Subjective image quality, image noise, the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) were all compared between these. The image noise of the AA increased with BW and BMI (BW: r=0.453, P<0.001; BMI: r=0.545, P<0.001). The CNR and SNR of the AA were inversely correlated with BW and BMI, respectively. The image noise, CNR and SNR of the DA exhibited similar associations with BW and BMI. The BMI >25 kg/m² group had a significant increase in image noise (33.1±6.9 vs. 27.8±4.0 HU, P<0.05) and a significant reduction in CNR and SNR, when compared with the BMI ≤25 kg/m² group (CNR: 18.9±4.3 vs. 16.1±3.7, P<0.05; SNR: 16.0±3.8 vs. 13.6±3.2, P<0.05). Patients with a BMI of ≤25 kg/m² had more coronary artery segments scored as excellent, compared with patients with a BMI of >25 kg/m² (P=0.02). In conclusion, this method is not able to achieve a consistent objective image quality across the entire patient population. The impact of BW and BMI on objective image quality was not completely eliminated. BMI-based adjustment of the tube potential may achieve a more consistent image quality compared to automatic tube potential selection, particularly in patients with a larger body habitus.
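
    The objective measures above reduce to simple region-of-interest arithmetic. A minimal sketch follows, with the caveat that the ROI placement and the exact noise definition used in the study may differ.

        import numpy as np

        def snr_cnr(image_hu, aorta_mask, muscle_mask):
            """SNR and CNR from CT attenuation values (HU) in two regions of interest."""
            signal = image_hu[aorta_mask].mean()        # mean enhancement in the aortic ROI
            reference = image_hu[muscle_mask].mean()    # reference tissue ROI
            noise = image_hu[aorta_mask].std()          # image noise taken as the SD within the ROI
            return signal / noise, (signal - reference) / noise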

  13. Detection technology research on the one-way clutch of automatic brake adjuster

    NASA Astrophysics Data System (ADS)

    Jiang, Wensong; Luo, Zai; Lu, Yi

    2013-10-01

    In this article, we provide a new testing method to evaluate the acceptable quality of the one-way clutch of an automatic brake adjuster. To analyze the suitable adjusting brake moment that keeps the automatic brake adjuster out of failure, we build a mechanical model of the one-way clutch according to its structure and working principle. The ranges of the adjusting brake moment, both clockwise and anti-clockwise, can be calculated from this mechanical model. Its critical moments, in turn, are taken as the ideal values of the adjusting brake moment against which the acceptable quality of the one-way clutch of the automatic brake adjuster is evaluated. We calculate the ideal values of the critical moment for the different structures of the one-way clutch, based on its mechanical model, before the adjusting brake moment test begins. In addition, an experimental apparatus, whose measurement uncertainty is ±0.1 Nm, is specially designed to test the adjusting brake moment both clockwise and anti-clockwise. Then we can judge the acceptable quality of the one-way clutch of the automatic brake adjuster by comparing the test results with the ideal values instead of the EXP. In fact, the evaluation standard of the adjusting brake moment applied in this project still uses the EXP provided by the manufacturer, as is currently common in China, but that standard becomes unavailable when the material of the one-way clutch changes. Five kinds of automatic brake adjusters are used in the verification experiment to verify the accuracy of the test method. The experimental results show that the experimental values of the adjusting brake moment, both clockwise and anti-clockwise, are within the ranges of the theoretical results. The testing method provided by this article therefore meets the requirements of the manufacturer's standard.

  14. A framework for assessing Health Economic Evaluation (HEE) quality appraisal instruments

    PubMed Central

    2012-01-01

    Background Health economic evaluations support the health care decision-making process by providing information on costs and consequences of health interventions. The quality of such studies is assessed by health economic evaluation (HEE) quality appraisal instruments. At present, there is no instrument for measuring and improving the quality of such HEE quality appraisal instruments. Therefore, the objectives of this study are to establish a framework for assessing the quality of HEE quality appraisal instruments to support and improve their quality, and to apply this framework to those HEE quality appraisal instruments which have been subject to more scrutiny than others, in order to test the framework and to demonstrate the shortcomings of existing HEE quality appraisal instruments. Methods To develop the quality assessment framework for HEE quality appraisal instruments, the experiences of using appraisal tools for clinical guidelines are used. Based on a deductive iterative process, clinical guideline appraisal instruments identified through literature search are reviewed, consolidated, and adapted to produce the final quality assessment framework for HEE quality appraisal instruments. Results The final quality assessment framework for HEE quality appraisal instruments consists of 36 items organized within 7 dimensions, each of which captures a specific domain of quality. Applying the quality assessment framework to four existing HEE quality appraisal instruments, it is found that these four quality appraisal instruments are of variable quality. Conclusions The framework described in this study should be regarded as a starting point for appraising the quality of HEE quality appraisal instruments. This framework can be used by HEE quality appraisal instrument producers to support and improve the quality and acceptance of existing and future HEE quality appraisal instruments. By applying this framework, users of HEE quality appraisal instruments can become aware

  15. Automatic assessment of functional health decline in older adults based on smart home data.

    PubMed

    Alberdi Aramendi, Ane; Weakley, Alyssa; Aztiria Goenaga, Asier; Schmitter-Edgecombe, Maureen; Cook, Diane J

    2018-05-01

    In the context of an aging population, tools that help the elderly live independently must be developed. The goal of this paper is to evaluate the possibility of using unobtrusively collected, activity-aware smart home behavioral data to automatically detect one of the most common consequences of aging: functional health decline. After gathering the longitudinal smart home data of 29 older adults for an average of more than 2 years, we automatically labeled the data with corresponding activity classes and extracted time-series statistics containing 10 behavioral features. Using these data, we created regression models to predict absolute and standardized functional health scores, as well as classification models to detect reliable absolute change and positive and negative fluctuations in everyday functioning. Functional health was assessed every six months by means of the Instrumental Activities of Daily Living-Compensation (IADL-C) scale. Results show that the total IADL-C score and subscores, as well as reliable change in these scores, can be predicted by means of activity-aware smart home data. Positive and negative fluctuations in everyday functioning are harder to detect using in-home behavioral data, yet changes in social skills have been shown to be predictable. Future work must focus on improving the sensitivity of the presented models and performing an in-depth feature selection to improve overall accuracy. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. A review of data quality assessment methods for public health information systems.

    PubMed

    Chen, Hong; Hailey, David; Wang, Ning; Yu, Ping

    2014-05-14

    High quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. The relevant study was identified in major databases and well-known institutional websites. We found the dimension of data was most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interview and documentation review. The limitations of the reviewed studies included inattentiveness to data use and data collection process, inconsistency in the definition of attributes of data quality, failure to address data users' concerns and a lack of systematic procedures in data quality assessment. This review study is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research efforts should be given to assess the quality of data use and the quality of data collection process.

  17. A Review of Data Quality Assessment Methods for Public Health Information Systems

    PubMed Central

    Chen, Hong; Hailey, David; Wang, Ning; Yu, Ping

    2014-01-01

    High quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. The relevant study was identified in major databases and well-known institutional websites. We found the dimension of data was most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interview and documentation review. The limitations of the reviewed studies included inattentiveness to data use and data collection process, inconsistency in the definition of attributes of data quality, failure to address data users’ concerns and a lack of systematic procedures in data quality assessment. This review study is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research efforts should be given to assess the quality of data use and the quality of data collection process. PMID:24830450

  18. International drug price comparisons: quality assessment.

    PubMed

    Machado, Márcio; O'Brodovich, Ryan; Krahn, Murray; Einarson, Thomas R

    2011-01-01

    To quantitatively summarize results (i.e., prices and affordability) reported from international drug price comparison studies and assess their methodological quality. A systematic search of the most relevant databases (Medline, Embase, International Pharmaceutical Abstracts (IPA), and Scopus), from their inception to May 2009, was conducted to identify original research comparing international drug prices. International drug price information was extracted and recorded from accepted papers. Affordability was reported as drug prices adjusted for income. Study quality was assessed using six criteria: use of similar countries, use of a representative sample of drugs, selection of specific types of prices, identification of drug packaging, different weights on price indices, and the type of currency conversion used. Of the 1,828 studies identified, 21 were included. Only one study adequately addressed all quality issues. A large variation in study quality was observed due to the many methods used to conduct the drug price comparisons, such as different indices, economic parameters, price types, baskets of drugs, and more. Thus, the quality of published studies was considered poor. Results varied across studies, but generally, higher income countries had higher drug prices. However, after adjusting drug prices for affordability, higher income countries had more affordable prices than lower income countries. Differences between drug prices and affordability in different countries were found. Low income countries reported lower affordability of drugs, leaving room for potential problems with drug access and, consequently, a negative impact on health. The quality of the literature on this topic needs improvement.

  19. An open source automatic quality assurance (OSAQA) tool for the ACR MRI phantom.

    PubMed

    Sun, Jidi; Barnes, Michael; Dowling, Jason; Menk, Fred; Stanwell, Peter; Greer, Peter B

    2015-03-01

    Routine quality assurance (QA) is necessary and essential to ensure MR scanner performance. This includes geometric distortion, slice positioning and thickness accuracy, high contrast spatial resolution, intensity uniformity, ghosting artefact and low contrast object detectability. However, this manual process can be very time consuming. This paper describes the development and validation of an open source tool to automate the MR QA process, which aims to increase physicist efficiency, and improve the consistency of QA results by reducing human error. The OSAQA software was developed in Matlab and the source code is available for download from http://jidisun.wix.com/osaqa-project/. During program execution QA results are logged for immediate review and are also exported to a spreadsheet for long-term machine performance reporting. For the automatic contrast QA test, a user specific contrast evaluation was designed to improve accuracy for individuals on different display monitors. American College of Radiology QA images were acquired over a period of 2 months to compare manual QA and the results from the proposed OSAQA software. OSAQA was found to significantly reduce the QA time from approximately 45 to 2 min. Both the manual and OSAQA results were found to agree with regard to the recommended criteria and the differences were insignificant compared to the criteria. The intensity homogeneity filter is necessary to obtain an image with acceptable quality and at the same time keeps the high contrast spatial resolution within the recommended criterion. The OSAQA tool has been validated on scanners with different field strengths and manufacturers. A number of suggestions have been made to improve both the phantom design and QA protocol in the future.

  20. E-Services quality assessment framework for collaborative networks

    NASA Astrophysics Data System (ADS)

    Stegaru, Georgiana; Danila, Cristian; Sacala, Ioan Stefan; Moisescu, Mihnea; Mihai Stanescu, Aurelian

    2015-08-01

    In a globalised networked economy, collaborative networks (CNs) are formed to take advantage of new business opportunities. Collaboration involves shared resources and capabilities, such as e-Services that can be dynamically composed to automate CN participants' business processes. Quality is essential for the success of business process automation. Current approaches mostly focus on quality of service (QoS)-based service selection and ranking algorithms, overlooking the process of service composition, which requires interoperable, adaptable and secure e-Services to ensure seamless collaboration, data confidentiality and integrity. Lack of assessment of these quality attributes can result in e-Service composition failure. The quality of e-Service composition relies on the quality of each e-Service and on the quality of the composition process. Therefore, there is a need for a framework that addresses quality from both views: product and process. We propose a quality of e-Service composition (QoESC) framework for quality assessment of e-Service composition for CNs, which comprises a quality model for e-Service evaluation and guidelines for the quality of the e-Service composition process. We implemented a prototype considering a simplified telemedicine use case which involves a CN in the e-Healthcare domain. To validate the proposed quality-driven framework, we analysed service composition reliability with and without using the proposed framework.

  1. Automatic Implementation of Ttethernet-Based Time-Triggered Avionics Applications

    NASA Astrophysics Data System (ADS)

    Gorcitz, Raul Adrian; Carle, Thomas; Lesens, David; Monchaux, David; Potop-Butucaruy, Dumitru; Sorel, Yves

    2015-09-01

    The design of safety-critical embedded systems such as those used in avionics still involves largely manual phases. But in avionics the definition of standard interfaces embodied in standards such as ARINC 653 or TTEthernet should allow the definition of fully automatic code generation flows that reduce the costs while improving the quality of the generated code, much like compilers have done when replacing manual assembly coding. In this paper, we briefly present such a fully automatic implementation tool, called Lopht, for ARINC653-based time-triggered systems, and then explain how it is currently extended to include support for TTEthernet networks.

  2. Sleep-related automatism and the law.

    PubMed

    Ebrahim, Irshaad Osman; Fenwick, Peter

    2008-04-01

    Crimes carried out during or arising from sleep highlight many difficulties with our current law and forensic sleep medicine clinical practice. There is a need for clarity in the law and agreement between experts on a standardised form of assessment and diagnosis in these challenging cases. We suggest that the time has come for a standardised, internationally recognised diagnostic protocol to be set as a minimum standard in all cases of suspected sleep-related forensic cases. The protocol of a full medical history, sleep history, psychiatric history, neuropsychiatric and psychometric examination and electroencephalography (EEG), should be routine. It should now be mandatory to carry out routine polysomnography (PSG) to establish the presence of precipitating and modulating factors. Sleepwalking is classified as insane automatism in England and Wales and sudden arousal from sleep in a non-sleepwalker as sane automatism. The recent case in England of R v. Lowe (2005) highlights these anomalies. Moreover, the word insanity stigmatises sleepwalkers and should be dropped. The simplest solution to these problems would be for the law to be changed so that there is only one category of defence for all sleep-related offences--not guilty by reason of sleep disorder. This was rejected by the House of Lords for cases of automatism due to epilepsy, and is likely to be rejected for sleepwalkers. Removing the categories of automatism (sane or insane) would be the best solution. Risk assessment is already standard practice in the UK and follow up, subsequent to disposal, by approved specialists should become part of the sentencing process. This will provide support for the defendant and protection of the public.

  3. Assessing primary care data quality.

    PubMed

    Lim, Yvonne Mei Fong; Yusof, Maryati; Sivasampu, Sheamini

    2018-04-16

    Purpose The purpose of this paper is to assess National Medical Care Survey data quality. Design/methodology/approach Data completeness and representativeness were computed for all observations while other data quality measures were assessed using a 10 per cent sample from the National Medical Care Survey database; i.e., 12,569 primary care records from 189 public and private practices were included in the analysis. Findings Data field completion ranged from 69 to 100 per cent. Error rates for data transfer from paper to web-based application varied between 0.5 and 6.1 per cent. Error rates arising from diagnosis and clinical process coding were higher than medication coding. Data fields that involved free text entry were more prone to errors than those involving selection from menus. The authors found that completeness, accuracy, coding reliability and representativeness were generally good, while data timeliness needs to be improved. Research limitations/implications Only data entered into a web-based application were examined. Data omissions and errors in the original questionnaires were not covered. Practical implications Results from this study provided informative and practicable approaches to improve primary health care data completeness and accuracy especially in developing nations where resources are limited. Originality/value Primary care data quality studies in developing nations are limited. Understanding errors and missing data enables researchers and health service administrators to prevent quality-related problems in primary care data.

  4. Research progress of on-line automatic monitoring of chemical oxygen demand (COD) of water

    NASA Astrophysics Data System (ADS)

    Cai, Youfa; Fu, Xing; Gao, Xiaolu; Li, Lianyin

    2018-02-01

    With increasingly strict control of pollutant emission in China, on-line automatic monitoring of water quality is particularly urgent. The chemical oxygen demand (COD) is a comprehensive index of the contamination caused by organic matter, and it is therefore taken as an important index of energy saving and emission reduction in China’s “Twelve-Five” program. So far, COD on-line automatic monitoring instruments have played an important role in the field of sewage monitoring. This paper reviews the existing methods for on-line automatic monitoring of COD and, on that basis, points out future trends for COD on-line automatic monitoring instruments.

  5. Assessing the quality of cost management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fayne, V.; McAllister, A.; Weiner, S.B.

    1995-12-31

    Managing environmental programs can be effective only when good cost and cost-related management practices are developed and implemented. The Department of Energy's Office of Environmental Management (EM), recognizing this key role of cost management, initiated several cost and cost-related management activities including the Cost Quality Management (CQM) Program. The CQM Program includes an assessment activity, Cost Quality Management Assessments (CQMAs), and a technical assistance effort to improve program/project cost effectiveness. CQMAs provide a tool for establishing a baseline of cost-management practices and for measuring improvement in those practices. The result of the CQMA program is an organization that has an increasing cost-consciousness, improved cost-management skills and abilities, and a commitment to respond to the public's concerns for both a safe environment and prudent budget outlays. The CQMA program is part of the foundation of quality management practices in DOE. The CQMA process has contributed to better cost and cost-related management practices by providing measurements and feedback; defining the components of a quality cost-management system; and helping sites develop/improve specific cost-management techniques and methods.

  6. A new method for water quality assessment: by harmony degree equation.

    PubMed

    Zuo, Qiting; Han, Chunhui; Liu, Jing; Ma, Junxia

    2018-02-22

    Water quality assessment is an important basic task in the development, utilization, management, and protection of water resources, and also a prerequisite for water safety. In this paper, the harmony degree equation (HDE) was introduced into water quality assessment research, and a new method, water quality assessment by harmony degree equation (WQA-HDE), was proposed. First, the calculation steps and ideas of this method are described in detail; then, this method and several other important water quality assessment methods (the single factor assessment method, the mean-type comprehensive index assessment method, and the multi-level gray correlation assessment method) were used to assess the water quality of the Shaying River (the largest tributary of the Huaihe in China). For this purpose, a 2-year (2013-2014) dataset of nine water quality variables covering seven monitoring sites, approximately 189 observations in total, was used to compare and analyze the characteristics and advantages of the new method. The results show that the calculation steps of WQA-HDE are similar to those of comprehensive assessment methods, and that WQA-HDE is more operational compared with other water quality assessment methods. In addition, the new method shows good flexibility through the judgment criterion HD0 for water quality: when HD0 = 0.8, the results are closer to reality and more reliable. In particular, when HD0 = 1, the results of WQA-HDE are consistent with the single factor assessment method; both methods are then subject to the most stringent "one vote veto" judgment condition. WQA-HDE is therefore a composite method that combines single factor assessment and comprehensive assessment. This research not only broadens the theoretical method system of harmony theory but also promotes the unification of water quality assessment methods, and it can serve as a reference for other comprehensive assessments.
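
    The abstract does not reproduce the harmony degree equation itself, so the sketch below only illustrates the judgment step it describes: a per-site harmony degree in [0, 1] is compared against the criterion HD0, and setting HD0 = 1 reproduces the strict "one vote veto" behaviour of single factor assessment. The harmony degree used here is a simplified, hypothetical stand-in, not the paper's equation.

```python
# Hypothetical, simplified harmony degree: 1 when an indicator meets its standard,
# decreasing linearly with the relative exceedance, clipped to [0, 1], then averaged.
def harmony_degree(concentrations, standards):
    degrees = [max(0.0, min(1.0, 1.0 - (c - s) / s))
               for c, s in zip(concentrations, standards)]
    return sum(degrees) / len(degrees)

def assess(concentrations, standards, hd0=0.8):
    """A site is acceptable if its harmony degree reaches the judgment criterion HD0."""
    hd = harmony_degree(concentrations, standards)
    return round(hd, 3), ("acceptable" if hd >= hd0 else "unacceptable")

# Three indicators (e.g. COD, NH3-N, TP) against their class standards.
site = [22.0, 1.2, 0.25]
standards = [20.0, 1.0, 0.2]
print(assess(site, standards, hd0=0.8))   # averaged, more tolerant view
print(assess(site, standards, hd0=1.0))   # strict "one vote veto" view
```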

  7. Published methodological quality of randomized controlled trials does not reflect the actual quality assessed in protocols.

    PubMed

    Mhaskar, Rahul; Djulbegovic, Benjamin; Magazin, Anja; Soares, Heloisa P; Kumar, Ambuj

    2012-06-01

    To assess whether the reported methodological quality of randomized controlled trials (RCTs) reflects the actual methodological quality and to evaluate the association of effect size (ES) and sample size with methodological quality. Systematic review. This is a retrospective analysis of all consecutive phase III RCTs published by eight National Cancer Institute Cooperative Groups up to 2006. Data were extracted from protocols (actual quality) and publications (reported quality) for each study. Four hundred twenty-nine RCTs met the inclusion criteria. Overall reporting of methodological quality was poor and did not reflect the actual high methodological quality of RCTs. The results showed no association between sample size and actual methodological quality of a trial. Poor reporting of allocation concealment and blinding exaggerated the ES by 6% (ratio of hazard ratio [RHR]: 0.94; 95% confidence interval [CI]: 0.88, 0.99) and 24% (RHR: 1.24; 95% CI: 1.05, 1.43), respectively. However, actual quality assessment showed no association between ES and methodological quality. The largest study to date shows that poor quality of reporting does not reflect the actual high methodological quality. Assessment of the impact of quality on the ES based on reported quality can produce misleading results. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. Automatic and semi-automatic approaches for arteriolar-to-venular computation in retinal photographs

    NASA Astrophysics Data System (ADS)

    Mendonça, Ana Maria; Remeseiro, Beatriz; Dashtbozorg, Behdad; Campilho, Aurélio

    2017-03-01

    The Arteriolar-to-Venular Ratio (AVR) is a popular dimensionless measure which allows the assessment of patients' condition for the early diagnosis of different diseases, including hypertension and diabetic retinopathy. This paper presents two new approaches for AVR computation in retinal photographs which include a sequence of automated processing steps: vessel segmentation, caliber measurement, optic disc segmentation, artery/vein classification, region of interest delineation, and AVR calculation. Both approaches have been tested on the INSPIRE-AVR dataset and compared with a ground truth provided by two medical specialists. The obtained results demonstrate the reliability of the fully automatic approach, which provides AVR ratios very similar to at least one of the observers. Furthermore, the semi-automatic approach, which includes manual modification of the artery/vein classification if needed, reduces the error significantly, to a level below the human error.
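
    Since the heavy lifting is in the upstream segmentation and classification stages, only the final ratio step lends itself to a compact illustration. The sketch below assumes those stages have already produced per-vessel calibers inside the measurement zone; the plain means used here are a stand-in for the standardized summary formulas (e.g. CRAE/CRVE) that published AVR pipelines typically use.

```python
# Final AVR step only; calibers (in pixels) are hypothetical values assumed to come
# from vessel segmentation, caliber measurement and artery/vein classification.
def avr(arteriole_calibers, venule_calibers):
    """Arteriolar-to-Venular Ratio from per-vessel caliber measurements."""
    if not arteriole_calibers or not venule_calibers:
        raise ValueError("need at least one arteriole and one venule")
    mean_a = sum(arteriole_calibers) / len(arteriole_calibers)
    mean_v = sum(venule_calibers) / len(venule_calibers)
    return mean_a / mean_v

print(round(avr([7.1, 6.8, 7.4], [10.2, 9.8, 10.9]), 3))   # ~0.689
```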

  9. 42 CFR 460.136 - Internal quality assessment and performance improvement activities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Internal quality assessment and performance improvement activities. 460.136 Section 460.136 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES....136 Internal quality assessment and performance improvement activities. (a) Quality assessment and...

  10. 42 CFR 460.136 - Internal quality assessment and performance improvement activities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... in quality assessment and performance improvement activities, including providing information about... 42 Public Health 4 2014-10-01 2014-10-01 false Internal quality assessment and performance...) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460...

  11. 42 CFR 460.136 - Internal quality assessment and performance improvement activities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... in quality assessment and performance improvement activities, including providing information about... 42 Public Health 4 2012-10-01 2012-10-01 false Internal quality assessment and performance...) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460...

  12. 42 CFR 460.136 - Internal quality assessment and performance improvement activities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... in quality assessment and performance improvement activities, including providing information about... 42 Public Health 4 2013-10-01 2013-10-01 false Internal quality assessment and performance...) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460...

  13. Illinois Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Illinois' Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family…

  14. A new approach to subjectively assess quality of plenoptic content

    NASA Astrophysics Data System (ADS)

    Viola, Irene; Řeřábek, Martin; Ebrahimi, Touradj

    2016-09-01

    Plenoptic content is becoming increasingly popular thanks to the availability of acquisition and display devices. Using image-based rendering techniques, plenoptic content can be rendered in real time in an interactive manner, allowing virtual navigation through the captured scenes. This way of consuming content enables new experiences, and therefore introduces several challenges in terms of plenoptic data processing, transmission and, consequently, visual quality evaluation. In this paper, we propose a new methodology to subjectively assess the visual quality of plenoptic content. We also introduce a prototype software to perform subjective quality assessment according to the proposed methodology. The proposed methodology is further applied to assess the visual quality of a light field compression algorithm. Results show that this methodology can be successfully used to assess the visual quality of plenoptic content.

  15. A Pressure Plate-Based Method for the Automatic Assessment of Foot Strike Patterns During Running.

    PubMed

    Santuz, Alessandro; Ekizos, Antonis; Arampatzis, Adamantios

    2016-05-01

    The foot strike pattern (FSP, a description of how the foot touches the ground at impact) is recognized to be a predictor of both performance and injury risk. The objective of the current investigation was to validate an original foot strike pattern assessment technique based on the numerical analysis of foot pressure distribution. We analyzed the strike patterns during running of 145 healthy men and women (85 male, 60 female). The participants ran on a treadmill with an integrated pressure plate at three different speeds: preferred (shod and barefoot 2.8 ± 0.4 m/s), faster (shod 3.5 ± 0.6 m/s) and slower (shod 2.3 ± 0.3 m/s). A custom-designed algorithm allowed automatic footprint recognition and FSP evaluation. Incomplete footprints were identified and corrected by the software itself. The widely used technique of analyzing high-speed video recordings was checked for its reliability and was used to validate the numerical technique. The automatic numerical approach showed good conformity with the reference video-based technique (ICC = 0.93, p < 0.01). The great improvement in data throughput and the increased completeness of results allow the use of this software as a powerful feedback tool in a simple experimental setup.
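
    The abstract does not spell out the footprint-classification rule, so the sketch below uses the classic "strike index" criterion (location of the first centre of pressure along the foot axis, as a fraction of foot length) as an illustrative stand-in for an FSP decision made from pressure-plate data.

```python
import numpy as np

# Illustrative FSP classification from the first pressure frame of a footprint.
# The strike-index thresholds (1/3, 2/3) are the textbook convention; this is not
# necessarily the algorithm implemented in the cited software.
def strike_index(first_frame):
    """Fraction of foot length (0 = heel, 1 = toe) where pressure first appears."""
    rows, _ = np.nonzero(first_frame)
    if rows.size == 0:
        raise ValueError("empty pressure frame")
    return rows.mean() / (first_frame.shape[0] - 1)

def classify_fsp(si):
    if si < 1 / 3:
        return "rearfoot strike"
    if si < 2 / 3:
        return "midfoot strike"
    return "forefoot strike"

# Hypothetical 10x4 first-contact frame with pressure near the heel (row 1).
frame = np.zeros((10, 4))
frame[1, 1:3] = 5.0
si = strike_index(frame)
print(round(si, 2), classify_fsp(si))   # 0.11 -> rearfoot strike
```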

  16. 42 CFR 493.1239 - Standard: General laboratory systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of general laboratory systems quality assessment reviews with appropriate staff. (c) The laboratory must document all general laboratory systems quality assessment activities. [68 FR 3703, Jan. 24, 2003... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: General laboratory systems quality...

  17. Quality control in public participation assessments of water quality: the OPAL Water Survey.

    PubMed

    Rose, N L; Turner, S D; Goldsmith, B; Gosling, L; Davidson, T A

    2016-07-22

    Public participation in scientific data collection is a rapidly expanding field. In water quality surveys, the involvement of the public, usually as trained volunteers, generally includes the identification of aquatic invertebrates to a broad taxonomic level. However, quality assurance is often not addressed and remains a key concern for the acceptance of publicly-generated water quality data. The Open Air Laboratories (OPAL) Water Survey, launched in May 2010, aimed to encourage interest and participation in water science by developing a 'low-barrier-to-entry' water quality survey. During 2010, over 3000 participant-selected lakes and ponds were surveyed making this the largest public participation lake and pond survey undertaken to date in the UK. But the OPAL approach of using untrained volunteers and largely anonymous data submission exacerbates quality control concerns. A number of approaches were used in order to address data quality issues including: sensitivity analysis to determine differences due to operator, sampling effort and duration; direct comparisons of identification between participants and experienced scientists; the use of a self-assessment identification quiz; the use of multiple participant surveys to assess data variability at single sites over short periods of time; comparison of survey techniques with other measurement variables and with other metrics generally considered more accurate. These quality control approaches were then used to screen the OPAL Water Survey data to generate a more robust dataset. The OPAL Water Survey results provide a regional and national assessment of water quality as well as a first national picture of water clarity (as suspended solids concentrations). Less than 10 % of lakes and ponds surveyed were 'poor' quality while 26.8 % were in the highest water quality band. It is likely that there will always be a question mark over untrained volunteer generated data simply because quality assurance is uncertain

  18. Automatic Residential/Commercial Classification of Parcels with Solar Panel Detections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Omitaomu, Olufemi A; Kotikot, Susan

    A computational method to automatically detect solar panels on rooftops to aid policy and financial assessment of solar distributed generation. The code automatically classifies parcels containing solar panels in the U.S. as residential or commercial. The code allows the user to specify an input dataset containing parcels and detected solar panels, and then uses information about the parcels and solar panels to automatically classify the rooftops as residential or commercial using machine learning techniques. The zip file containing the code includes sample input and output datasets for the Boston and DC areas.
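
    The record names machine learning but not a specific model, so the sketch below shows one plausible shape of such a classifier: a random forest trained on simple parcel and panel features, using synthetic data. The feature set, labels and library choice are assumptions, not the released code.

```python
# Synthetic illustration of residential/commercial parcel classification from
# parcel and detected-panel features; not the released code or its actual features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
parcel_area = rng.uniform(200, 20000, n)       # m^2
panel_area = rng.uniform(10, 2000, n)          # m^2 of detected panels
n_buildings = rng.integers(1, 6, n)
# Synthetic labels: large parcels with large arrays are marked commercial (1).
y = ((parcel_area > 5000) & (panel_area > 300)).astype(int)

X = np.column_stack([parcel_area, panel_area, n_buildings])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```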

  19. Assessment Quality and Practices in Secondary PE in the Netherlands

    ERIC Educational Resources Information Center

    Borghouts, Lars B.; Slingerland, Menno; Haerens, Leen

    2017-01-01

    Background: Assessment can have various functions, and is an important impetus for student learning. For assessment to be effective, it should be aligned with curriculum goals and of sufficient quality. Although it has been suggested that assessment quality in physical education (PE) is suboptimal, research into actual assessment practices has…

  20. Evaluation of Pan-Sharpening Methods for Automatic Shadow Detection in High Resolution Images of Urban Areas

    NASA Astrophysics Data System (ADS)

    de Azevedo, Samara C.; Singh, Ramesh P.; da Silva, Erivaldo A.

    2017-04-01

    Finer spatial resolution of areas with tall objects within urban environments causes intense shadows that lead to erroneous information in urban mapping. Because of the shadows, automatically detecting objects (such as buildings, trees, structures, and towers) and estimating surface coverage from high-spatial-resolution imagery is difficult. Thus, automatic shadow detection is the first necessary preprocessing step to improve the outcome of many remote sensing applications, particularly for high spatial resolution images. Efforts have been made to explore spatial and spectral information to evaluate such shadows. In this paper, we have used morphological attribute filtering to extract contextual relations in an efficient multilevel approach for high resolution images. The attribute selected for the filtering was the area estimated from the shadow spectral feature using the Normalized Saturation-Value Difference Index (NSVDI) derived from pan-sharpened images. In order to assess the quality of the fusion products and their influence on the shadow detection algorithm, we evaluated three pan-sharpening methods - Intensity-Hue-Saturation (IHS), Principal Components (PC) and Gram-Schmidt (GS) - through the image quality measures Correlation Coefficient (CC), Root Mean Square Error (RMSE), Relative Dimensionless Global Error in Synthesis (ERGAS) and Universal Image Quality Index (UIQI). Experimental results over a Worldview II scene of São Paulo city (Brazil) show that the GS method provides good correlation with the original multispectral bands, with no radiometric or contrast distortion. The automatic method using the GS product for NSVDI generation clearly distinguishes shadow from non-shadow pixels, with an overall accuracy of more than 90%. The experimental results confirm the effectiveness of the proposed approach, which could be used for further shadow removal and is reliable for object recognition, land-cover mapping, 3D reconstruction, etc., especially in developing countries where land use and
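
    For reference, the four fusion-quality measures named above can be written down compactly; the sketch below gives straightforward single-band NumPy implementations (UIQI is computed globally rather than over sliding windows, a simplification of the usual definition), to be applied per band and averaged for multispectral data.

```python
import numpy as np

def cc(ref, fused):
    """Correlation Coefficient between a reference and a fused band."""
    return np.corrcoef(ref.ravel(), fused.ravel())[0, 1]

def rmse(ref, fused):
    return np.sqrt(np.mean((ref.astype(float) - fused.astype(float)) ** 2))

def ergas(ref_bands, fused_bands, ratio):
    """Relative dimensionless global error; ratio = pan/MS pixel-size ratio (e.g. 1/4)."""
    terms = [(rmse(r, f) / r.mean()) ** 2 for r, f in zip(ref_bands, fused_bands)]
    return 100.0 * ratio * np.sqrt(np.mean(terms))

def uiqi(ref, fused):
    """Universal Image Quality Index (global version, not the sliding-window average)."""
    x, y = ref.astype(float).ravel(), fused.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return 4 * cov * mx * my / ((x.var() + y.var()) * (mx ** 2 + my ** 2))

# Quick synthetic check: a band versus a slightly perturbed copy of itself.
ref = np.random.default_rng(1).uniform(0, 255, (64, 64))
fused = ref + np.random.default_rng(2).normal(0, 5, ref.shape)
print(cc(ref, fused), rmse(ref, fused), uiqi(ref, fused), ergas([ref], [fused], 1 / 4))
```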

  1. Doctors or technicians: assessing quality of medical education

    PubMed Central

    Hasan, Tayyab

    2010-01-01

    Medical education institutions usually adapt industrial quality management models that measure the quality of the process of a program but not the quality of the product. The purpose of this paper is to analyze the impact of industrial quality management models on medical education and students, and to highlight the importance of introducing a proper educational quality management model. Industrial quality management models can measure the training component in terms of competencies, but they lack the educational component measurement. These models use performance indicators to assess their process improvement efforts. Researchers suggest that the performance indicators used in educational institutions may only measure their fiscal efficiency without measuring the quality of the educational experience of the students. In most of the institutions, where industrial models are used for quality assurance, students are considered as customers and are provided with the maximum services and facilities possible. Institutions are required to fulfill a list of recommendations from the quality control agencies in order to enhance student satisfaction and to guarantee standard services. Quality of medical education should be assessed by measuring the impact of the educational program and quality improvement procedures in terms of knowledge base development, behavioral change, and patient care. Industrial quality models may focus on academic support services and processes, but educational quality models should be introduced in parallel to focus on educational standards and products. PMID:23745059

  2. Doctors or technicians: assessing quality of medical education.

    PubMed

    Hasan, Tayyab

    2010-01-01

    Medical education institutions usually adapt industrial quality management models that measure the quality of the process of a program but not the quality of the product. The purpose of this paper is to analyze the impact of industrial quality management models on medical education and students, and to highlight the importance of introducing a proper educational quality management model. Industrial quality management models can measure the training component in terms of competencies, but they lack the educational component measurement. These models use performance indicators to assess their process improvement efforts. Researchers suggest that the performance indicators used in educational institutions may only measure their fiscal efficiency without measuring the quality of the educational experience of the students. In most of the institutions, where industrial models are used for quality assurance, students are considered as customers and are provided with the maximum services and facilities possible. Institutions are required to fulfill a list of recommendations from the quality control agencies in order to enhance student satisfaction and to guarantee standard services. Quality of medical education should be assessed by measuring the impact of the educational program and quality improvement procedures in terms of knowledge base development, behavioral change, and patient care. Industrial quality models may focus on academic support services and processes, but educational quality models should be introduced in parallel to focus on educational standards and products.

  3. NET-VISA, a Bayesian method next-generation automatic association software. Latest developments and operational assessment.

    NASA Astrophysics Data System (ADS)

    Le Bras, Ronan; Kushida, Noriyuki; Mialle, Pierrick; Tomuta, Elena; Arora, Nimar

    2017-04-01

    The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been developing a Bayesian method and software to perform the key step of automatic association of seismological, hydroacoustic, and infrasound (SHI) parametric data. In our preliminary testing at the CTBTO, NET-VISA shows much better performance than the currently operating automatic association module, with the rate of automatic events matching analyst-reviewed events increased by 10%, meaning that the percentage of missed events is lowered by 40%. Initial tests involving analysts also showed that the new software will complete the automatic bulletins of the CTBTO by adding previously missed events. Because CTBTO products are widely distributed to its member States as well as throughout the seismological community, the introduction of a new technology must be carried out carefully, and the first step of operational integration is to use NET-VISA results within the interactive analysts' software so that the analysts can check the robustness of the Bayesian approach. We report on the latest results, both on the progress of automatic processing and on the initial introduction of NET-VISA results into the analyst review process.

  4. Assessment of the Denver Regional Transportation District's automatic vehicle location system

    DOT National Transportation Integrated Search

    2000-08-01

    The purpose of this evaluation was to determine how well the Denver Regional Transportation District's (RTD) automatic vehicle location (AVL) system achieved its major objectives of improving scheduling efficiency, improving the ability of dispatcher...

  5. The Aeroflex: A Bicycle for Mobile Air Quality Measurements

    PubMed Central

    Elen, Bart; Peters, Jan; Van Poppel, Martine; Bleux, Nico; Theunis, Jan; Reggente, Matteo; Standaert, Arnout

    2013-01-01

    Fixed air quality stations have limitations when used to assess people's real life exposure to air pollutants. Their spatial coverage is too limited to capture the spatial variability in, e.g., an urban or industrial environment. Complementary mobile air quality measurements can be used as an additional tool to fill this void. In this publication we present the Aeroflex, a bicycle for mobile air quality monitoring. The Aeroflex is equipped with compact air quality measurement devices to monitor ultrafine particle number counts, particulate mass and black carbon concentrations at a high resolution (up to 1 second). Each measurement is automatically linked to its geographical location and time of acquisition using GPS and Internet time. Furthermore, the Aeroflex is equipped with automated data transmission, data pre-processing and data visualization. The Aeroflex is designed with adaptability, reliability and user friendliness in mind. Over the past years, the Aeroflex has been successfully used for high resolution air quality mapping, exposure assessment and hot spot identification. PMID:23262484

  6. The Aeroflex: a bicycle for mobile air quality measurements.

    PubMed

    Elen, Bart; Peters, Jan; Poppel, Martine Van; Bleux, Nico; Theunis, Jan; Reggente, Matteo; Standaert, Arnout

    2012-12-24

    Fixed air quality stations have limitations when used to assess people's real life exposure to air pollutants. Their spatial coverage is too limited to capture the spatial variability in, e.g., an urban or industrial environment. Complementary mobile air quality measurements can be used as an additional tool to fill this void. In this publication we present the Aeroflex, a bicycle for mobile air quality monitoring. The Aeroflex is equipped with compact air quality measurement devices to monitor ultrafine particle number counts, particulate mass and black carbon concentrations at a high resolution (up to 1 second). Each measurement is automatically linked to its geographical location and time of acquisition using GPS and Internet time. Furthermore, the Aeroflex is equipped with automated data transmission, data pre-processing and data visualization. The Aeroflex is designed with adaptability, reliability and user friendliness in mind. Over the past years, the Aeroflex has been successfully used for high resolution air quality mapping, exposure assessment and hot spot identification. 

  7. Southwest principal aquifers regional ground-water quality assessment

    USGS Publications Warehouse

    Anning, D.W.; Thiros, Susan A.; Bexfield, L.M.; McKinney, T.S.; Green, J.M.

    2009-01-01

    The National Water-Quality Assessment (NAWQA) Program of the U.S. Geological Survey is conducting a regional analysis of water quality in the principal aquifers in the southwestern United States. The Southwest Principal Aquifers (SWPA) study is building a better understanding of the susceptibility and vulnerability of basin-fill aquifers in the region to ground-water contamination by synthesizing the baseline knowledge of ground-water quality conditions in 15 basins previously studied by the NAWQA Program. The improved understanding of aquifer susceptibility and vulnerability to contamination is assisting in the development of tools that water managers can use to assess and protect the quality of ground-water resources. This fact sheet provides an overview of the basin-fill aquifers in the southwestern United States and a description of the completed and planned regional analyses of ground-water quality being performed by the SWPA study.

  8. A research review of quality assessment for software

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored and methods of evaluating those factors are discussed. Quality factors important to software reuse are: correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because the documentation of many factors about a software component, such as its efficiency, portability, and development history, constitutes a class of factors important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. There are a few quantitative measures which have been shown to indicate software quality. However, it is believed that there exist many factors that indicate quality but have not been empirically validated due to their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.

  9. Quality assessments for cancer centers in the European Union.

    PubMed

    Wind, Anke; Rajan, Abinaya; van Harten, Wim H

    2016-09-07

    Cancer centers are pressured to deliver high-quality services that can be measured and improved, which has led to an increase of assessments in many countries. A critical area of quality improvement is patient outcome. An overview of existing assessments can help stakeholders (e.g., healthcare professionals, managers and policy makers) improve the quality of cancer research and care and lead to patient benefits. This paper presents key aspects of assessments undertaken by European cancer centers, such as: are assessments mandatory or voluntary? Do they focus on evaluating research, care or both? And are they international or national? A survey was sent to 33 cancer centers in 28 European Union member states. Participants were asked to score the specifics for each assessment that they listed. Based on the responses from 19 cancer centers in 18 member states, we found 109 assessments. The numbers have steadily increased from the 1990s to 2015. Although a majority of assessments concern patient-care aspects (n = 45), it is unclear how many of those include assessing patient benefits. Only a few assessments cover basic research. There is an increasing trend towards mixed assessments (i.e., combining research and patient-care aspects). The need for assessments in cancer centers is increasing. To improve the quality of research and patient care and to prevent new assessments that "reinvent the wheel", it is advisable to start comparative research into the assessments that are likely to bring patient benefits and improve patient outcome. Do assessments provide consistent and reliable information that creates added value for all key stakeholders?

  10. A systematic literature review of open source software quality assessment models.

    PubMed

    Adewumi, Adewole; Misra, Sanjay; Omoregbe, Nicholas; Crawford, Broderick; Soto, Ricardo

    2016-01-01

    Many open source software (OSS) quality assessment models are proposed and available in the literature. However, there is little or no adoption of these models in practice. In order to guide the formulation of newer models so they can be acceptable to practitioners, there is a need for clear discrimination of the existing models based on their specific properties. Based on this, the aim of this study is to perform a systematic literature review to investigate the properties of the existing OSS quality assessment models by classifying them with respect to their quality characteristics, the methodology they use for assessment, and their domain of application, so as to guide the formulation and development of newer models. Searches in IEEE Xplore, ACM, Science Direct, Springer and Google Search were performed to retrieve all relevant primary studies in this regard. Journal and conference papers between 2003 and 2015 were considered, since the first known OSS quality model emerged in 2003. A total of 19 OSS quality assessment model papers were selected. To select these models, we developed assessment criteria to evaluate the quality of the existing studies. Quality assessment models are classified into five categories based on the quality characteristics they possess, namely: single-attribute, rounded category, community-only attribute, non-community attribute, and non-quality-in-use models. Our study reflects that software selection based on hierarchical structures is the most popular selection method in the existing OSS quality assessment models. Furthermore, we found that the majority (47%) of the existing models do not specify any domain of application. In conclusion, our study will be a valuable contribution to the community and will help quality assessment model developers in formulating newer models and also practitioners (software evaluators) in selecting suitable OSS in the midst of alternatives.

  11. 42 CFR 460.140 - Additional quality assessment activities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.140 Additional quality...

  12. Water Quality Assessment using Satellite Remote Sensing

    NASA Astrophysics Data System (ADS)

    Haque, Saad Ul

    2016-07-01

    The two main global issues related to water are its declining quality and quantity. Population growth, industrialization, increase in agricultural land and urbanization are the main causes through which inland water bodies are confronted with increasing water demand. The quality of surface water has also been degraded in many countries over the past few decades due to inputs of nutrients and sediments, especially in lakes and reservoirs. Since water is essential not only for meeting human needs but also for maintaining natural ecosystem health and integrity, there are efforts worldwide to assess and restore the quality of surface waters. Remote sensing techniques provide a tool for continuous water quality information in order to identify and minimize sources of pollutants that are harmful for human and aquatic life. The proposed methodology is focused on assessing the quality of water at selected lakes in Pakistan (Sindh); namely, HUBDAM, KEENJHAR LAKE, HALEEJI and HADEERO. These lakes are drinking water sources for several major cities of Pakistan, including Karachi. Satellite imagery of Landsat 7 (ETM+) is used to identify the variation in water quality of these lakes in terms of their optical properties. All bands of the Landsat 7 (ETM+) image are analyzed to select only those that may be correlated with some water quality parameters (e.g. suspended solids, chlorophyll a). The Optimum Index Factor (OIF) developed by Chavez et al. (1982) is used for selection of the optimum combination of bands. The OIF is calculated by dividing the sum of the standard deviations of any three bands by the sum of their respective correlation coefficients (absolute values). It is assumed that the band with the higher standard deviation contains a higher amount of 'information' than other bands. Therefore, OIF values are ranked and the three bands with the highest OIF are selected for visual interpretation. A color composite image is created using these three bands. The water quality
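
    The OIF described above maps directly to a few lines of code; the sketch below ranks all three-band combinations and returns the triplet with the highest index, using synthetic arrays in place of the Landsat 7 ETM+ bands.

```python
import numpy as np
from itertools import combinations

def oif(bands):
    """Optimum Index Factor for a triplet of 2-D bands: sum of standard deviations
    divided by the sum of absolute pairwise correlation coefficients."""
    stds = [b.std() for b in bands]
    corrs = [abs(np.corrcoef(a.ravel(), b.ravel())[0, 1])
             for a, b in combinations(bands, 2)]
    return sum(stds) / sum(corrs)

def best_triplet(all_bands):
    """Indices of the three bands with the highest OIF."""
    return max(combinations(range(len(all_bands)), 3),
               key=lambda idx: oif([all_bands[i] for i in idx]))

# Synthetic stand-ins for six reflective ETM+ bands.
rng = np.random.default_rng(0)
bands = [rng.normal(100 + 10 * i, 5 + i, (50, 50)) for i in range(6)]
print(best_triplet(bands))
```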

  13. Quality Control of True Height Profiles Obtained Automatically from Digital Ionograms.

    DTIC Science & Technology

    1982-05-01

    Keywords: Ionosphere; Digisonde; Electron Density Profile; Ionogram; Autoscaling; ARTIST. Abstract (only partially recovered from the scanned report form): "...analysis technique currently used with the ionogram traces scaled automatically by the ARTIST software [Reinisch and Huang, 1983; Reinisch et al., 1984], and the generalized polynomial analysis technique POLAN [Titheridge, 1985], using the same ARTIST-identified ionogram traces. 2. To determine how"

  14. Higher Education Quality Assessment in China: An Impact Study

    ERIC Educational Resources Information Center

    Liu, Shuiyun

    2015-01-01

    This research analyses an external higher education quality assessment scheme in China, namely, the Quality Assessment of Undergraduate Education (QAUE) scheme. Case studies were conducted in three Chinese universities with different statuses. Analysis shows that the evaluated institutions responded to the external requirements of the QAUE…

  15. Assessing Personal Qualities in Medical School Admissions.

    ERIC Educational Resources Information Center

    Albanese, Mark A.; Snow, Mikel H.; Skochelak, Susan E.; Huggett, Kathryn N.; Farrell, Philip M.

    2003-01-01

    Analyzes the challenges to using academic measures (MCAT scores and GPAs) as thresholds for medical school admissions and, for applicants exceeding the threshold, using personal qualities for admission decisions; reviews the literature on using the medical school interview and other admission data to assess personal qualities of applicants;…

  16. Portable Imagery Quality Assessment Test Field for Uav Sensors

    NASA Astrophysics Data System (ADS)

    Dąbrowski, R.; Jenerowicz, A.

    2015-08-01

    Nowadays the imagery data acquired from UAV sensors are the main source of data used in various remote sensing applications and photogrammetry projects, in imagery intelligence (IMINT), and in other tasks such as decision support. Therefore, quality assessment of such imagery is an important task. The research team from the Military University of Technology, Faculty of Civil Engineering and Geodesy, Geodesy Institute, Department of Remote Sensing and Photogrammetry has designed and prepared a special test field, the Portable Imagery Quality Assessment Test Field (PIQuAT), that provides quality assessment in field conditions of images obtained with sensors mounted on UAVs. The PIQuAT consists of 6 individual segments which, when combined, allow the radiometric, spectral and spatial resolution of images acquired from UAVs to be determined. All segments of the PIQuAT can be used together in various configurations or independently. All elements of the Portable Imagery Quality Assessment Test Field were tested in laboratory conditions in terms of their radiometry and spectral reflectance characteristics.

  17. Analysis of quality raw data of second generation sequencers with Quality Assessment Software.

    PubMed

    Ramos, Rommel Tj; Carneiro, Adriana R; Baumbach, Jan; Azevedo, Vasco; Schneider, Maria Pc; Silva, Artur

    2011-04-18

    Second generation technologies have advantages over Sanger sequencing; however, they have resulted in new challenges for the genome construction process, especially because of the small size of the reads, despite the high degree of coverage. Independent of the program chosen for the construction process, DNA sequences are superimposed, based on identity, to extend the reads, generating contigs; mismatches indicate a lack of homology and are not included. This process improves our confidence in the sequences that are generated. We developed Quality Assessment Software, with which one can review graphs showing the distribution of quality values from the sequencing reads. This software allows us to adopt more stringent quality standards for sequence data, based on quality-graph analysis and estimated coverage after applying the quality filter, providing acceptable sequence coverage for genome construction from short reads. Quality filtering is a fundamental step in the process of constructing genomes, as it reduces the frequency of incorrect alignments that are caused by measuring errors, which can occur during the construction process due to the size of the reads, provoking misassemblies. Application of quality filters to sequence data, using the software Quality Assessment, along with graphing analyses, provided greater precision in the definition of cutoff parameters, which increased the accuracy of genome construction.
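
    To make the quality-filtering step concrete, the sketch below keeps only FASTQ reads whose mean Phred score reaches a cutoff; it assumes Phred+33 encoding and hypothetical file names, and it is a minimal stand-in rather than the cited Quality Assessment software.

```python
# Minimal mean-Phred read filter (Phred+33 / Illumina 1.8+ encoding assumed).
def mean_phred(quality_line, offset=33):
    return sum(ord(ch) - offset for ch in quality_line) / len(quality_line)

def filter_fastq(path_in, path_out, min_mean_quality=20):
    kept = total = 0
    with open(path_in) as fin, open(path_out, "w") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]   # header, sequence, '+', quality
            if not record[0]:
                break
            total += 1
            if mean_phred(record[3].rstrip("\n")) >= min_mean_quality:
                fout.writelines(record)
                kept += 1
    return kept, total

# Hypothetical usage:
# kept, total = filter_fastq("reads.fastq", "reads.filtered.fastq", min_mean_quality=20)
# print(f"kept {kept} of {total} reads")
```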

  18. Development and Validation of a Consumer Quality Assessment Instrument for Dentistry.

    ERIC Educational Resources Information Center

    Johnson, Jeffrey D.; And Others

    1990-01-01

    This paper reviews the literature on consumer involvement in dental quality assessment, argues for inclusion of this information in quality assessment measures, outlines a conceptual model for measuring dental consumer quality assessment, and presents data relating to the development and validation of an instrument based on the conceptual model.…

  19. Automatic detection of articulation disorders in children with cleft lip and palate.

    PubMed

    Maier, Andreas; Hönig, Florian; Bocklet, Tobias; Nöth, Elmar; Stelzle, Florian; Nkenke, Emeka; Schuster, Maria

    2009-11-01

    Speech of children with cleft lip and palate (CLP) is sometimes still disordered even after adequate surgical and nonsurgical therapies. Such speech shows complex articulation disorders, which are usually assessed perceptually, consuming time and manpower. Hence, there is a need for an easy to apply and reliable automatic method. To create a reference for an automatic system, speech data of 58 children with CLP were assessed perceptually by experienced speech therapists for characteristic phonetic disorders at the phoneme level. The first part of the article aims to detect such characteristics by a semiautomatic procedure and the second to evaluate a fully automatic, thus simple, procedure. The methods are based on a combination of speech processing algorithms. The semiautomatic method achieves moderate to good agreement (kappa approximately 0.6) for the detection of all phonetic disorders. On a speaker level, significant correlations between the perceptual evaluation and the automatic system of 0.89 are obtained. The fully automatic system yields a correlation on the speaker level of 0.81 to the perceptual evaluation. This correlation is in the range of the inter-rater correlation of the listeners. The automatic speech evaluation is able to detect phonetic disorders at an expert's level without any additional human postprocessing.

  20. Assessing the quality of healthcare provided to children.

    PubMed

    Mangione-Smith, R; McGlynn, E A

    1998-10-01

    To present a conceptual framework for evaluating quality of care for children and adolescents, summarize the key issues related to developing measures to assess pediatric quality of care, examine some existing measures, and present evidence about their current level of performance. Assessing the quality of care for children poses many challenges not encountered when making these measurements in the adult population. Children and adolescents (from this point forward referred to collectively as children unless differentiation is necessary) differ from adults in two clinically important ways (Jameson and Wehr 1993): (1) their normal developmental trajectory is characterized by change, and (2) they have differential morbidity. These factors contribute to the limitations encountered when developing measures to assess the quality of care for children. The movement of a child through the various stages of development makes it difficult to establish what constitutes a "normal" outcome and by extension what constitutes a poor outcome. Additionally, salient developmental outcomes that result from poor quality of care may not be observed for several years. This implies that poor outcomes may be observed when the child is receiving care from a delivery system other than the one that provided the low-quality care. Attributing the suboptimal outcome to the new delivery system would be inappropriate. Differential morbidity refers to the fact that the type, prevalence, and severity of illness experienced by children is measurably different from that observed in adults. Most children experience numerous self-limited illnesses of mild severity. A minority of children suffer from markedly more severe diseases. Thus, condition-specific measures in children are problematic to implement for routine assessments because of the extremely low incidence and prevalence of most severe pediatric diseases (Halfon 1996). However, children with these conditions are potentially the segment of the

  1. A comprehensive framework for data quality assessment in CER.

    PubMed

    Holve, Erin; Kahn, Michael; Nahm, Meredith; Ryan, Patrick; Weiskopf, Nicole

    2013-01-01

    The panel addresses the urgent need to ensure that comparative effectiveness research (CER) findings derived from diverse and distributed data sources are based on credible, high-quality data, and that the methods used to assess and report data quality are consistent, comprehensive, and available to data consumers. The panel consists of representatives from four teams leveraging electronic clinical data for CER, patient-centered outcomes research (PCOR), and quality improvement (QI), and seeks to change the current paradigm where data quality assessment (DQA) is performed "behind the scenes" using one-off, project-specific methods. The panelists will present their process of harmonizing existing models for describing and measuring clinical data quality and will describe a comprehensive integrated framework for assessing and reporting DQA findings. The collaborative project is supported by the Electronic Data Methods (EDM) Forum, a three-year grant from the Agency for Healthcare Research and Quality (AHRQ) to facilitate learning and foster collaboration across a set of CER, PCOR, and QI projects designed to build infrastructure and methods for collecting and analyzing prospective electronic clinical data.

  2. Automatic NEPHIS Coding of Descriptive Titles for Permuted Index Generation.

    ERIC Educational Resources Information Center

    Craven, Timothy C.

    1982-01-01

    Describes a system for the automatic coding of most descriptive titles which generates Nested Phrase Indexing System (NEPHIS) input strings of sufficient quality for permuted index production. A series of examples and an 11-item reference list accompany the text. (JL)

  3. Quality Assurance--Best Practices for Assessing Online Programs

    ERIC Educational Resources Information Center

    Wang, Qi

    2006-01-01

    Educators have long sought to define quality in education. With the proliferation of distance education and online learning powered by the Internet, the tasks required to assess the quality of online programs become even more challenging. To assist educators and institutions in search of quality assurance methods to continuously improve their…

  4. Automatic affective appraisal of sexual penetration stimuli in women with vaginismus or dyspareunia.

    PubMed

    Huijding, Jorg; Borg, Charmaine; Weijmar-Schultz, Willibrord; de Jong, Peter J

    2011-03-01

    Current psychological views are that negative appraisals of sexual stimuli lie at the core of sexual dysfunctions. It is important to differentiate between deliberate appraisals and more automatic appraisals, as research has shown that the former are most relevant to controllable behaviors, and the latter are most relevant to reflexive behaviors. Accordingly, it can be hypothesized that in women with vaginismus, the persistent difficulty to allow vaginal entry is due to global negative automatic affective appraisals that trigger reflexive pelvic floor muscle contraction at the prospect of penetration. To test whether sexual penetration pictures elicited global negative automatic affective appraisals in women with vaginismus or dyspareunia and to examine whether deliberate appraisals and automatic appraisals differed between the two patient groups. Women with persistent vaginismus (N = 24), dyspareunia (N = 23), or no sexual complaints (N = 30) completed a pictorial Extrinsic Affective Simon Task (EAST), and then made a global affective assessment of the EAST stimuli using visual analogue scales (VAS). The EAST assessed global automatic affective appraisals of sexual penetration stimuli, while the VAS assessed global deliberate affective appraisals of these stimuli. Automatic affective appraisals of sexual penetration stimuli tended to be positive, independent of the presence of sexual complaints. Deliberate appraisals of the same stimuli were significantly more negative in the women with vaginismus than in the dyspareunia group and control group, while the latter two groups did not differ in their appraisals. Unexpectedly, deliberate appraisals seemed to be most important in vaginismus, whereas dyspareunia did not seem to implicate negative deliberate or automatic affective appraisals. These findings dispute the view that global automatic affect lies at the core of vaginismus and indicate that a useful element in therapeutic interventions may be the modification of

  5. Ultrasound assessment of fascial connectivity in the lower limb during maximal cervical flexion: technical aspects and practical application of automatic tracking.

    PubMed

    Cruz-Montecinos, Carlos; Cerda, Mauricio; Sanzana-Cuche, Rodolfo; Martín-Martín, Jaime; Cuesta-Vargas, Antonio

    2016-01-01

    The fascia provides and transmits forces for connective tissues, thereby regulating human posture and movement. One way to assess the myofascial interaction is a fascia ultrasound recording. Ultrasound can follow fascial displacement either manually or automatically through two-dimensional (2D) method. One possible method is the iterated Lucas-Kanade Pyramid (LKP) algorithm, which is based on automatic pixel tracking during passive movements in 2D fascial displacement assessments. Until now, the accumulated error over time has not been considered, even though it could be crucial for detecting fascial displacement in low amplitude movements. The aim of this study was to assess displacement of the medial gastrocnemius fascia during cervical spine flexion in a kyphotic posture with the knees extended and ankles at 90°. The ultrasound transducer was placed on the extreme dominant belly of the medial gastrocnemius. Displacement was calculated from nine automatically selected tracking points. To determine cervical flexion, an established 2D marker protocol was implemented. Offline pressure sensors were used to synchronize the 2D kinematic data from cervical flexion and deep fascia displacement of the medial gastrocnemius. Fifteen participants performed the cervical flexion task. The basal tracking error was 0.0211 mm. In 66 % of the subjects, a proximal fascial tissue displacement of the fascia above the basal error (0.076 mm ± 0.006 mm) was measured. Fascia displacement onset during cervical spine flexion was detected over 70 % of the cycle; however, only when detected for more than 80 % of the cycle was displacement considered statistically significant as compared to the first 10 % of the cycle (ANOVA, p < 0.05). By using an automated tracking method, the present analyses suggest statistically significant displacement of deep fascia. Further studies are needed to corroborate and fully understand the mechanisms associated with these results.
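
    For readers who want to reproduce this style of tracking, the sketch below applies OpenCV's pyramidal Lucas-Kanade implementation to an ultrasound clip and accumulates the mean horizontal displacement of automatically selected points; the file name, point-selection settings and window parameters are illustrative, not the authors' configuration. Converting the resulting pixel displacement to millimetres additionally requires the image's physical pixel spacing.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("gastrocnemius_clip.avi")      # hypothetical B-mode recording
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Nine automatically selected, well-textured points (e.g. on the fascial layer).
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=9,
                                 qualityLevel=0.01, minDistance=10)
lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

displacements = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None,
                                                     **lk_params)
    good_new = new_points[status.ravel() == 1]
    good_old = points[status.ravel() == 1]
    # Mean proximo-distal (x-axis) displacement of the tracked points, in pixels.
    displacements.append(float(np.mean(good_new[:, 0, 0] - good_old[:, 0, 0])))
    prev_gray, points = gray, good_new.reshape(-1, 1, 2)

cap.release()
print("cumulative displacement (pixels):", sum(displacements))
```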

  6. Modernized build and test infrastructure for control software at ESO: highly flexible building, testing, and automatic quality practices for telescope control software

    NASA Astrophysics Data System (ADS)

    Pellegrin, F.; Jeram, B.; Haucke, J.; Feyrin, S.

    2016-07-01

    The paper describes the introduction of a new automated build and test infrastructure, based on the open-source software Jenkins, into the ESO Very Large Telescope control software to replace the preexisting in-house solution. A brief introduction to software quality practices is given, followed by a description of the previous solution, its limitations, and new upcoming requirements. The modifications required to adopt the new system are described, along with how they were applied to the current software and the results obtained. An overview of how the new system may be used in future projects is also presented.

  7. Image quality assessment metric for frame accumulated image

    NASA Astrophysics Data System (ADS)

    Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling

    2018-01-01

    The quality of a medical image determines the accuracy of diagnosis, and gray-scale resolution is an important parameter of image quality. However, current objective metrics are not well suited to assessing medical images obtained by frame accumulation technology: little attention has been paid to gray-scale resolution, and existing metrics are based mainly on spatial resolution and are limited to the 256 gray levels of existing display devices. This paper therefore proposes a signal-to-noise-based metric, the "mean signal-to-noise ratio" (MSNR), as a more reasonable way to evaluate the quality of frame-accumulated medical images. We demonstrate its potential application through a series of images acquired under a constant illumination signal. Here, the mean image of a sufficiently large number of images was regarded as the reference image. Several groups of images with different numbers of accumulated frames were produced and their MSNR values were calculated. The results of the experiment show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images that surpass the gray scale and precision of the original image.
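
    Because the abstract does not give the exact formula, the sketch below encodes one plausible reading of MSNR: the mean of a large stack of frames serves as the reference, the deviation of an N-frame accumulated image from that reference is treated as noise, and the per-pixel signal-to-noise ratio is averaged. It is an illustration of the idea, not the paper's definition; with this reading, the value grows as more frames are accumulated.

```python
import numpy as np

def msnr(accumulated, reference, eps=1e-9):
    """Mean per-pixel ratio of the reference signal to the accumulation error."""
    noise = np.abs(accumulated.astype(float) - reference.astype(float))
    return float(np.mean(reference / (noise + eps)))

rng = np.random.default_rng(0)
truth = rng.uniform(50, 200, (64, 64))               # constant-illumination scene
frames = truth + rng.normal(0, 10, (256, 64, 64))    # noisy single frames
reference = frames.mean(axis=0)                      # mean of "enough" images

for n in (4, 16, 64):                                # different accumulation depths
    accumulated = frames[:n].mean(axis=0)
    print(n, "frames -> MSNR ~", round(msnr(accumulated, reference), 1))
```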

  8. DOGMA: domain-based transcriptome and proteome quality assessment.

    PubMed

    Dohmen, Elias; Kremer, Lukas P M; Bornberg-Bauer, Erich; Kemena, Carsten

    2016-09-01

    Genome studies have become cheaper and easier than ever before, due to the decreased costs of high-throughput sequencing and the free availability of analysis software. However, the quality of genome or transcriptome assemblies can vary a lot. Therefore, quality assessment of assemblies and annotations is a crucial aspect of genome analysis pipelines. We developed DOGMA, a program for fast and easy quality assessment of transcriptome and proteome data based on conserved protein domains. DOGMA measures the completeness of a given transcriptome or proteome and provides information about domain content for further analysis. DOGMA provides a very fast way to do quality assessment within seconds. DOGMA is implemented in Python and published under the GNU GPL v.3 license. The source code is available at https://ebbgit.uni-muenster.de/domainWorld/DOGMA/. Contacts: e.dohmen@wwu.de or c.kemena@wwu.de. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
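
    The core completeness idea lends itself to a tiny illustration: compare the conserved-domain arrangements found in a proteome against a reference set of core arrangements and report the fraction recovered. The sketch below is only that idea in miniature, with made-up Pfam identifiers; it is not DOGMA's actual scoring.

```python
# Toy domain-based completeness score with hypothetical Pfam domain arrangements.
def domain_completeness(found_arrangements, core_arrangements):
    """Percentage of expected core domain arrangements recovered in the proteome."""
    recovered = set(found_arrangements) & set(core_arrangements)
    return 100.0 * len(recovered) / len(core_arrangements)

core = {("PF00069",), ("PF00076", "PF00076"), ("PF00400",)}   # expected core set
found = {("PF00069",), ("PF00400",), ("PF07690",)}            # observed in proteome
print(round(domain_completeness(found, core), 1))             # 66.7
```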

  9. Assessing the quality of activities in a smart environment.

    PubMed

    Cook, Diane J; Schmitter-Edgecombe, M

    2009-01-01

    Pervasive computing technology can provide valuable health monitoring and assistance technology to help individuals live independent lives in their own homes. As a critical part of this technology, our objective is to design software algorithms that recognize and assess the consistency of activities of daily living that individuals perform in their own homes. We have designed algorithms that automatically learn Markov models for each class of activity. These models are used to recognize activities that are performed in a smart home and to identify errors and inconsistencies in the performed activity. We validate our approach using data collected from 60 volunteers who performed a series of activities in our smart apartment testbed. The results indicate that the algorithms correctly label the activities and successfully assess the completeness and consistency of the performed task. Our results indicate that activity recognition and assessment can be automated using machine learning algorithms and smart home technology. These algorithms will be useful for automating remote health monitoring and interventions.
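
    To make the modelling step concrete, the sketch below learns a first-order Markov transition matrix for one activity class from sensor-event sequences and scores a new sequence by log-likelihood; the sensor event names are hypothetical and the smoothing choice is an assumption, so treat it as an outline of the idea rather than the authors' implementation.

```python
import numpy as np

EVENTS = ["kettle_on", "cup_cabinet", "fridge", "kettle_off", "stir"]
IDX = {e: i for i, e in enumerate(EVENTS)}

def learn_markov(sequences, n_states=len(EVENTS), alpha=1.0):
    """Transition matrix estimated from event sequences, with add-alpha smoothing."""
    counts = np.full((n_states, n_states), alpha)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[IDX[a], IDX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, transitions):
    """Higher values mean the sequence is more consistent with the learned activity."""
    return sum(np.log(transitions[IDX[a], IDX[b]]) for a, b in zip(seq, seq[1:]))

training = [["kettle_on", "cup_cabinet", "fridge", "kettle_off", "stir"],
            ["kettle_on", "fridge", "cup_cabinet", "kettle_off", "stir"]]
make_tea = learn_markov(training)

observed = ["kettle_on", "cup_cabinet", "kettle_off", "stir"]
print(round(log_likelihood(observed, make_tea), 2))
```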

  10. High-performance thin layer chromatography to assess pharmaceutical product quality.

    PubMed

    Kaale, Eliangiringa; Manyanga, Vicky; Makori, Narsis; Jenkins, David; Michael Hope, Samuel; Layloff, Thomas

    2014-06-01

    To assess the sustainability, robustness and economic advantages of high-performance thin layer chromatography (HPTLC) for quality control of pharmaceutical products. We compared three laboratories where three lots of cotrimoxazole tablets were assessed using different techniques for quantifying the active ingredient. The average assay relative standard deviation for the three lots was 1.2 with a range of 0.65-2.0. High-performance thin layer chromatography assessments are yielding valid results suitable for assessing product quality. The local pharmaceutical manufacturer had evolved the capacity to produce very high quality products. © 2014 John Wiley & Sons Ltd.
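
    As a worked example of the statistic quoted above, the relative standard deviation is simply the standard deviation expressed as a percentage of the mean; the assay values below are hypothetical.

```python
import statistics

assays = [99.2, 100.8, 98.9, 101.1, 100.0]    # hypothetical % of label claim, one lot
rsd = 100.0 * statistics.stdev(assays) / statistics.mean(assays)
print(round(rsd, 2))                          # ~0.96 % RSD for this lot
```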

  11. Indoor Air Quality Building Education and Assessment Model

    EPA Pesticide Factsheets

    The Indoor Air Quality Building Education and Assessment Model (I-BEAM), released in 2002, is a guidance tool designed for use by building professionals and others interested in indoor air quality in commercial buildings.

  12. Assessing Children's Home Language Environments Using Automatic Speech Recognition Technology

    ERIC Educational Resources Information Center

    Greenwood, Charles R.; Thiemann-Bourque, Kathy; Walker, Dale; Buzhardt, Jay; Gilkerson, Jill

    2011-01-01

    The purpose of this research was to replicate and extend some of the findings of Hart and Risley using automatic speech processing instead of human transcription of language samples. The long-term goal of this work is to make the current approach to speech processing usable by researchers and clinicians working on a daily basis with families and…

  13. A novel, fuzzy-based air quality index (FAQI) for air quality assessment

    NASA Astrophysics Data System (ADS)

    Sowlat, Mohammad Hossein; Gharibi, Hamed; Yunesian, Masud; Tayefeh Mahmoudi, Maryam; Lotfi, Saeedeh

    2011-04-01

    The ever increasing level of air pollution in most areas of the world has led to the development of a variety of air quality indices for estimating the health effects of air pollution, though these indices have their own limitations, such as high levels of subjectivity. The present study therefore aimed at developing a novel, fuzzy-based air quality index (FAQI) to handle such limitations. The index is based on fuzzy logic, which is considered one of the most common computational methods of artificial intelligence. In addition to the criteria air pollutants (i.e. CO, SO2, PM10, O3, NO2), benzene, toluene, ethylbenzene, xylene, and 1,3-butadiene were also taken into account in the proposed index because of their considerable health effects. Different weighting factors were then assigned to each pollutant according to its priority. Trapezoidal membership functions were employed for the classifications, and the final index consisted of 72 inference rules. To assess the performance of the index, a case study was carried out using air quality data from five different sampling stations in Tehran, Iran, from January 2008 to December 2009, and the results were compared to those obtained from the USEPA air quality index (AQI). According to the results, the fuzzy-based air quality index is a comprehensive tool for the classification of air quality and tends to produce accurate results. Therefore, it can be considered useful, reliable, and suitable for consideration by local authorities in air quality assessment and management schemes.
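
    To make the machinery concrete, here is a minimal Python sketch of trapezoidal fuzzification for a single pollutant. The class breakpoints, class names and the pollutant chosen are invented for the example; the actual FAQI combines several pollutants through 72 weighted inference rules and a defuzzification step, which are not reproduced here.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function with corners a <= b <= c <= d."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical class breakpoints (not the paper's): membership of a PM10
# reading (in µg/m³) in three air quality classes.
PM10_CLASSES = {
    "good":      (0, 0, 30, 60),
    "moderate":  (40, 70, 100, 130),
    "unhealthy": (110, 150, 500, 600),
}

def fuzzify(value, classes):
    """Degree of membership of a single pollutant reading in each class."""
    return {name: trapezoid(value, *pts) for name, pts in classes.items()}

# A full FAQI would feed the fuzzified readings of all pollutants through
# weighted inference rules and defuzzify the result into a single index;
# only the fuzzification step is shown here.
print(fuzzify(85.0, PM10_CLASSES))
```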

  14. ABISM: an interactive image quality assessment tool for adaptive optics instruments

    NASA Astrophysics Data System (ADS)

    Girard, Julien H.; Tourneboeuf, Martin

    2016-07-01

    ABISM (Automatic Background Interactive Strehl Meter) is an interactive tool for evaluating the image quality of astronomical images. It works on seeing-limited point spread functions (PSF) but was developed in particular for the diffraction-limited PSF produced by adaptive optics (AO) systems. In the VLT service mode (SM) operations framework, ABISM is designed to help support astronomers or telescope and instrument operators (TIOs) quickly measure the Strehl ratio (SR) during or right after an observing block (OB), to evaluate whether it meets the requirements/predictions or whether it has to be repeated and will remain in the SM queue. It is a Python-based tool with a graphical user interface (GUI) that can be used with little AO knowledge. The night astronomer (NA) or Telescope and Instrument Operator (TIO) can launch ABISM in one click, and the program is able to read keywords from the FITS header to avoid mistakes. Significant effort was also put into making ABISM robust (and forgiving), with a high rate of repeatability. As a matter of fact, ABISM is able to automatically correct for bad pixels, eliminate stellar neighbours, properly estimate/fit the background, etc.

  15. Virginia Star Quality Initiative: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Virginia's Star Quality Initiative prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators…

  16. Mississippi Quality Step System: QRS Profile. The Child Care Quality Rating System (QRS)Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Mississippi's Quality Step System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Application…

  17. Monitoring caustic injuries from emergency department databases using automatic keyword recognition software.

    PubMed

    Vignally, P; Fondi, G; Taggi, F; Pitidis, A

    2011-03-31

    In Italy, the European Union Injury Database reports the involvement of chemical products in 0.9% of home and leisure accidents. The Emergency Department registry on domestic accidents in Italy and the Poison Control Centres record that 90% of cases of exposure to toxic substances occur in the home. It is not rare for the effects of chemical agents to be observed in hospitals, with a high potential risk of damage: the rate of this cause of hospital admission is double the domestic injury average. The aim of this study was to monitor the effects of injuries caused by caustic agents in Italy using automatic free-text recognition in Emergency Department medical databases. We created a Stata software program to automatically identify caustic or corrosive injury cases using an agent-specific list of keywords. We focused attention on the procedure's sensitivity and specificity. Ten hospitals in six regions of Italy participated in the study. The program identified 112 cases of injury by caustic or corrosive agents. Checking the cases by quality controls (based on manual reading of ED reports), we assessed 99 cases as true positive, i.e. 88.4% of the patients were automatically recognized by the software as being affected by caustic substances (99% CI: 80.6%-96.2%), that is to say 0.59% (99% CI: 0.45%-0.76%) of the whole sample of home injuries, a value almost three times as high as that expected (p < 0.0001) from European codified information. False positives were 11.6% of the recognized cases (99% CI: 5.1%-21.5%). Our automatic procedure for caustic agent identification proved to have excellent product recognition capacity with an acceptable level of excess sensitivity. Contrary to our a priori hypothesis, the automatic recognition system provided a level of identification of agents possessing caustic effects that was significantly greater than was predictable on the basis of current codifications reported in the European Database.
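
    The study's implementation was a Stata program and its keyword list is not reproduced in the abstract; the sketch below, in Python and with an invented keyword list and field layout, only illustrates the general approach of flagging free-text reports by keyword matching and tallying the automatic flags against manually reviewed labels.

```python
import re

# Illustrative keyword list (not the study's agent-specific list).
CAUSTIC_KEYWORDS = [
    r"caustic", r"corrosive", r"sodium hydroxide", r"lye",
    r"bleach", r"ammonia", r"drain cleaner", r"acid burn",
]
PATTERN = re.compile("|".join(CAUSTIC_KEYWORDS), flags=re.IGNORECASE)

def flag_caustic(report_text):
    """Return True if the free-text ED report mentions a caustic agent."""
    return bool(PATTERN.search(report_text))

def confusion_counts(reports, manual_labels):
    """Tally automatic flags against manually reviewed labels (True/False)."""
    tp = fp = fn = tn = 0
    for text, label in zip(reports, manual_labels):
        flagged = flag_caustic(text)
        tp += flagged and label
        fp += flagged and not label
        fn += (not flagged) and label
        tn += (not flagged) and (not label)
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn}
```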

  18. Methodological Quality Assessment of Meta-analyses in Endodontics.

    PubMed

    Kattan, Sereen; Lee, Su-Min; Kohli, Meetu R; Setzer, Frank C; Karabucak, Bekir

    2018-01-01

    The objectives of this review were to assess the methodological quality of published meta-analyses related to endodontics using the assessment of multiple systematic reviews (AMSTAR) tool and to provide a follow-up to previously published reviews. Three electronic databases were searched for eligible studies according to the inclusion and exclusion criteria: Embase via Ovid, The Cochrane Library, and Scopus. The electronic search was amended by a hand search of 6 dental journals (International Endodontic Journal; Journal of Endodontics; Australian Endodontic Journal; Oral Surgery, Oral Medicine, Oral Pathology, Oral Radiology; Endodontics and Dental Traumatology; and Journal of Dental Research). The searches were conducted to include articles published after July 2009, and the deadline for inclusion of the meta-analyses was November 30, 2016. The AMSTAR assessment tool was used to evaluate the methodological quality of all included studies. A total of 36 reports of meta-analyses were included. The overall quality of the meta-analyses reports was found to be medium, with an estimated mean overall AMSTAR score of 7.25 (95% confidence interval, 6.59-7.90). The most poorly assessed areas were providing an a priori design, the assessment of the status of publication, and publication bias. In recent publications in the field of endodontics, the overall quality of the reported meta-analyses is medium according to AMSTAR. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  19. Automatic indexing in a drug information portal.

    PubMed

    Sakji, Saoussen; Letord, Catherine; Dahamna, Badisse; Kergourlay, Ivan; Pereira, Suzanne; Joubert, Michel; Darmoni, Stéfan

    2009-01-01

    The objective of this work is to create a bilingual (French/English) Drug Information Portal (DIP) in a multi-terminological context, and to emphasize its exploitation through ATC automatic indexing, which allows more pertinent information to be obtained about the substances, organs or systems on which drugs act and about their therapeutic and chemical characteristics. The development of the DIP was based on the CISMeF portal, which catalogues and indexes the most important quality-controlled sources of institutional health information in French. DIP has specific functionalities and uses dedicated drug terminologies, such as the ATC classification, which is used to automatically index the DIP resources. DIP is the result of a collaboration between the CISMeF team and the VIDAL Company, which specializes in drug information. DIP is designed to facilitate user information retrieval. The ATC automatic indexing provided relevant results in 76% of cases. In a multi-terminological context, and within the drug field, indexing drugs with the appropriate codes and/or terms proved to be very important for appropriate information storage and retrieval. The main challenge in the coming year is to increase the accuracy of the approach.

  20. Water quality assessment of Australian ports using water quality evaluation indices

    PubMed Central

    Jahan, Sayka

    2017-01-01

    Australian ports serve diverse and extensive activities, such as shipping, tourism and fisheries, which may all impact the quality of port water. In this work, water quality monitoring at different ports, using a range of water quality evaluation indices, was applied to assess port water quality. Seawater samples from 30 stations, collected in 2016-2017 from six ports in NSW, Australia, namely Port Jackson, Botany, Kembla, Newcastle, Yamba and Eden, were investigated to determine the physicochemical and biological variables that affect port water quality. The large datasets obtained were used to determine the Water Quality Index, the Heavy metal Evaluation Index, the Contamination Index and a newly developed Environmental Water Quality Index. The study revealed a medium Water Quality Index and high to medium Heavy metal Evaluation Index at three of the study ports, and a high Contamination Index at almost all study ports. Low levels of dissolved oxygen and elevated levels of total dissolved solids, turbidity, fecal coliforms, copper, iron, lead, zinc, manganese, cadmium and cobalt are mainly responsible for the poor water quality of the port areas. Good water quality in the background samples indicated that various port activities are the likely cause of the poor water quality inside the port areas. PMID:29244876

  1. Quality Assessment of TPB-Based Questionnaires: A Systematic Review

    PubMed Central

    Oluka, Obiageli Crystal; Nie, Shaofa; Sun, Yi

    2014-01-01

    Objective This review is aimed at assessing the quality of questionnaires and their development process based on the theory of planned behavior (TPB) change model. Methods A systematic literature search for studies with the primary aim of TPB-based questionnaire development was conducted in relevant databases between 2002 and 2012 using selected search terms. Ten of 1,034 screened abstracts met the inclusion criteria and were assessed for methodological quality using two different appraisal tools: one for the overall methodological quality of each study and the other developed for the appraisal of the questionnaire content and development process. Both appraisal tools consisted of items regarding the likelihood of bias in each study and were eventually combined to give the overall quality score for each included study. Results 8 of the 10 included studies showed low risk of bias in the overall quality assessment of each study, while 9 of the studies were of high quality based on the quality appraisal of questionnaire content and development process. Conclusion Quality appraisal of the questionnaires in the 10 reviewed studies was successfully conducted, highlighting the top problem areas (including: sample size estimation; inclusion of direct and indirect measures; and inclusion of questions on demographics) in the development of TPB-based questionnaires and the need for researchers to provide a more detailed account of their development process. PMID:24722323

  2. Palm Beach Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Palm Beach's Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…

  3. Maine Quality for ME: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Maine's Quality for ME prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family…

  4. Missouri Quality Rating System: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Missouri's Quality Rating System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…

  5. Miami-Dade Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Miami-Dade's Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…

  6. Indiana Paths to Quality: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Indiana's Paths to Quality prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…

  7. Strategies for automatic processing of large aftershock sequences

    NASA Astrophysics Data System (ADS)

    Kvaerna, T.; Gibbons, S. J.

    2017-12-01

    Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.

  8. An Application of Reverse Engineering to Automatic Item Generation: A Proof of Concept Using Automatically Generated Figures

    ERIC Educational Resources Information Center

    Lorié, William A.

    2013-01-01

    A reverse engineering approach to automatic item generation (AIG) was applied to a figure-based publicly released test item from the Organisation for Economic Cooperation and Development (OECD) Programme for International Student Assessment (PISA) mathematical literacy cognitive instrument as part of a proof of concept. The author created an item…

  9. Teacher Quality and Quality Teaching: Examining the Relationship of a Teacher Assessment to Practice

    ERIC Educational Resources Information Center

    Hill, Heather C.; Umland, Kristin; Litke, Erica; Kapitula, Laura R.

    2012-01-01

    Multiple-choice assessments are frequently used for gauging teacher quality. However, research seldom examines whether results from such assessments generalize to practice. To illuminate this issue, we compare teacher performance on a mathematics assessment, during mathematics instruction, and by student performance on a state assessment. Poor…

  10. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    PubMed

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and by offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
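
    The abstract describes the workflow but not the metrics or file layout, so the following is only a hedged sketch of one plausible daily check: signal-to-noise ratio and percent integral uniformity computed from simple ROIs of the daily phantom image and appended to a CSV time series. The ROI placement, metric choice and file names are assumptions, not the authors' pipeline.

```python
import csv
import datetime
import numpy as np

def phantom_metrics(image):
    """Compute simple QA metrics from a 2-D phantom image (numpy array)."""
    h, w = image.shape
    cy, cx, r = h // 2, w // 2, min(h, w) // 6
    signal_roi = image[cy - r:cy + r, cx - r:cx + r].astype(float)
    noise_roi = image[:h // 10, :w // 10].astype(float)   # background corner
    snr = signal_roi.mean() / (noise_roi.std() + 1e-9)
    # Percent integral uniformity over the central ROI.
    piu = 100.0 * (1 - (signal_roi.max() - signal_roi.min())
                   / (signal_roi.max() + signal_roi.min() + 1e-9))
    return {"snr": snr, "piu": piu}

def append_to_series(metrics, path="daily_qa.csv"):
    """Append today's metrics to a long-running per-scanner time series."""
    with open(path, "a", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow([datetime.date.today().isoformat(),
                         f"{metrics['snr']:.2f}", f"{metrics['piu']:.2f}"])
```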

  11. Research Quality Assessment in Education: Impossible Science, Possible Art?

    ERIC Educational Resources Information Center

    Bridges, David

    2009-01-01

    For better or for worse, the assessment of research quality is one of the primary drivers of the behaviour of the academic community with all sorts of potential for distorting that behaviour. So, if you are going to assess research quality, how do you do it? This article explores some of the problems and possibilities, with particular reference to…

  12. Protein single-model quality assessment by feature-based probability density functions.

    PubMed

    Cao, Renzhi; Cheng, Jianlin

    2016-04-04

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error of each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses these errors to estimate a probability density distribution for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP results show that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. This performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that the new probability-density-based method is effective for protein single-model quality assessment and useful for protein structure prediction. The web server of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is also freely available from the Qprob web server.
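
    To make the general idea concrete, here is a minimal sketch, under loose assumptions, of feature-based density estimation for single-model QA: on training models with known GDT-TS, the distribution of each feature's error against the true score is estimated with a kernel density, and a new model is scored by the candidate quality value that maximises the combined densities. The KDE choice, the grid search and all names are illustrative; this is not Qprob's actual implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_error_densities(feature_matrix, gdt_ts):
    """Estimate, per feature, the density of (feature value - true GDT-TS).

    feature_matrix: (n_models, n_features); gdt_ts: (n_models,).
    """
    return [gaussian_kde(feature_matrix[:, j] - gdt_ts)
            for j in range(feature_matrix.shape[1])]

def predict_quality(features, densities, grid=np.linspace(0.0, 1.0, 101)):
    """Score a new model by the quality value that maximises the product of
    per-feature error densities, evaluated on a grid of candidate scores."""
    log_post = np.zeros_like(grid)
    for f, kde in zip(features, densities):
        log_post += np.log(kde(f - grid) + 1e-12)
    return float(grid[np.argmax(log_post)])
```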

  13. Soil structural quality assessment for soil protection regulation

    NASA Astrophysics Data System (ADS)

    Johannes, Alice; Boivin, Pascal

    2017-04-01

    Soil quality assessment is developing rapidly worldwide, though it is mostly focused on the monitoring of arable land and soil fertility. Soil protection regulations assess soil quality differently, focusing on priority pollutants and threshold values. Soil physical properties are only weakly considered, owing to a lack of consensus and to the experimental difficulties of characterization. Indisputable, easy-to-perform and inexpensive methods should be available for environmental regulation to be applied, which is unfortunately not the case. As a consequence, quantitative soil physical protection regulation is not applied, and inexpensive soil physical quality indicators for arable soil management are not available. Overcoming these limitations was the objective of a research project funded by the Swiss Federal Office for the Environment (FOEN). The main results and the perspectives for application are given in this presentation. The first step of the research was to characterize soils in a good structural state (reference soils) under different land uses. Structural quality was assessed with field expertise and Visual Evaluation of Soil Structure (VESS), and the physical properties were assessed with shrinkage analysis. The relationships between the physical properties and the soil constituents were linear and highly determined; they represent the reference properties of the corresponding soils. In a second step, the properties of physically degraded soils were analysed and compared to the reference properties. This allowed the most discriminant parameters separating the different structural qualities, and their threshold limits, to be defined. Equivalent properties corresponding to these parameters, but inexpensive and easy to determine, were defined and tested. More than 90% of the samples were correctly classified with this method, which therefore meets the requirements for practical application in regulation. Moreover, result-oriented agri-environmental schemes for soil quality

  14. Exploring the Notion of Quality in Quality Higher Education Assessment in a Collaborative Future

    ERIC Educational Resources Information Center

    Maguire, Kate; Gibbs, Paul

    2013-01-01

    The purpose of this article is to contribute to the debate on the notion of quality in higher education with particular focus on "objectifying through articulation" the assessment of quality by professional experts. The article gives an overview of the differentiations of quality as used in higher education. It explores a substantial…

  15. Quality Assessment of University Studies as a Service: Dimensions and Criteria

    ERIC Educational Resources Information Center

    Pukelyte, Rasa

    2010-01-01

    This article reviews a possibility to assess university studies as a service. University studies have to be of high quality both in their content and in the administrative level. Therefore, quality of studies as a service is an important constituent part of study quality assurance. When assessing quality of university studies as a service, it is…

  16. Effects of multisensory environments on stereotyped behaviours assessed as maintained by automatic reinforcement.

    PubMed

    Hill, Lindsay; Trusler, Karen; Furniss, Frederick; Lancioni, Giulio

    2012-11-01

    The aim of the present study was to evaluate the effects of the sensory equipment provided in a multi-sensory environment (MSE) and the level of social contact provided on levels of stereotyped behaviours assessed as being maintained by automatic reinforcement. Stereotyped and engaged behaviours of two young people with severe intellectual disabilities were observed while the participants were either in a living room or in a MSE and receiving either high or low levels of interaction from carers. For both participants, levels of stereotyped behaviour were lower in the MSE irrespective of the level of carer attention received, while levels of engagement were higher under conditions of high carer attention in both environments. The results are consistent with the hypothesis that reductions in stereotyped behaviour observed in MSEs are due to the increased levels of specific sensory stimulation provided by such environments. © 2012 Blackwell Publishing Ltd.

  17. Developing Matlab scripts for image analysis and quality assessment

    NASA Astrophysics Data System (ADS)

    Vaiopoulos, A. D.

    2011-11-01

    Image processing is a very helpful tool in many fields of modern science that involve digital image examination and interpretation. Processed images, however, often need to be correlated with the original image in order to ensure that the resulting image fulfills its purpose. Aside from visual examination, which is mandatory, image quality indices (such as the correlation coefficient, entropy and others) are very useful when deciding which processed image is the most satisfactory. For this reason, a single program (script) was written in the Matlab language, which automatically calculates eight indices by utilizing eight respective functions (independent function scripts). The program was tested on both fused hyperspectral (Hyperion-ALI) and multispectral (ALI, Landsat) imagery and proved to be efficient. The indices were found to be in agreement with visual examination and statistical observations.
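
    The paper's scripts are in Matlab and are not listed in the abstract; purely as an illustration, and for consistency with the other sketches in this collection, here is a small Python equivalent of two of the commonly used indices mentioned: the correlation coefficient between the original and processed image, and the grey-level histogram entropy. The function names and the 256-bin histogram are choices made for the example.

```python
import numpy as np

def correlation_coefficient(img_a, img_b):
    """Pearson correlation between two images of identical shape."""
    a = np.asarray(img_a, dtype=np.float64).ravel()
    b = np.asarray(img_b, dtype=np.float64).ravel()
    return float(np.corrcoef(a, b)[0, 1])

def shannon_entropy(img, bins=256):
    """Entropy (bits) of the image grey-level histogram."""
    hist, _ = np.histogram(np.asarray(img).ravel(), bins=bins)
    p = hist.astype(np.float64)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())
```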

  18. Assessing Question Quality Using NLP

    ERIC Educational Resources Information Center

    Kopp, Kristopher J.; Johnson, Amy M.; Crossley, Scott A.; McNamara, Danielle S.

    2017-01-01

    An NLP algorithm was developed to assess question quality to inform feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). A corpus of 4575 questions was coded using a four-level taxonomy. NLP indices were calculated for each question and machine learning was used to predict…

  19. Airport databases for 3D synthetic-vision flight-guidance displays: database design, quality assessment, and data generation

    NASA Astrophysics Data System (ADS)

    Friedrich, Axel; Raabe, Helmut; Schiefele, Jens; Doerr, Kai Uwe

    1999-07-01

    In future aircraft cockpit designs, SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), an SVS serves to enhance pilot spatial awareness through 3-dimensional perspective views of the objects in the environment. Therefore, all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacles and airport data. For the integration of all these data, the concept of a GIS (Geographical Information System) based HQDB (High-Quality Database) has been created at the TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annexes 4, 11, 14 and 15 and RTCA DO-200A/EUROCAE ED-76 were established in the concept. They can be differentiated into object-related quality-assessment methods, under the keywords accuracy, resolution, timeliness, traceability, assurance level, completeness and format, and GIS-related quality-assessment methods, under the keywords system tolerances, logical consistency and visual quality assessment. An airport database is integrated into the concept as part of the High-Quality Database. The contents of the HQDB are chosen so that they support both flight-guidance SVS and other aeronautical applications, such as SMGCS (Surface Movement Guidance and Control Systems) and flight simulation. Most airport data are not available. Even though data for runways, thresholds, taxilines and parking positions were to be generated by the end of 1997 (ICAO Annexes 11 and 15), only a few countries fulfilled these requirements. For that reason, methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large numbers of airport objects with higher spatial resolution and accuracy in much shorter time than with classical surveying methods. Remotely sensed images can be acquired from satellite

  20. Automatic Generation of Heuristics for Scheduling

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.; Bresina, John L.; Rodgers, Stuart M.

    1997-01-01

    This paper presents a technique, called GenH, that automatically generates search heuristics for scheduling problems. The impetus for developing this technique is the growing consensus that heuristics encode advice that is, at best, useful in solving most, or typical, problem instances, and, at worst, useful in solving only a narrowly defined set of instances. In either case, heuristic problem solvers, to be broadly applicable, should have a means of automatically adjusting to the idiosyncrasies of each problem instance. GenH generates a search heuristic for a given problem instance by hill-climbing in the space of possible multi-attribute heuristics, where the evaluation of a candidate heuristic is based on the quality of the solution found under its guidance. We present empirical results obtained by applying GenH to the real-world problem of telescope observation scheduling. These results demonstrate that GenH is a simple and effective way of improving the performance of a heuristic scheduler.
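
    The hill-climbing idea is easy to show in outline. The sketch below searches over weight vectors for a multi-attribute dispatch heuristic and keeps a candidate whenever the schedule built under it scores better; the attribute names, the `evaluate` callback (which must construct and score a schedule) and all parameters are placeholders, not GenH itself.

```python
import random

def hill_climb_heuristic(attributes, evaluate, iterations=200, step=0.1):
    """Search weight vectors for a multi-attribute scheduling heuristic.

    `attributes` is a list of attribute names; `evaluate(weights)` must build
    a schedule for the problem instance using the weighted heuristic and
    return its quality (higher is better).  Both are supplied by the caller.
    """
    weights = {a: random.random() for a in attributes}
    best_score = evaluate(weights)
    for _ in range(iterations):
        candidate = dict(weights)
        attr = random.choice(attributes)
        candidate[attr] = max(0.0, candidate[attr] + random.uniform(-step, step))
        score = evaluate(candidate)
        if score > best_score:                 # greedy uphill move
            weights, best_score = candidate, score
    return weights, best_score
```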

  1. Implementation of a hospital-based quality assessment program for rectal cancer.

    PubMed

    Hendren, Samantha; McKeown, Ellen; Morris, Arden M; Wong, Sandra L; Oerline, Mary; Poe, Lyndia; Campbell, Darrell A; Birkmeyer, Nancy J

    2014-05-01

    Quality improvement programs in Europe have had a markedly beneficial effect on the processes and outcomes of rectal cancer care. The quality of rectal cancer care in the United States is not as well understood, and scalable quality improvement programs have not been developed. The purpose of this article is to describe the implementation of a hospital-based quality assessment program for rectal cancer, targeting both community and academic hospitals. We recruited 10 hospitals from a surgical quality improvement organization. Nurse reviewers were trained to abstract rectal cancer data from hospital medical records, and abstracts were assessed for accuracy. We conducted two surveys to assess the training program and limitations of the data abstraction. We validated data completeness and accuracy by comparing hospital medical record and tumor registry data. Nine of 10 hospitals successfully performed abstractions with ≥ 90% accuracy. Experienced nurse reviewers were challenged by the technical details in operative and pathology reports. Although most variables had less than 10% missing data, outpatient testing information was lacking from some hospitals' inpatient records. This implementation project yielded a final quality assessment program consisting of 20 medical records variables and 11 tumor registry variables. An innovative program linking tumor registry data to quality-improvement data for rectal cancer quality assessment was successfully implemented in 10 hospitals. This data platform and training program can serve as a template for other organizations that are interested in assessing and improving the quality of rectal cancer care. Copyright © 2014 by American Society of Clinical Oncology.

  2. Evaluations of UltraiQ software for objective ultrasound image quality assessment using images from a commercial scanner.

    PubMed

    Long, Zaiyang; Tradup, Donald J; Stekel, Scott F; Gorny, Krzysztof R; Hangiandreou, Nicholas J

    2018-03-01

    We evaluated a commercially available software package that uses B-mode images to semi-automatically measure quantitative metrics of ultrasound image quality, such as contrast response, depth of penetration (DOP), and spatial resolution (lateral, axial, and elevational). Since measurement of elevational resolution is not a part of the software package, we achieved it by acquiring phantom images with transducers tilted at 45 degrees relative to the phantom. Each measurement was assessed in terms of measurement stability, sensitivity, repeatability, and semi-automated measurement success rate. All assessments were performed on a GE Logiq E9 ultrasound system with linear (9L or 11L), curved (C1-5), and sector (S1-5) transducers, using a CIRS model 040GSE phantom. In stability tests, the measurements of contrast, DOP, and spatial resolution remained within a ±10% variation threshold in 90%, 100%, and 69% of cases, respectively. In sensitivity tests, contrast, DOP, and spatial resolution measurements followed the expected behavior in 100%, 100%, and 72% of cases, respectively. In repeatability testing, intra- and inter-individual coefficients of variations were equal to or less than 3.2%, 1.3%, and 4.4% for contrast, DOP, and spatial resolution (lateral and axial), respectively. The coefficients of variation corresponding to the elevational resolution test were all within 9.5%. Overall, in our assessment, the evaluated package performed well for objective and quantitative assessment of the above-mentioned image qualities under well-controlled acquisition conditions. We are finding it to be useful for various clinical ultrasound applications including performance comparison between scanners from different vendors. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  3. Automatic fluid dispenser

    NASA Technical Reports Server (NTRS)

    Sakellaris, P. C. (Inventor)

    1977-01-01

    Fluid automatically flows to individual dispensing units at predetermined times from a fluid supply and is available only for a predetermined interval of time after which an automatic control causes the fluid to drain from the individual dispensing units. Fluid deprivation continues until the beginning of a new cycle when the fluid is once again automatically made available at the individual dispensing units.

  4. National Water-Quality Assessment Program: Central Arizona Basins

    USGS Publications Warehouse

    Cordy, Gail E.

    1994-01-01

    In 1991, the U.S. Geological Survey (USGS) began to implement a full-scale National Water-Quality Assessment (NAWQA) program. The long-term goals of the NAWQA program are to describe the status and trends in the quality of a large, representative part of the Nation's surface-water and ground-water resources and to provide a sound, scientific understanding of the primary natural and human factors affecting the quality of these resources. In meeting these goals, the program will produce a wealth of water-quality information that will be useful to policymakers and managers at the National, State, and local levels. Studies of 60 hydrologic systems that include parts of most major river basins and aquifer systems (study-unit investigations) are the building blocks of the national assessment. The 60 study units range in size from 1,000 to about 60,000 mi² and represent 60 to 70 percent of the Nation's water use and population served by public water supplies. Twenty study-unit investigations were started in 1991, 20 additional studies started in 1994, and 20 more are planned to start in 1997. The Central Arizona Basins study unit began assessment activities in 1994.

  5. Carotid stenosis assessment with multi-detector CT angiography: comparison between manual and automatic segmentation methods.

    PubMed

    Zhu, Chengcheng; Patterson, Andrew J; Thomas, Owen M; Sadat, Umar; Graves, Martin J; Gillard, Jonathan H

    2013-04-01

    Luminal stenosis is used to select the optimal management strategy for patients with carotid artery disease. The aim of this study was to evaluate the reproducibility of carotid stenosis quantification using manual and automated segmentation methods on submillimeter through-plane resolution multi-detector CT angiography (MDCTA). Thirty-five patients with carotid artery disease and >30% luminal stenosis, as identified by carotid duplex imaging, underwent contrast-enhanced MDCTA. Two experienced CT readers quantified carotid stenosis from axial source images, reconstructed maximum intensity projections (MIP) and 3D carotid geometry, which was automatically segmented by an open-source toolkit (Vascular Modelling Toolkit, VMTK), using NASCET criteria. Good agreement among the measurements using axial images, MIP and automatic segmentation was observed. The automatic segmentation method showed better inter-observer agreement between the readers (intra-class correlation coefficient (ICC): 0.99 for diameter stenosis measurement) than manual measurement of axial (ICC = 0.82) and MIP (ICC = 0.86) images. Carotid stenosis quantification using an automatic segmentation method has higher reproducibility compared with manual methods.
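
    The NASCET criterion mentioned above is a simple ratio, so a tiny worked example may help: the degree of stenosis is one minus the minimal residual lumen diameter divided by the diameter of the normal distal internal carotid artery, expressed as a percentage. The function below only illustrates that formula; it is not part of the study's segmentation pipeline.

```python
def nascet_stenosis_percent(min_lumen_diameter_mm, distal_ica_diameter_mm):
    """Degree of stenosis by the NASCET criterion:
    (1 - minimal residual lumen / normal distal ICA diameter) * 100."""
    if distal_ica_diameter_mm <= 0:
        raise ValueError("distal ICA diameter must be positive")
    return (1.0 - min_lumen_diameter_mm / distal_ica_diameter_mm) * 100.0

# Example: a 2.1 mm residual lumen with a 6.0 mm distal ICA gives 65% stenosis.
print(round(nascet_stenosis_percent(2.1, 6.0), 1))
```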

  6. MQAPRank: improved global protein model quality assessment by learning-to-rank.

    PubMed

    Jing, Xiaoyang; Dong, Qiwen

    2017-05-25

    Protein structure prediction has made considerable progress during the last few decades, and an ever greater number of models can now be predicted for a given sequence. Consequently, assessing the quality of predicted protein models is one of the key components of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue; they can be roughly divided into three categories: single methods, quasi-single methods and clustering (or consensus) methods. Although these methods achieve much success at different levels, accurate protein model quality assessment is still an open problem. Here, we present MQAPRank, a global protein model quality assessment program based on learning-to-rank. MQAPRank first sorts the decoy models using a single method based on a learning-to-rank algorithm, to indicate their relative qualities for the target protein. It then takes the five top-ranked models as references and predicts the qualities of the other models using the average GDT_TS scores between the reference models and the other models. Benchmarked on the CASP11 and 3DRobot datasets, MQAPRank achieved better performance than other leading protein model quality assessment methods. Recently, MQAPRank participated in CASP12 under the group name FDUBio and achieved state-of-the-art performance. MQAPRank provides a convenient and powerful tool for protein model quality assessment with state-of-the-art performance, and it is useful for protein structure prediction and model quality assessment.
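
    The reference-averaging step described in the abstract can be shown in a few lines. The sketch below assumes a caller-supplied first-stage score for each decoy (standing in for the learning-to-rank output) and a caller-supplied pairwise similarity function (GDT_TS in MQAPRank); both are placeholders rather than the authors' code.

```python
def rescore_by_references(models, initial_scores, similarity, n_refs=5):
    """Re-score decoy models against the top-ranked reference models.

    `initial_scores` maps model id -> score from the first (ranking) stage;
    `similarity(a, b)` returns a structural similarity between two models.
    Only the averaging step described in the abstract is shown here.
    """
    ranked = sorted(models, key=lambda m: initial_scores[m], reverse=True)
    references = ranked[:n_refs]
    return {m: sum(similarity(m, r) for r in references) / len(references)
            for m in models}
```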

  7. Automatic short axis orientation of the left ventricle in 3D ultrasound recordings

    NASA Astrophysics Data System (ADS)

    Pedrosa, João.; Heyde, Brecht; Heeren, Laurens; Engvall, Jan; Zamorano, Jose; Papachristidis, Alexandros; Edvardsen, Thor; Claus, Piet; D'hooge, Jan

    2016-04-01

    The recent advent of three-dimensional echocardiography has led to an increased interest from the scientific community in left ventricle segmentation frameworks for cardiac volume and function assessment. An automatic orientation of the segmented left ventricular mesh is an important step to obtain a point-to-point correspondence between the mesh and the cardiac anatomy. Furthermore, this would allow for an automatic division of the left ventricle into the standard 17 segments and, thus, fully automatic per-segment analysis, e.g. regional strain assessment. In this work, a method for fully automatic short axis orientation of the segmented left ventricle is presented. The proposed framework aims at detecting the inferior right ventricular insertion point. 211 three-dimensional echocardiographic images were used to validate this framework by comparison to manual annotation of the inferior right ventricular insertion point. A mean unsigned error of 8.05° +/- 18.50° was found, whereas the mean signed error was 1.09°. Large deviations between the manual and automatic annotations (>30°) only occurred in 3.79% of cases. The average computation time was 666 ms in a non-optimized MATLAB environment, which potentiates real-time application. In conclusion, a successful automatic real-time method for orientation of the segmented left ventricle is proposed.

  8. Response analysis in histopathology external quality assessment schemes.

    PubMed

    Furness, P N; Lauder, I

    1993-04-01

    To develop a computerised method for analysing the results of histopathology external quality assessment (EQA) schemes which can provide confidential personal reports to individual participating pathologists. A program was developed using the OMNIS database system, running on Apple Macintosh or IBM compatible computers. The program produces a general report of participants' responses to each case, and a choice of two types of personal report. One of these provides a list of the participant's diagnoses with a list of the most popular (Consensus) diagnoses for comparison. The other provides automatically calculated scores for the pathologist's performance along with simple statistical evaluation. The scores can be calculated by comparison with the consensus of the group or with correct diagnoses if they are known. A histogram indicating the distribution of performance within the group can be produced. The program can accept uncertainty in the form of differential diagnosis lists from participants. Potentially dangerous diagnostic errors can be identified and handled separately. Participants are identified only by code numbers and confidentiality can easily be enforced. The program is currently being used in the national renal pathology EQA scheme and in the local general histopathology scheme in the East Midlands. This program offers solutions to problems which have bedevilled the organisers of histopathology EQA schemes. It offers confidential advice to pathologists and will help to identify areas where an individual might benefit from continuing career grade medical education. It raises the possibility of the development of nationally agreed standards of performance in the reporting of pathological specimens, and it may be applicable to other specialties where textual reports are produced.

  9. Adult orthodontics: a quality assessment of Internet information.

    PubMed

    McMorrow, Siobhán Mary; Millett, Declan T

    2016-09-01

    This study evaluated the quality, reliability and readability of information on the Internet on adult orthodontics. A quality assessment of adult orthodontic websites. Postgraduate Orthodontic Unit, Cork University Dental School and Hospital, Cork, Ireland. An Internet search using three search engines (Google, Yahoo and Bing) was conducted using the terms ('adult orthodontics' and 'adult braces'). The first 50 websites from each engine and under each search term were screened and exclusion criteria applied. Included websites were then assessed for quality using four methods: the HON seal, JAMA benchmarks, the DISCERN instrument and the LIDA tool. Readability of included websites was assessed using the Flesch Reading Ease Score (FRES). Only 13 websites met the inclusion criteria. Most were of US origin (n = 8; 61%). The authors of the websites were dentists (n = 5; 39%), professional organizations (n = 2; 15%), past patients (n = 2; 15%) and unspecified (n = 4; 31%). Only 1 website displayed the HON seal and three websites contained all JAMA benchmarks. The mean overall score for DISCERN was 3.9/5 and the mean total LIDA score was 115/144. The average FRES score was 63.1/100. The number of informative websites on adult orthodontics is low and these are of moderate quality. More accurate, high-quality Internet resources are required on adult orthodontics. Recommendations are made as to how this may be achieved.

  10. Developing quality indicators and auditing protocols from formal guideline models: knowledge representation and transformations.

    PubMed

    Advani, Aneel; Goldstein, Mary; Shahar, Yuval; Musen, Mark A

    2003-01-01

    Automated quality assessment of clinician actions and patient outcomes is a central problem in guideline- or standards-based medical care. In this paper we describe a model representation and algorithm for deriving structured quality indicators and auditing protocols from formalized specifications of guidelines used in decision support systems. We apply the model and algorithm to the assessment of physician concordance with a guideline knowledge model for hypertension used in a decision-support system. The properties of our solution include the ability to automatically derive context-specific and case-mix-adjusted quality indicators that can model global or local levels of detail about the guideline, parameterized by the reliability defined for each indicator or element of the guideline.

  11. Automatic Conceptual Encoding of Printed Verbal Material: Assessment of Population Differences.

    ERIC Educational Resources Information Center

    Kee, Daniel W.; And Others

    1984-01-01

    The release from proactive interference task was used to investigate categorical encoding of items. Low socioeconomic status Black and middle socioeconomic status White children were compared. Conceptual encoding differences between these populations were not detected in automatic conceptual encoding but were detected when the free recall method…

  12. Michigan lakes: An assessment of water quality

    USGS Publications Warehouse

    Minnerick, R.J.

    2004-01-01

    Michigan has more than 11,000 inland lakes that provide countless recreational opportunities and are an important resource, making tourism and recreation a $15-billion-per-year industry in the State (Stynes, 2002). Knowledge of the water-quality characteristics of inland lakes is essential for the current and future management of these resources. Historically, the U.S. Geological Survey (USGS) and the Michigan Department of Environmental Quality (MDEQ) have jointly monitored water quality in Michigan's lakes and rivers. During the 1990s, however, funding for surface-water-quality monitoring was greatly reduced. In 1998, the citizens of Michigan passed the Clean Michigan Initiative to clean up, protect, and enhance Michigan's environmental infrastructure. Because of expanding water-quality-data needs, the MDEQ and the USGS jointly redesigned and implemented the Lake Water-Quality Assessment (LWQA) Monitoring Program (Michigan Department of Environmental Quality, 1997).

  13. Validity of portfolio assessment: which qualities determine ratings?

    PubMed

    Driessen, Erik W; Overeem, Karlijn; van Tartwijk, Jan; van der Vleuten, Cees P M; Muijtjens, Arno M M

    2006-09-01

    The portfolio is becoming increasingly accepted as a valuable tool for learning and assessment. The validity of portfolio assessment, however, may suffer from bias due to irrelevant qualities, such as lay-out and writing style. We examined the possible effects of such qualities in a portfolio programme aimed at stimulating Year 1 medical students to reflect on their professional and personal development. In later curricular years, this portfolio is also used to judge clinical competence. We developed an instrument, the Portfolio Analysis Scoring Inventory, to examine the impact of form and content aspects on portfolio assessment. The Inventory consists of 15 items derived from interviews with experienced mentors, the literature, and the criteria for reflective competence used in the regular portfolio assessment procedure. Forty portfolios, selected from 231 portfolios for which ratings from the regular assessment procedure were available, were rated by 2 researchers, independently, using the Inventory. Regression analysis was used to estimate the correlation between the ratings from the regular assessment and those resulting from the Inventory items. Inter-rater agreement ranged from 0.46 to 0.87. The strongest predictor of the variance in the regular ratings was 'quality of reflection' (R = 0.80; R² = 66%). No further items accounted for a significant proportion of variance. Irrelevant items, such as writing style and lay-out, had negligible effects. The absence of an impact of irrelevant criteria appears to support the validity of the portfolio assessment procedure. Further studies should examine the portfolio's validity for the assessment of clinical competence.

  14. Groundwater-quality data from the National Water-Quality Assessment Project, January through December 2014 and select quality-control data from May 2012 through December 2014

    USGS Publications Warehouse

    Arnold, Terri L.; Bexfield, Laura M.; Musgrove, MaryLynn; Lindsey, Bruce D.; Stackelberg, Paul E.; Barlow, Jeannie R.; Desimone, Leslie A.; Kulongoski, Justin T.; Kingsbury, James A.; Ayotte, Joseph D.; Fleming, Brandon J.; Belitz, Kenneth

    2017-10-05

    Groundwater-quality data were collected from 559 wells as part of the National Water-Quality Assessment Project of the U.S. Geological Survey National Water-Quality Program from January through December 2014. The data were collected from four types of well networks: principal aquifer study networks, which are used to assess the quality of groundwater used for public water supply; land-use study networks, which are used to assess land-use effects on shallow groundwater quality; major aquifer study networks, which are used to assess the quality of groundwater used for domestic supply; and enhanced trends networks, which are used to evaluate the time scales during which groundwater quality changes. Groundwater samples were analyzed for a large number of water-quality indicators and constituents, including major ions, nutrients, trace elements, volatile organic compounds, pesticides, radionuclides, and some constituents of special interest (arsenic speciation, chromium [VI] and perchlorate). These groundwater-quality data, along with data from quality-control samples, are tabulated in this report and in an associated data release.

  15. Monitoring and Assessment of Youshui River Water Quality in Youyang

    NASA Astrophysics Data System (ADS)

    Wang, Xue-qin; Wen, Juan; Chen, Ping-hua; Liu, Na-na

    2018-02-01

    By monitoring the water quality of the Youshui River from January 2016 to December 2016, and according to the indicator grading and assessment standards for water quality, formulas for three types of water quality index were established. These three indexes, the single-indicator index Ai, the single-moment index Ak and the comprehensive water quality index A, were used to quantitatively evaluate the quality of each single indicator, the overall water quality, and the change of water quality over time. The results show that the total phosphorus and fecal coliform indicators exceeded the standard, while the other 16 measured indicators met it. The water quality index of the Youshui River is 0.93 and the comprehensive assessment grade is level 2, which indicates that the water quality of the Youshui River is good, with room for further improvement. To this end, several protection measures for Youshui River environmental management and pollution treatment are proposed.
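
    The abstract names the indexes Ai, Ak and A but does not give their formulas, so the following is only a hedged sketch of one common construction: Ai as the ratio of a measured concentration to its class standard, Ak as the mean of the Ai at one sampling moment, and A as the mean of the Ak over the monitoring period. The standard limits, indicator names and sample values are invented for the example and are not taken from the paper.

```python
# Hypothetical standard limits (mg/L) for a grade-II target; not the paper's.
STANDARDS = {"total_phosphorus": 0.1, "ammonia_n": 0.5, "cod_mn": 4.0}

def single_indicator_index(value, standard):
    """Ai: measured value relative to its standard (>1 means exceedance)."""
    return value / standard

def single_moment_index(sample):
    """Ak: average of the single-indicator indexes at one sampling moment."""
    ai = [single_indicator_index(v, STANDARDS[k]) for k, v in sample.items()]
    return sum(ai) / len(ai)

def comprehensive_index(samples):
    """A: average of the single-moment indexes over the monitoring period."""
    ak = [single_moment_index(s) for s in samples]
    return sum(ak) / len(ak)

# Example with two hypothetical monthly samples:
jan = {"total_phosphorus": 0.12, "ammonia_n": 0.30, "cod_mn": 3.5}
jul = {"total_phosphorus": 0.08, "ammonia_n": 0.25, "cod_mn": 4.2}
print(round(comprehensive_index([jan, jul]), 2))
```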

  16. Assessing Quality in Graduate Programs: An Internal Quality Indicator. AIR Forum 1981 Paper.

    ERIC Educational Resources Information Center

    DiBiasio, Daniel A.; And Others

    Four approaches to measuring quality in graduate education are reviewed, and the approach used at the graduate school at Ohio State University is assessed. Four approaches found in the literature are: measuring quality by reputation, by scholarly productivity, by correlating reputation and scholarly productivity, and by multiple measures. Ohio…

  17. Display device-adapted video quality-of-experience assessment

    NASA Astrophysics Data System (ADS)

    Rehman, Abdul; Zeng, Kai; Wang, Zhou

    2015-03-01

    Today's viewers consume video content from a variety of connected devices, including smart phones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for the end users as the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution while performing video quality assessment. We performed a subjective study in order to understand the impact of aforementioned factors on perceptual video QoE. We also propose a full reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results have shown that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.

  18. Self-assessment procedure using fuzzy sets

    NASA Astrophysics Data System (ADS)

    Mimi, Fotini

    2000-10-01

    Self-assessment processes, initiated by a company itself and carried out by its own people, are considered to be the starting point for a regular strategic or operative planning process to ensure continuous quality improvement. Their importance has increased with the growing relevance and acceptance of international quality awards such as the Malcolm Baldrige National Quality Award, the European Quality Award and the Deming Prize. Award winners in particular use the instrument of systematic and regular self-assessment, and not only because they have to verify their quality and business results for at least three years. The Total Quality Model of the European Foundation for Quality Management (EFQM), used for the European Quality Award, is the basis for self-assessment in Europe. This paper presents a method to support self-assessment, based on the methodology of fuzzy control systems, which provides an effective means of converting linguistic approximations into an automatic control strategy. In particular, the elements of the Quality Model mentioned above are interpreted as linguistic variables. The LR-type fuzzy interval is used for their representation. The input data have a qualitative character based on empirical investigation and expert knowledge, and the base variables are therefore ordinally scaled. The aggregation process takes place on the basis of a hierarchical structure. Finally, in order to make the method more practical to use, a PC-based software system was developed and implemented.

  19. Assess and Predict Automatic Generation Control Performances for Thermal Power Generation Units Based on Modeling Techniques

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao

    2018-02-01

    Automatic generation control (AGC) is a key technology for maintaining the real-time balance between power generation and load, and for ensuring the quality of the power supply. Power grids require each power generation unit to have satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of a power generation unit. However, the commonly used method of calculating these indices is based on particular data samples from AGC responses and can lead to incorrect results in practice. This paper proposes a new method for estimating the AGC performance indices via system identification techniques. In addition, a nonlinear regression model relating the performance indices to the load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.
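
    The record does not reproduce the identification procedure, but the general idea of replacing sample-based index calculation with system identification can be sketched as follows: a first-order-plus-dead-time model is fitted to the unit's measured response to a step in the load command, and simple response indices are derived from the fitted parameters. The model structure, parameter names and indices below are assumptions for illustration, not the indices defined in the two detailed rules.

      import numpy as np
      from scipy.optimize import curve_fit

      def first_order_step(t, gain, tau, delay):
          # Step response of a first-order-plus-dead-time model (assumed structure).
          response = gain * (1.0 - np.exp(-(t - delay) / tau))
          return np.where(t >= delay, response, 0.0)

      # Simulated, noisy unit response to a 50 MW step in the AGC load command.
      t = np.linspace(0.0, 300.0, 301)
      measured = first_order_step(t, 50.0, 60.0, 10.0) + np.random.normal(0.0, 0.5, t.size)

      (gain, tau, delay), _ = curve_fit(first_order_step, t, measured, p0=[40.0, 30.0, 5.0])
      ramp_rate = gain / tau            # MW/s near the start of the ramp (illustrative index)
      settling_time = delay + 4 * tau   # time to reach roughly 98 % of the commanded change
      print(round(gain, 1), round(tau, 1), round(delay, 1),
            round(ramp_rate, 2), round(settling_time, 1))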

  20. Quality Management Plan for the Environmental Assessment and Innovation Division

    EPA Pesticide Factsheets

    Quality management plan (QMP) which identifies the mission, roles, responsibilities of personnel with regard to quality assurance and quality management for the environmental assessment and innovation division.

  1. Criteria for assessment of bridge aesthetic and visual quality

    NASA Astrophysics Data System (ADS)

    Rozentale, I.; Paeglitis, A.

    2017-10-01

    The bridge designers should find an ideal balance between structure, economy, buildability, aesthetics, durability and harmony with industrial or natural landscape. During the last years, the society has adopted documents providing procedures for evaluation of the impact of the structural appearance on surrounding landscape. The European Landscape Convention defines the landscape as an area perceived by people, whose character is the result of the action and interaction of natural and/or human factors. The Convention indicates the methods for clear and objective assessment of the landscape’s visual qualities. The esthetical qualities of bridge structures, appearance and attraction should satisfy not only the technicians - engineers and architects, but mostly the surrounding population. Each of these groups has a different perception of esthetical qualities of structure. Many authors have used different methods and criteria for assessment of bridge aesthetics. The aim of this paper is to provide an overview of the bridge aesthetic and visual quality assessment methods and criteria.

  2. Quality of health care and the need for assessment.

    PubMed

    Bosse, G; Ngoli, B; Leshabari, M T; Külker, R; Dämmrich, T; Abels, W; Breuer, J P; Kersten, R; Spies, C

    2011-09-01

    In many hospitals of developing countries quality of care is below the expected standard to maintain patient safety. In 2006, health care experts from Tanzania and Germany collaborated on a set of indicators to be used as a hospital performance assessment tool. The aim of this study was to introduce this tool and check its feasibility for use in a Tanzanian regional hospital. Within the hospital, independent observers assessed quantitatively structural quality and the performance of health care encounter using an itemized scale from 0 (0%) to 2 (100%) for each defined item. Outcome parameters were taken from the annual hospital report. In addition, semi-qualitative interviews with staff and patients were held to a) assess staff knowledge of the treatment guidelines published by the Tanzanian Ministry of Health and Social Welfare (MoHSW), b) assess attitudes and user motivation and c) authenticate the quantitative findings in a mixed-method triangulation approach. Structural quality in maternity was at 75% of the expected standard, while process quality ranged from 36% (Care of the newborn with APGAR score < 4) to 47% (normal delivery procedure). Staff knowledge ranged between 64% and 87% with low motivation and commitment given as contributing factors. Outcome (maternal mortality) was 481/100,000 live births with an infant mortality rate of 10%. The tool appeared to be feasible and effective in judging care quality. It provides a model for continuous quality improvement. Motivation of health care workers, a strong determinant of care process quality, might be improved by strengthening internal factors in health facilities. For conclusive validation, further studies using the tool must be conducted with larger numbers of institutions.

  3. Automatic Analysis of Critical Incident Reports: Requirements and Use Cases.

    PubMed

    Denecke, Kerstin

    2016-01-01

    Increasingly, critical incident reports are used as a means to increase patient safety and quality of care. The entire potential of these sources of experiential knowledge remains often unconsidered since retrieval and analysis is difficult and time-consuming, and the reporting systems often do not provide support for these tasks. The objective of this paper is to identify potential use cases for automatic methods that analyse critical incident reports. In more detail, we will describe how faceted search could offer an intuitive retrieval of critical incident reports and how text mining could support in analysing relations among events. To realise an automated analysis, natural language processing needs to be applied. Therefore, we analyse the language of critical incident reports and derive requirements towards automatic processing methods. We learned that there is a huge potential for an automatic analysis of incident reports, but there are still challenges to be solved.

  4. Low-cost oblique illumination: an image quality assessment.

    PubMed

    Ruiz-Santaquiteria, Jesus; Espinosa-Aranda, Jose Luis; Deniz, Oscar; Sanchez, Carlos; Borrego-Ramos, Maria; Blanco, Saul; Cristobal, Gabriel; Bueno, Gloria

    2018-01-01

    We study the effectiveness of several low-cost oblique illumination filters for improving overall image quality, in comparison with standard bright-field imaging. For this purpose, a dataset of 3360 diatom images belonging to 21 taxa was acquired. Subjective and objective image quality assessments were performed. The subjective evaluation was carried out by a group of diatom experts through a psychophysical test in which resolution, focus, and contrast were assessed. In addition, several objective no-reference image quality metrics were applied to the same image dataset to complete the study, together with the calculation of several texture features to analyze the effect of these filters on textural properties. Both image quality evaluation methods, subjective and objective, showed better results for images acquired using these illumination filters than for the unfiltered images. These promising results confirm that this kind of illumination filter can be a practical way to improve image quality, thanks to the simple and low-cost design and manufacturing process.

  5. Clinical Decision Support System to Enhance Quality Control of Spirometry Using Information and Communication Technologies

    PubMed Central

    2014-01-01

    Background: We recently demonstrated that the quality of spirometry in primary care could markedly improve with remote offline support from specialized professionals. It is hypothesized that implementation of automatic online assessment of spirometry quality using information and communication technologies may significantly enhance the potential for extensive deployment of a high-quality spirometry program in integrated care settings. Objective: The objective of the study was to elaborate and validate a Clinical Decision Support System (CDSS) for automatic online quality assessment of spirometry. Methods: The CDSS was developed through a three-step process including: (1) identification of the optimal sampling frequency; (2) iterations to build an initial version using the 24 standard spirometry curves recommended by the American Thoracic Society; and (3) iterations to refine the CDSS using 270 curves from 90 patients. In each of these steps the results were checked against one expert's opinion. Finally, 778 spirometry curves from 291 patients were analyzed for validation purposes. Results: The CDSS generated appropriate online classification and certification in 685/778 (88.1%) of spirometry tests, with 96% sensitivity and 95% specificity. Conclusions: Consequently, only 93/778 (11.9%) of spirometry tests required offline remote classification by an expert, indicating a potential positive role of the CDSS in the deployment of a high-quality spirometry program in an integrated care setting. PMID:25600957

  6. [Ecological environmental quality assessment of Hangzhou urban area based on RS and GIS].

    PubMed

    Xu, Pengwei; Zhao, Duo

    2006-06-01

    To address the shortcomings of traditional ecological environmental quality assessment, this paper studied the spatial distribution of assessment factors at a mid-small scale and their conversion to gridded assessment cells. The main assessment factors, covering natural environmental conditions, environmental quality, natural landscape and urbanization pressure, were grouped into four types comprising about eleven factors and were derived from RS images and a GIS spatial analysis of environmental quality vector maps. Based on GIS, a comprehensive assessment model for the ecological environmental quality of the Hangzhou urban area was established. In comparison with observed urban heat island effects, the assessment results agreed well with the ecological environmental quality of the urban area of Hangzhou.

  7. 42 CFR 438.240 - Quality assessment and performance improvement program.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Quality assessment and performance improvement program. 438.240 Section 438.240 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS MANAGED CARE Quality Assessment and...

  8. 42 CFR 438.240 - Quality assessment and performance improvement program.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Quality assessment and performance improvement program. 438.240 Section 438.240 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS MANAGED CARE Quality Assessment and...

  9. 42 CFR 438.240 - Quality assessment and performance improvement program.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Quality assessment and performance improvement program. 438.240 Section 438.240 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS MANAGED CARE Quality Assessment and...

  10. 42 CFR 438.240 - Quality assessment and performance improvement program.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Quality assessment and performance improvement program. 438.240 Section 438.240 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS MANAGED CARE Quality Assessment and...

  11. A Procedure for Extending Input Selection Algorithms to Low Quality Data in Modelling Problems with Application to the Automatic Grading of Uploaded Assignments

    PubMed Central

    Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis

    2014-01-01

    When selecting relevant inputs in modeling problems with low-quality data, the ranking of the most informative inputs is itself uncertain. In this paper, this issue is addressed through a new procedure that allows different crisp feature selection algorithms to be extended to vague data. The partial knowledge about the rank of each feature is modelled by means of a possibility distribution, and a ranking is then applied to sort these distributions. It is shown that this technique makes the most of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically grading students through code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967

  12. Image quality assessment by preprocessing and full reference model combination

    NASA Astrophysics Data System (ADS)

    Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.

    2009-01-01

    This paper focuses on full-reference image quality assessment and presents different computational strategies aimed to improve the robustness and accuracy of some well known and widely used state of the art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model could allow a better fit of the psycho-visual data of the LIVE Image Quality assessment Database Release 2. We show that the proposed quality assessment metric better correlates with the experimental data.
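
    A minimal sketch of the combination strategy, under the assumption that attention acts as a spatial weighting of a local error map: here a blockwise MSE map stands in for the SSIM or S-CIELAB error image, and a synthetic centre-bias Gaussian stands in for the visual attention model used by the authors.

      import numpy as np

      def local_mse_map(reference, distorted, block=8):
          # Blockwise mean squared error as a stand-in for an SSIM / S-CIELAB error map.
          h, w = reference.shape
          h, w = h - h % block, w - w % block
          diff2 = (reference[:h, :w] - distorted[:h, :w]) ** 2
          return diff2.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

      def center_bias_saliency(shape):
          # Synthetic attention map: Gaussian centre bias, normalised to sum to one.
          ys, xs = np.indices(shape)
          cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
          sal = np.exp(-(((ys - cy) / shape[0]) ** 2 + ((xs - cx) / shape[1]) ** 2) / 0.08)
          return sal / sal.sum()

      def attention_weighted_distortion(reference, distorted):
          err = local_mse_map(reference, distorted)
          sal = center_bias_saliency(err.shape)
          return float((err * sal).sum())   # lower means less visible distortion

      ref = np.random.rand(64, 64)
      dist = ref + np.random.normal(0.0, 0.05, ref.shape)
      print(attention_weighted_distortion(ref, dist))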

  13. Factors Influencing Assessment Quality in Higher Vocational Education

    ERIC Educational Resources Information Center

    Baartman, Liesbeth; Gulikers, Judith; Dijkstra, Asha

    2013-01-01

    The development of assessments that are fit to assess professional competence in higher vocational education requires a reconsideration of assessment methods, quality criteria and (self)evaluation. This article examines the self-evaluations of nine courses of a large higher vocational education institute. Per course, 4-11 teachers and 3-10…

  14. Automatic Evaluations and Exercising: Systematic Review and Implications for Future Research.

    PubMed

    Schinkoeth, Michaela; Antoniewicz, Franziska

    2017-01-01

    The general purpose of this systematic review was to summarize, structure and evaluate the findings on automatic evaluations of exercising. Studies were eligible for inclusion if they reported measuring automatic evaluations of exercising with an implicit measure and assessed some kind of exercise variable. Fourteen nonexperimental and six experimental studies (out of a total N = 1,928) were identified and rated by two independent reviewers. The main study characteristics were extracted and the grade of evidence for each study evaluated. First, results revealed a large heterogeneity in the applied measures to assess automatic evaluations of exercising and the exercise variables. Generally, small to large-sized significant relations between automatic evaluations of exercising and exercise variables were identified in the vast majority of studies. The review offers a systematization of the various examined exercise variables and prompts to differentiate more carefully between actually observed exercise behavior (proximal exercise indicator) and associated physiological or psychological variables (distal exercise indicator). Second, a lack of transparent reported reflections on the differing theoretical basis leading to the use of specific implicit measures was observed. Implicit measures should be applied purposefully, taking into consideration the individual advantages or disadvantages of the measures. Third, 12 studies were rated as providing first-grade evidence (lowest grade of evidence), five represent second-grade and three were rated as third-grade evidence. There is a dramatic lack of experimental studies, which are essential for illustrating the cause-effect relation between automatic evaluations of exercising and exercise and investigating under which conditions automatic evaluations of exercising influence behavior. Conclusions about the necessity of exercise interventions targeted at the alteration of automatic evaluations of exercising should therefore

  15. Automatic Evaluations and Exercising: Systematic Review and Implications for Future Research

    PubMed Central

    Schinkoeth, Michaela; Antoniewicz, Franziska

    2017-01-01

    The general purpose of this systematic review was to summarize, structure and evaluate the findings on automatic evaluations of exercising. Studies were eligible for inclusion if they reported measuring automatic evaluations of exercising with an implicit measure and assessed some kind of exercise variable. Fourteen nonexperimental and six experimental studies (out of a total N = 1,928) were identified and rated by two independent reviewers. The main study characteristics were extracted and the grade of evidence for each study evaluated. First, results revealed a large heterogeneity in the applied measures to assess automatic evaluations of exercising and the exercise variables. Generally, small to large-sized significant relations between automatic evaluations of exercising and exercise variables were identified in the vast majority of studies. The review offers a systematization of the various examined exercise variables and prompts to differentiate more carefully between actually observed exercise behavior (proximal exercise indicator) and associated physiological or psychological variables (distal exercise indicator). Second, a lack of transparent reported reflections on the differing theoretical basis leading to the use of specific implicit measures was observed. Implicit measures should be applied purposefully, taking into consideration the individual advantages or disadvantages of the measures. Third, 12 studies were rated as providing first-grade evidence (lowest grade of evidence), five represent second-grade and three were rated as third-grade evidence. There is a dramatic lack of experimental studies, which are essential for illustrating the cause-effect relation between automatic evaluations of exercising and exercise and investigating under which conditions automatic evaluations of exercising influence behavior. Conclusions about the necessity of exercise interventions targeted at the alteration of automatic evaluations of exercising should therefore

  16. Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi

    2017-05-01

    Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
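
    The proposed label fusion algorithm is not specified in this record; the sketch below shows only the baseline idea it improves upon, namely majority-vote fusion that assigns exactly one label per voxel so that the fused organ masks can neither overlap nor leave gap voxels between them. Array shapes and organ counts are illustrative.

      import numpy as np

      def fuse_labels(atlas_labels, num_organs):
          # Majority-vote fusion over registered atlas label maps.  Every voxel gets
          # exactly one winning label (0 = background, 1..num_organs = organs), so the
          # fused organ masks cannot overlap.
          votes = np.zeros((num_organs + 1,) + atlas_labels.shape[1:], dtype=np.int32)
          for label in range(num_organs + 1):
              votes[label] = (atlas_labels == label).sum(axis=0)
          return votes.argmax(axis=0)

      # Toy example: three registered 4x4 "atlases" voting on two organs.
      atlases = np.random.randint(0, 3, size=(3, 4, 4))
      print(fuse_labels(atlases, num_organs=2))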

  17. Analysis of results obtained using the automatic chemical control of the quality of the water heat carrier in the drum boiler of the Ivanovo CHP-3 power plant

    NASA Astrophysics Data System (ADS)

    Larin, A. B.; Kolegov, A. V.

    2012-10-01

    Results of industrial tests of the new method used for the automatic chemical control of the quality of boiler water of the drum-type power boiler (Pd = 13.8 MPa) are described. The possibility of using an H-cationite column for measuring the electric conductivity of an H-cationized sample of boiler water over a long period of time is shown.

  18. Image Quality Assessment Based on Local Linear Information and Distortion-Specific Compensation.

    PubMed

    Wang, Hanli; Fu, Jie; Lin, Weisi; Hu, Sudeng; Kuo, C-C Jay; Zuo, Lingxuan

    2016-12-14

    Image Quality Assessment (IQA) is a fundamental yet constantly developing task for computer vision and image processing. Most IQA evaluation mechanisms are based on the pertinence of subjective and objective estimation. Each image distortion type has its own property correlated with human perception. However, this intrinsic property may not be fully exploited by existing IQA methods. In this paper, we make two main contributions to the IQA field. First, a novel IQA method is developed based on a local linear model that examines the distortion between the reference and the distorted images for better alignment with human visual experience. Second, a distortion-specific compensation strategy is proposed to offset the negative effect on IQA modeling caused by different image distortion types. These score offsets are learned from several known distortion types. Furthermore, for an image with an unknown distortion type, a Convolutional Neural Network (CNN) based method is proposed to compute the score offset automatically. Finally, an integrated IQA metric is proposed by combining the aforementioned two ideas. Extensive experiments are performed to verify the proposed IQA metric, which demonstrate that the local linear model is useful in human perception modeling, especially for individual image distortion, and the overall IQA method outperforms several state-of-the-art IQA approaches.
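
    The compensation idea can be illustrated with a toy sketch, assuming the offsets are simply the mean residuals between subjective scores and a base metric per known distortion type; the CNN used for unknown distortions and the local linear model are not reproduced here, and the numbers are made up.

      import numpy as np

      def learn_offsets(base_scores, subjective_scores, distortion_types):
          # Mean residual between subjective scores and the base metric, per distortion type.
          offsets = {}
          for dtype in set(distortion_types):
              idx = [i for i, d in enumerate(distortion_types) if d == dtype]
              offsets[dtype] = float(np.mean([subjective_scores[i] - base_scores[i] for i in idx]))
          return offsets

      def compensated_score(base_score, distortion_type, offsets):
          # Unknown distortion types fall back to the uncompensated score.
          return base_score + offsets.get(distortion_type, 0.0)

      # Hypothetical training data: the base metric underrates JPEG and overrates blur.
      offsets = learn_offsets([0.70, 0.72, 0.80, 0.81],
                              [0.78, 0.80, 0.74, 0.75],
                              ["jpeg", "jpeg", "blur", "blur"])
      print(offsets, compensated_score(0.71, "jpeg", offsets))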

  19. 42 CFR 438.240 - Quality assessment and performance improvement program.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Quality assessment and performance improvement... Performance Improvement Measurement and Improvement Standards § 438.240 Quality assessment and performance improvement program. (a) General rules. (1) The State must require, through its contracts, that each MCO and...

  20. Objective quality assessment for multiexposure multifocus image fusion.

    PubMed

    Hassen, Rania; Wang, Zhou; Salama, Magdy M A

    2015-09-01

    There has been a growing interest in image fusion technologies, but how to objectively evaluate the quality of fused images has not been fully understood. Here, we propose a method for objective quality assessment of multiexposure multifocus image fusion based on the evaluation of three key factors of fused image quality: 1) contrast preservation; 2) sharpness; and 3) structure preservation. Subjective experiments are conducted to create an image fusion database, based on which, performance evaluation shows that the proposed fusion quality index correlates well with subjective scores, and gives a significant improvement over the existing fusion quality measures.
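
    A hypothetical reading of the three-factor structure, not the authors' index: crude proxies for contrast preservation (standard-deviation ratio), sharpness (gradient-energy ratio) and structure preservation (correlation with the mean of the sources) are computed and combined multiplicatively.

      import numpy as np

      def gradient_energy(img):
          gy, gx = np.gradient(img.astype(float))
          return float(np.mean(gx ** 2 + gy ** 2))

      def fusion_quality(source_a, source_b, fused):
          # Contrast preservation: fused standard deviation vs. the better source.
          contrast = min(1.0, fused.std() / (max(source_a.std(), source_b.std()) + 1e-12))
          # Sharpness: gradient energy of the fused image vs. the sharper source.
          best_grad = max(gradient_energy(source_a), gradient_energy(source_b))
          sharpness = min(1.0, gradient_energy(fused) / (best_grad + 1e-12))
          # Structure preservation: correlation with the mean of the two sources.
          mean_src = 0.5 * (source_a + source_b)
          structure = max(0.0, float(np.corrcoef(mean_src.ravel(), fused.ravel())[0, 1]))
          return contrast * sharpness * structure

      a = np.random.rand(32, 32)
      b = np.clip(a + np.random.normal(0.0, 0.1, a.shape), 0.0, 1.0)
      print(round(fusion_quality(a, b, 0.5 * (a + b)), 3))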

  1. Roadway system assessment using bluetooth-based automatic vehicle identification travel time data.

    DOT National Transportation Integrated Search

    2012-12-01

    This monograph is an exposition of several practice-ready methodologies for automatic vehicle identification (AVI) data collection : systems. This includes considerations in the physical setup of the collection system as well as the interpretation of...

  2. A Quality Approach to Writing Assessment.

    ERIC Educational Resources Information Center

    Andrade, Joanne; Ryley, Helen

    1992-01-01

    A Colorado elementary school began its Total Quality Management work about a year ago after several staff members participated in an IBM Leadership Training Program addressing applications of Deming's theories. The school's new writing assessment has increased collegiality and cross-grade collaboration. (MLH)

  3. Validation of semi-automatic segmentation of the left atrium

    NASA Astrophysics Data System (ADS)

    Rettmann, M. E.; Holmes, D. R., III; Camp, J. J.; Packer, D. L.; Robb, R. A.

    2008-03-01

    Catheter ablation therapy has become increasingly popular for the treatment of left atrial fibrillation. The effect of this treatment on left atrial morphology, however, has not yet been completely quantified. Initial studies have indicated a decrease in left atrial size with a concomitant decrease in pulmonary vein diameter. In order to effectively study if catheter based therapies affect left atrial geometry, robust segmentations with minimal user interaction are required. In this work, we validate a method to semi-automatically segment the left atrium from computed-tomography scans. The first step of the technique utilizes seeded region growing to extract the entire blood pool including the four chambers of the heart, the pulmonary veins, aorta, superior vena cava, inferior vena cava, and other surrounding structures. Next, the left atrium and pulmonary veins are separated from the rest of the blood pool using an algorithm that searches for thin connections between user defined points in the volumetric data or on a surface rendering. Finally, pulmonary veins are separated from the left atrium using a three dimensional tracing tool. A single user segmented three datasets three times using both the semi-automatic technique as well as manual tracing. The user interaction time for the semi-automatic technique was approximately forty-five minutes per dataset and the manual tracing required between four and eight hours per dataset depending on the number of slices. A truth model was generated using a simple voting scheme on the repeated manual segmentations. A second user segmented each of the nine datasets using the semi-automatic technique only. Several metrics were computed to assess the agreement between the semi-automatic technique and the truth model including percent differences in left atrial volume, DICE overlap, and mean distance between the boundaries of the segmented left atria. Overall, the semi-automatic approach was demonstrated to be repeatable within

  4. Content-aware automatic cropping for consumer photos

    NASA Astrophysics Data System (ADS)

    Tang, Hao; Tretter, Daniel; Lin, Qian

    2013-03-01

    Consumer photos are typically authored once, but need to be retargeted for reuse in various situations. These include printing a photo on different size paper, changing the size and aspect ratio of an embedded photo to accommodate the dynamic content layout of web pages or documents, adapting a large photo for browsing on small displays such as mobile phone screens, and improving the aesthetic quality of a photo that was badly composed at the capture time. In this paper, we propose a novel, effective, and comprehensive content-aware automatic cropping (hereafter referred to as "autocrop") method for consumer photos to achieve the above purposes. Our autocrop method combines the state-of-the-art context-aware saliency detection algorithm, which aims to infer the likely intent of the photographer, and the "branch-and-bound" efficient subwindow search optimization technique, which seeks to locate the globally optimal cropping rectangle in a fast manner. Unlike most current autocrop methods, which can only crop a photo into an arbitrary rectangle, our autocrop method can automatically crop a photo into either a rectangle of arbitrary dimensions or a rectangle of the desired aspect ratio specified by the user. The aggressiveness of the cropping operation may be either automatically determined by the method or manually indicated by the user with ease. In addition, our autocrop method is extended to support the cropping of a photo into non-rectangular shapes such as polygons of any number of sides. It may also be potentially extended to return multiple cropping suggestions, which will enable the creation of new photos to enrich the original photo collections. Our experimental results show that the proposed autocrop method in this paper can generate high-quality crops for consumer photos of various types.
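
    The context-aware saliency model and the branch-and-bound efficient subwindow search are not reproduced here; the sketch below substitutes a toy saliency map and a brute-force scan over fixed-size, fixed-aspect-ratio windows, using a summed-area table so each candidate crop is scored in constant time.

      import numpy as np

      def best_crop(saliency, aspect_ratio=4 / 3, scale=0.6):
          # Scan every crop of one size and aspect ratio; a summed-area table makes
          # each candidate's total saliency an O(1) lookup.
          h, w = saliency.shape
          crop_h = int(h * scale)
          crop_w = min(w, int(crop_h * aspect_ratio))
          sat = np.zeros((h + 1, w + 1))
          sat[1:, 1:] = saliency.cumsum(axis=0).cumsum(axis=1)
          best, best_box = -1.0, None
          for y in range(h - crop_h + 1):
              for x in range(w - crop_w + 1):
                  total = (sat[y + crop_h, x + crop_w] - sat[y, x + crop_w]
                           - sat[y + crop_h, x] + sat[y, x])
                  if total > best:
                      best, best_box = total, (x, y, crop_w, crop_h)
          return best_box

      sal = np.zeros((120, 160))
      sal[30:70, 90:140] = 1.0   # pretend the detected subject sits right of centre
      print(best_crop(sal))      # (x, y, width, height) of the chosen crop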

  5. Investigating the Relationship between Stable Personality Characteristics and Automatic Imitation

    PubMed Central

    Butler, Emily E.; Ward, Robert; Ramsey, Richard

    2015-01-01

    Automatic imitation is a cornerstone of nonverbal communication that fosters rapport between interaction partners. Recent research has suggested that stable dimensions of personality are antecedents to automatic imitation, but the empirical evidence linking imitation with personality traits is restricted to a few studies with modest sample sizes. Additionally, atypical imitation has been documented in autism spectrum disorders and schizophrenia, but the mechanisms underpinning these behavioural profiles remain unclear. Using a larger sample than prior studies (N=243), the current study tested whether performance on a computer-based automatic imitation task could be predicted by personality traits associated with social behaviour (extraversion and agreeableness) and with disorders of social cognition (autistic-like and schizotypal traits). Further personality traits (narcissism and empathy) were assessed in a subsample of participants (N=57). Multiple regression analyses showed that personality measures did not predict automatic imitation. In addition, using a similar analytical approach to prior studies, no differences in imitation performance emerged when only the highest and lowest 20 participants on each trait variable were compared. These data weaken support for the view that stable personality traits are antecedents to automatic imitation and that neural mechanisms thought to support automatic imitation, such as the mirror neuron system, are dysfunctional in autism spectrum disorders or schizophrenia. In sum, the impact that personality variables have on automatic imitation is less universal than initial reports suggest. PMID:26079137

  6. Investigating the Relationship between Stable Personality Characteristics and Automatic Imitation.

    PubMed

    Butler, Emily E; Ward, Robert; Ramsey, Richard

    2015-01-01

    Automatic imitation is a cornerstone of nonverbal communication that fosters rapport between interaction partners. Recent research has suggested that stable dimensions of personality are antecedents to automatic imitation, but the empirical evidence linking imitation with personality traits is restricted to a few studies with modest sample sizes. Additionally, atypical imitation has been documented in autism spectrum disorders and schizophrenia, but the mechanisms underpinning these behavioural profiles remain unclear. Using a larger sample than prior studies (N=243), the current study tested whether performance on a computer-based automatic imitation task could be predicted by personality traits associated with social behaviour (extraversion and agreeableness) and with disorders of social cognition (autistic-like and schizotypal traits). Further personality traits (narcissism and empathy) were assessed in a subsample of participants (N=57). Multiple regression analyses showed that personality measures did not predict automatic imitation. In addition, using a similar analytical approach to prior studies, no differences in imitation performance emerged when only the highest and lowest 20 participants on each trait variable were compared. These data weaken support for the view that stable personality traits are antecedents to automatic imitation and that neural mechanisms thought to support automatic imitation, such as the mirror neuron system, are dysfunctional in autism spectrum disorders or schizophrenia. In sum, the impact that personality variables have on automatic imitation is less universal than initial reports suggest.

  7. Evaluation of Prototype Automatic Truck Rollover Warning Systems

    DOT National Transportation Integrated Search

    1998-01-01

    Three operating prototype Automatic Truck Rollover Warning Systems (ATRWS) installed on the Capital Beltway in Maryland and Virginia were evaluated for 3 years. The general objectives of this evaluation were to assess how the ATRWS performed and to d...

  8. Assessing the Quality of PhD Dissertations. A Survey of External Committee Members

    ERIC Educational Resources Information Center

    Kyvik, Svein; Thune, Taran

    2015-01-01

    This article reports on a study of the quality assessment of doctoral dissertations, and asks whether examiner characteristics influence assessment of research quality in PhD dissertations. Utilising a multi-dimensional concept of quality of PhD dissertations, we look at differences in assessment of research quality, and particularly test whether…

  9. Synthesized view comparison method for no-reference 3D image quality assessment

    NASA Astrophysics Data System (ADS)

    Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun

    2018-04-01

    We develop a no-reference image quality assessment metric to evaluate the quality of synthesized view rendered from the Multi-view Video plus Depth (MVD) format. Our metric is named Synthesized View Comparison (SVC), which is designed for real-time quality monitoring at the receiver side in a 3D-TV system. The metric utilizes the virtual views in the middle which are warped from left and right views by Depth-image-based rendering algorithm (DIBR), and compares the difference between the virtual views rendered from different cameras by Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for the synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.

  10. Automatic Screening and Grading of Age-Related Macular Degeneration from Texture Analysis of Fundus Images

    PubMed Central

    Phan, Thanh Vân; Seoud, Lama; Chakor, Hadi; Cheriet, Farida

    2016-01-01

    Age-related macular degeneration (AMD) is a disease which causes visual deficiency and irreversible blindness to the elderly. In this paper, an automatic classification method for AMD is proposed to perform robust and reproducible assessments in a telemedicine context. First, a study was carried out to highlight the most relevant features for AMD characterization based on texture, color, and visual context in fundus images. A support vector machine and a random forest were used to classify images according to the different AMD stages following the AREDS protocol and to evaluate the features' relevance. Experiments were conducted on a database of 279 fundus images coming from a telemedicine platform. The results demonstrate that local binary patterns in multiresolution are the most relevant for AMD classification, regardless of the classifier used. Depending on the classification task, our method achieves promising performances with areas under the ROC curve between 0.739 and 0.874 for screening and between 0.469 and 0.685 for grading. Moreover, the proposed automatic AMD classification system is robust with respect to image quality. PMID:27190636
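
    A minimal sketch of the reported feature/classifier pairing, assuming scikit-image and scikit-learn are available and using synthetic textures in place of the AREDS-graded fundus images: multiresolution uniform LBP histograms feed a random forest. The radii, tree count and data below are illustrative assumptions, not the study's protocol.

      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.ensemble import RandomForestClassifier

      def lbp_histogram(gray, radii=(1, 2, 3), points=8):
          # Concatenated uniform-LBP histograms at several radii (multiresolution texture).
          img = (gray * 255).astype(np.uint8)
          feats = []
          for r in radii:
              lbp = local_binary_pattern(img, points, r, method="uniform")
              hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
              feats.append(hist)
          return np.concatenate(feats)

      # Synthetic stand-ins for fundus patches: smooth "normal" vs. speckled "drusen-like".
      rng = np.random.default_rng(0)
      images = [rng.random((64, 64)) * 0.2 + 0.4 for _ in range(20)]
      images += [rng.choice([0.1, 0.9], size=(64, 64)) for _ in range(20)]
      labels = [0] * 20 + [1] * 20

      X = np.array([lbp_histogram(img) for img in images])
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
      print(clf.score(X, labels))   # training accuracy only; real evaluation needs held-out data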

  11. Automatic map generalisation from research to production

    NASA Astrophysics Data System (ADS)

    Nyberg, Rose; Johansson, Mikael; Zhang, Yang

    2018-05-01

    The manual work of map generalisation is known to be a complex and time-consuming task. With the development of technology and society, the demand for more flexible map products of higher quality is growing. The Swedish mapping, cadastral and land registration authority Lantmäteriet has manual production lines for databases at five different scales: 1:10 000 (SE10), 1:50 000 (SE50), 1:100 000 (SE100), 1:250 000 (SE250) and 1:1 million (SE1M). To streamline this work, Lantmäteriet started a project to automatically generalise geographic information. The planned timespan for the project is 2015-2022. The project background and the methods for automatic generalisation are described below, followed by a description of results and conclusions.

  12. Automatic glaucoma diagnosis through medical imaging informatics.

    PubMed

    Liu, Jiang; Zhang, Zhuo; Wong, Damon Wing Kee; Xu, Yanwu; Yin, Fengshou; Cheng, Jun; Tan, Ngan Meng; Kwoh, Chee Keong; Xu, Dong; Tham, Yih Chung; Aung, Tin; Wong, Tien Yin

    2013-01-01

    Computer-aided diagnosis for screening utilizes computer-based analytical methodologies to process patient information. Glaucoma is the leading irreversible cause of blindness. Due to the lack of an effective and standard screening practice, more than 50% of cases are undiagnosed, which prevents early treatment of the disease. The aim was to design an automatic glaucoma diagnosis architecture, automatic glaucoma diagnosis through medical imaging informatics (AGLAIA-MII), that combines patient personal data, medical retinal fundus images, and patient genome information for screening. 2258 cases from a population study were used to evaluate the screening software. These cases were attributed with patient personal data, retinal images and quality-controlled genome data. Utilizing a multiple kernel learning-based classifier, AGLAIA-MII combined patient personal data, major image features, and important genome single nucleotide polymorphism (SNP) features. Receiver operating characteristic curves were plotted to compare AGLAIA-MII's performance with classifiers using patient personal data, images, and genome SNPs separately. AGLAIA-MII achieved an area under the curve of 0.866, better than the 0.551, 0.722 and 0.810 obtained by the individual personal data, image and genome information components, respectively. AGLAIA-MII also demonstrated a substantial improvement over the current glaucoma screening approach based on intraocular pressure. AGLAIA-MII demonstrates for the first time the capability of integrating patients' personal data, medical retinal images and genome information for automatic glaucoma diagnosis and screening in a large dataset from a population study. It paves the way for a holistic approach to automatic objective glaucoma diagnosis and screening.

  13. Educational Quality, Outcomes Assessment, and Policy Change: The Virginia Example

    ERIC Educational Resources Information Center

    Culver, Steve

    2010-01-01

    The higher education system in the Commonwealth of Virginia in the United States provides a case model for how discussions regarding educational quality and assessment of that quality have affected institutions' policy decisions and implementation. Using Levin's (1998) policy analysis framework, this essay explores how assessment of student…

  14. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: a multi-expert study

    PubMed Central

    Deeley, M A; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, E; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Yei, F; Koyama, T; Ding, G X; Dawant, B M

    2011-01-01

    The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes, of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation (STAPLE) algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8–0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4–0.5. Similarly low DSC have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (−4.3, +5.4) mm for the automatic system to (−3.9, +7.5) mm for the experts considered as a group. Over all the structures in a rank of true positive rates at a 2 mm threshold from the simulated ground truth, the automatic system ranked second of the nine raters. This work underscores the need for large scale studies utilizing statistically robust numbers of patients and experts in evaluating quality of automatic algorithms. PMID:21725140
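
    Two of the geometric measures used here, structure volume and the Dice similarity coefficient, are standard and can be computed directly from binary masks; the sketch below assumes voxel spacing is supplied by the caller and omits the boundary-distance measure and the STAPLE ground-truth estimation.

      import numpy as np

      def dice_coefficient(mask_a, mask_b):
          # Dice similarity coefficient between two binary segmentations.
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-12)

      def volume_ml(mask, voxel_size_mm=(1.0, 1.0, 1.0)):
          # Structure volume in millilitres from a binary mask and the voxel spacing.
          return mask.astype(bool).sum() * float(np.prod(voxel_size_mm)) / 1000.0

      expert = np.zeros((40, 40, 40), dtype=bool)
      expert[10:30, 10:30, 10:30] = True
      automatic = np.roll(expert, 2, axis=0)   # automatic result shifted by two voxels
      print(round(dice_coefficient(expert, automatic), 3), volume_ml(expert))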

  15. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: a multi-expert study

    NASA Astrophysics Data System (ADS)

    Deeley, M. A.; Chen, A.; Datteri, R.; Noble, J. H.; Cmelak, A. J.; Donnelly, E. F.; Malcolm, A. W.; Moretti, L.; Jaboin, J.; Niermann, K.; Yang, Eddy S.; Yu, David S.; Yei, F.; Koyama, T.; Ding, G. X.; Dawant, B. M.

    2011-07-01

    The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes, of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice similarity coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8-0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4-0.5. Similarly low DSCs have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (-4.3, +5.4) mm for the automatic system to (-3.9, +7.5) mm for the experts considered as a group. Over all the structures in a rank of true positive rates at a 2 mm threshold from the simulated ground truth, the automatic system ranked second of the nine raters. This work underscores the need for large scale studies utilizing statistically robust numbers of patients and experts in evaluating quality of automatic algorithms.

  16. Indoor Air Quality Building Education and Assessment Model Forms

    EPA Pesticide Factsheets

    The Indoor Air Quality Building Education and Assessment Model (I-BEAM) is a guidance tool designed for use by building professionals and others interested in indoor air quality in commercial buildings.

  17. An Evaluation of Automatic Control System Concepts for General Aviation Airplanes

    NASA Technical Reports Server (NTRS)

    Stewart, E. C.

    1990-01-01

    A piloted simulation study of automatic longitudinal control systems for general aviation airplanes has been conducted. These automatic control systems were designed to make the simulated airplane easy to fly for a beginning or infrequent pilot. Different control systems are presented and their characteristics are documented. In a conventional airplane control system each cockpit controller commands combinations of both the airspeed and the vertical speed. The best system in the present study decoupled the airspeed and vertical speed responses to cockpit controller inputs. An important feature of the automatic system was that neither changing flap position nor maneuvering in steeply banked turns affected either the airspeed or the vertical speed. All the pilots who flew the control system simulation were favorably impressed with the very low workload and the excellent handling qualities of the simulated airplane.

  18. Do automatic reactions elicited by thoughts of romantic partner, mother, and self relate to adult romantic attachment?

    PubMed

    Zayas, Vivian; Shoda, Yuichi

    2005-08-01

    Three studies tested the expectation that automatic reactions elicited by the mental representation of one's current romantic partner, mother, and self relate to adult romantic attachment. Adult romantic attachment was assessed using multiple measures, and individual differences in automatic reactions were assessed by the Implicit Association Test (IAT). Studies 1 and 2 showed that automatic reactions elicited by thoughts of current romantic partner, but not by thoughts of self, were related to adult romantic attachment assessed at a specific (i.e., within one's current romantic relationship) and general level (i.e., across all romantic relationships). The pattern of results was stronger among individuals identified as attachment-schematic. Studies 2 and 3 showed that automatic reactions elicited by thoughts of one's mother were related to adult romantic attachment assessed at a general level. In all three studies, results did not differ depending on how adult romantic attachment was conceptualized (four styles vs. two dimensions).

  19. Assessing ECG signal quality indices to discriminate ECGs with artefacts from pathologically different arrhythmic ECGs.

    PubMed

    Daluwatte, C; Johannesen, L; Galeotti, L; Vicente, J; Strauss, D G; Scully, C G

    2016-08-01

    False and non-actionable alarms in critical care can be reduced by developing algorithms which assess the trueness of an arrhythmia alarm from a bedside monitor. Computational approaches that automatically identify artefacts in ECG signals are an important branch of physiological signal processing which tries to address this issue. Signal quality indices (SQIs) derived considering differences between artefacts which occur in ECG signals and normal QRS morphology have the potential to discriminate pathologically different arrhythmic ECG segments as artefacts. Using ECG signals from the PhysioNet/Computing in Cardiology Challenge 2015 training set, we studied previously reported ECG SQIs in the scientific literature to differentiate ECG segments with artefacts from arrhythmic ECG segments. We found that the ability of SQIs to discriminate between ECG artefacts and arrhythmic ECG varies based on arrhythmia type since the pathology of each arrhythmic ECG waveform is different. Therefore, to reduce the risk of SQIs classifying arrhythmic events as noise it is important to validate and test SQIs with databases that include arrhythmias. Arrhythmia specific SQIs may also minimize the risk of misclassifying arrhythmic events as noise.
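
    The specific SQIs studied are not listed in this record; the sketch below implements two widely reported examples, a kurtosis-based index and an in-band spectral power ratio, on a synthetic spiky waveform. The band limits and the toy signal are assumptions for illustration only.

      import numpy as np
      from scipy.signal import welch
      from scipy.stats import kurtosis

      def kurtosis_sqi(ecg):
          # Clean ECG is strongly peaked (high kurtosis); broadband noise is closer
          # to Gaussian (Fisher kurtosis near zero).
          return float(kurtosis(ecg))

      def inband_power_sqi(ecg, fs, band=(5.0, 15.0)):
          # Fraction of spectral power inside an assumed QRS-dominant band.
          freqs, psd = welch(ecg, fs=fs, nperseg=min(len(ecg), 4 * int(fs)))
          inband = psd[(freqs >= band[0]) & (freqs <= band[1])].sum()
          return float(inband / (psd.sum() + 1e-12))

      fs = 250
      t = np.arange(0.0, 10.0, 1.0 / fs)
      spiky = np.sin(2 * np.pi * 1.2 * t) ** 64            # crude QRS-like spike train
      noisy = spiky + np.random.normal(0.0, 0.5, t.size)   # artefact-like contamination
      for name, sig in [("clean", spiky), ("noisy", noisy)]:
          print(name, round(kurtosis_sqi(sig), 2), round(inband_power_sqi(sig, fs), 2))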

  20. Endorsing good quality assurance practices in molecular pathology: risks and recommendations for diagnostic laboratories and external quality assessment providers.

    PubMed

    Tembuyser, Lien; Dequeker, Elisabeth M C

    2016-01-01

    Quality assurance is an indispensable element in a molecular diagnostic laboratory. The ultimate goal is to warrant patient safety. Several risks that can compromise high quality procedures are at stake, from sample collection to the test performed by the laboratory, the reporting of test results to clinicians, and the organization of effective external quality assessment schemes. Quality assurance should therefore be safeguarded at each level and should imply a holistic multidisciplinary approach. This review aims to provide an overview of good quality assurance practices and discusses certain risks and recommendations to promote and improve quality assurance for both diagnostic laboratories and for external quality assessment providers. The number of molecular targets is continuously rising, and new technologies are evolving. As this poses challenges for clinical implementation and increases the demand for external quality assessment, the formation of an international association for improving quality assurance in molecular pathology is called for.

  1. Quality assessment of groundwater from the south-eastern Arabian Peninsula.

    PubMed

    Zhang, H W; Sun, Y Q; Li, Y; Zhou, X D; Tang, X Z; Yi, P; Murad, A; Hussein, S; Alshamsi, D; Aldahan, A; Yu, Z B; Chen, X G; Mugwaneza, V D P

    2017-08-01

    Assessment of groundwater quality plays a significant role in the utilization of scarce water resources globally, and especially in arid regions. Increasing abstraction, together with man-made contamination and seawater intrusion, has strongly affected groundwater quality in the Arabian Peninsula, as exemplified by the investigation presented here from the United Arab Emirates, where groundwater is seldom reviewed and assessed. With the aim of assessing current groundwater quality, we present a comparison of chemical data linked to aquifer types. The results reveal that most of the investigated groundwater is not suitable for drinking, household, or agricultural purposes according to the WHO permissible limits. Aquifer composition and climate exert vital control on water quality, with the carbonate aquifers containing the least potable water compared with the ophiolites and Quaternary clastics. Seawater intrusion along coastal regions has deteriorated the water quality, and the phenomenon may become more intense with a warming climate and rising sea level.

  2. Attention to Automatic Movements in Parkinson's Disease: Modified Automatic Mode in the Striatum

    PubMed Central

    Wu, Tao; Liu, Jun; Zhang, Hejia; Hallett, Mark; Zheng, Zheng; Chan, Piu

    2015-01-01

    We investigated neural correlates when attending to a movement that could be made automatically in healthy subjects and Parkinson's disease (PD) patients. Subjects practiced a visuomotor association task until they could perform it automatically, and then directed their attention back to the automated task. Functional MRI was obtained during the early-learning, automatic stage, and when re-attending. In controls, attention to automatic movement induced more activation in the dorsolateral prefrontal cortex (DLPFC), anterior cingulate cortex, and rostral supplementary motor area. The motor cortex received more influence from the cortical motor association regions. In contrast, the pattern of the activity and connectivity of the striatum remained at the level of the automatic stage. In PD patients, attention enhanced activity in the DLPFC, premotor cortex, and cerebellum, but the connectivity from the putamen to the motor cortex decreased. Our findings demonstrate that, in controls, when a movement achieves the automatic stage, attention can influence the attentional networks and cortical motor association areas, but has no apparent effect on the striatum. In PD patients, attention induces a shift from the automatic mode back to the controlled pattern within the striatum. The shifting between controlled and automatic behaviors relies in part on striatal function. PMID:24925772

  3. Quality Assessment of Process Measures in Antimicrobial Stewardship: Concordance of Valacyclovir Indication and Automatic Prospective Approval in Computerized Provider Order Entry

    PubMed Central

    Lee, Tiffany; McCoy, Christopher; Mahoney, Monica V

    2017-01-01

    Background: The Infectious Diseases Society of America (IDSA) and the Society for Healthcare Epidemiology of America (SHEA) recommend computerized decision support at the time of prescribing as an antimicrobial stewardship (AST) tool. Providing antimicrobial indications during prescribing can optimize infection-specific therapy through appropriate antimicrobial selection, dosing, and frequency. The Leapfrog Group identifies this as a quality measure for its report card system. At Beth Israel Deaconess Medical Center (BIDMC), indication-based dosing has been incorporated in the computerized provider order entry (CPOE) system since 2006. At BIDMC, valacyclovir is only approved for the treatment of varicella zoster (VZV) infection or prophylaxis of solid organ transplant (SOT) patients at low risk for cytomegalovirus. These indications bypass the need for AST approval. Accuracy validation of the selected indications has not been formally performed. Methods: A retrospective chart review was performed in patients prescribed valacyclovir during an 8-month period in 2016. Electronic medical records, laboratory reports, and pharmacy records were reviewed to identify the suspected/confirmed infection. The primary outcome was the concordance rate of the selected CPOE valacyclovir indication with the suspected/confirmed infection at the time of ordering. The secondary outcome was the proportion of valacyclovir use per institutional protocol. Results: Overall, 117 patients were included, with a median age of 57.9 years; 51 (43.6%) were male, and 4 (3.4%) were located in an intensive care unit. Fifty-nine orders (50.4%) selected VZV as the indication, followed by 21 orders (17.9%) for SOT prophylaxis. Of orders with any CPOE indication, only 59/101 (58.4%) were concordant with the suspected/confirmed infection. Of the valacyclovir orders with a VZV indication, 37 (62.7%) were concordant. Of the orders with SOT prophylaxis indications, 5 (23.8%) were concordant

  4. New Hampshire Quality Rating System: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of New Hampshire's Quality Rating System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4)…

  5. [Wearable Automatic External Defibrillators].

    PubMed

    Luo, Huajie; Luo, Zhangyuan; Jin, Xun; Zhang, Leilei; Wang, Changjin; Zhang, Wenzan; Tu, Quan

    2015-11-01

    Defibrillation is the most effective method of treating ventricular fibrillation (VF). This paper introduces a wearable automatic external defibrillator based on an embedded system, comprising ECG measurement, bioelectrical impedance measurement, and a discharge defibrillation module; the device can automatically identify the VF signal and deliver a biphasic exponential defibrillation waveform. As verified by animal tests, the device can acquire the ECG and perform automatic identification. After identifying a ventricular fibrillation signal, it automatically delivers a defibrillation discharge to abort the ventricular fibrillation and achieve electrical cardioversion.

  6. Assessment of the Quality Management Models in Higher Education

    ERIC Educational Resources Information Center

    Basar, Gulsun; Altinay, Zehra; Dagli, Gokmen; Altinay, Fahriye

    2016-01-01

    This study involves the assessment of the quality management models in Higher Education by explaining the importance of quality in higher education and by examining the higher education quality assurance system practices in other countries. The qualitative study was carried out with the members of the Higher Education Planning, Evaluation,…

  7. The Emergence of Quality Assessment in Brazilian Basic Education

    ERIC Educational Resources Information Center

    Kauko, Jaakko; Centeno, Vera Gorodski; Candido, Helena; Shiroma, Eneida; Klutas, Anni

    2016-01-01

    The focus in this article is on Brazilian education policy, specifically quality assurance and evaluation. The starting point is that quality, measured by means of large-scale assessments, is one of the key discursive justifications for educational change. The article addresses the questions of how quality evaluation became a significant feature…

  8. Comprehensive automatic assessment of retinal vascular abnormalities for computer-assisted retinopathy grading.

    PubMed

    Joshi, Vinayak; Agurto, Carla; VanNess, Richard; Nemeth, Sheila; Soliz, Peter; Barriga, Simon

    2014-01-01

    One of the most important signs of systemic disease that presents on the retina is vascular abnormality, as in hypertensive retinopathy. Manual analysis of fundus images by human readers is qualitative and lacks accuracy, consistency, and repeatability. Present semi-automatic methods for vascular evaluation are reported to increase accuracy and reduce reader variability, but they require extensive reader interaction, limiting software-aided efficiency. Automation thus holds a twofold promise: first, decreasing variability while increasing accuracy, and second, increasing efficiency. In this paper we propose fully automated software as a second-reader system for comprehensive assessment of the retinal vasculature, which aids readers in the quantitative characterization of vessel abnormalities in fundus images. The system provides the reader with objective measures of vascular morphology, such as tortuosity and branching angles, and highlights areas with abnormalities such as artery-venous nicking, copper and silver wiring, and retinal emboli, so that the reader can make a final screening decision. To test the efficacy of our system, we evaluated the change in performance of a newly certified retinal reader when grading a set of 40 color fundus images with and without the assistance of the software. The results demonstrated an improvement in the reader's performance with software assistance, in terms of accuracy of detection of vessel abnormalities, determination of retinopathy, and reading time. This system enables the reader to make a computer-assisted vasculature assessment with high accuracy and consistency, at a reduced reading time.
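
    The abstract lists tortuosity among the objective measures of vascular morphology but gives no formula. Below is a minimal sketch of the common arc-to-chord definition of tortuosity, computed over a vessel centerline polyline; the specific metric used by the authors may differ, and the example coordinates are invented.

    ```python
    import numpy as np

    def arc_chord_tortuosity(points):
        """Tortuosity of a vessel centerline as arc length / chord length.

        points: (N, 2) array of centerline coordinates in image space.
        Returns 1.0 for a perfectly straight segment, larger for curved vessels.
        """
        pts = np.asarray(points, dtype=float)
        arc = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))  # path length
        chord = np.linalg.norm(pts[-1] - pts[0])                    # endpoint distance
        return arc / chord

    # Example: a gently curved vessel segment
    centerline = np.array([[0, 0], [1, 0.2], [2, 0.5], [3, 0.2], [4, 0]])
    print(round(arc_chord_tortuosity(centerline), 3))  # ~1.032
    ```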

  9. Assessment of surface water quality using a growing hierarchical self-organizing map: a case study of the Songhua River Basin, northeastern China, from 2011 to 2015.

    PubMed

    Jiang, Mingcen; Wang, Yeyao; Yang, Qi; Meng, Fansheng; Yao, Zhipeng; Cheng, Peixuan

    2018-03-30

    The analysis of large volumes of multidimensional surface water monitoring data to extract potential information plays an important role in water quality management. In this study, a growing hierarchical self-organizing map (GHSOM) was applied to a water quality assessment of the Songhua River Basin in China using 22 water quality parameters monitored monthly at 13 monitoring sites from 2011 to 2015 (14,782 observations). The spatial and temporal features and the correlations between water quality parameters were explored, and the major contaminants were identified. The results showed that the downstream reach of the Second Songhua River had the worst water quality in the Songhua River Basin, while the upstream and midstream reaches of the Nenjiang River and the Second Songhua River had the best. The major contaminants of the Songhua River were chemical oxygen demand (COD), ammonia nitrogen (NH3-N), total phosphorus (TP), and fecal coliform (FC). Downstream water pollution in the Songhua River has gradually eased over the years; however, FC and biochemical oxygen demand (BOD5) increased over time. The component planes showed that three sets of parameters had positive correlations with each other. GHSOM was found to have advantages over self-organizing maps and hierarchical clustering analysis as follows: (1) automatically generating the necessary neurons, (2) intuitively exhibiting the hierarchical inheritance relationships in the original data, and (3) depicting the boundaries of the classification much more clearly. Therefore, the application of GHSOM in water quality assessments, especially with large amounts of monitoring data, enables the extraction of more information and provides strong support for water quality management.
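
    GHSOM extends the basic self-organizing map by growing and layering the neuron grid as needed. As a hedged illustration of the underlying SOM idea only (not the growing hierarchical variant used in the paper), the following NumPy sketch trains a small fixed-size map on standardized water-quality samples; the grid size, learning schedule, and toy data are all illustrative assumptions.

    ```python
    import numpy as np

    def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
        """Minimal self-organizing map trained on standardized water-quality samples.

        data: (n_samples, n_params) array, e.g. monthly COD, NH3-N, TP, FC values.
        Returns the trained weight grid of shape (grid[0], grid[1], n_params).
        """
        rng = np.random.default_rng(seed)
        n, d = data.shape
        w = rng.normal(size=(grid[0], grid[1], d))
        gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
        for t in range(epochs):
            lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
            sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighbourhood
            for x in data[rng.permutation(n)]:
                dist = np.linalg.norm(w - x, axis=2)                     # distance to every neuron
                by, bx = np.unravel_index(np.argmin(dist), dist.shape)   # best matching unit
                h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
                w += lr * h[..., None] * (x - w)    # pull neighbourhood toward the sample
        return w

    # Toy usage: 60 standardized samples of 4 hypothetical parameters
    samples = np.random.default_rng(1).normal(size=(60, 4))
    print(train_som(samples).shape)  # (4, 4, 4)
    ```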

  10. Cloud-Based Smart Health Monitoring System for Automatic Cardiovascular and Fall Risk Assessment in Hypertensive Patients.

    PubMed

    Melillo, P; Orrico, A; Scala, P; Crispino, F; Pecchia, L

    2015-10-01

    The aim of this paper is to describe the design and the preliminary validation of a platform developed to collect and automatically analyze biomedical signals for risk assessment of vascular events and falls in hypertensive patients. This m-health platform, based on cloud computing, was designed to be flexible, extensible, and transparent, and to provide proactive remote monitoring via data-mining functionalities. A retrospective study was conducted to train and test the platform. The developed system was able to predict a future vascular event within the next 12 months with an accuracy rate of 84% and to identify fallers with an accuracy rate of 72%. In an ongoing prospective trial, almost all of the recruited patients accepted the system favorably, with a limited rate of non-adherence causing data losses (<20%). The developed platform supported clinical decision-making by processing tele-monitored data and providing quick and accurate risk assessment of vascular events and falls.

  11. Assessment and the Quality of Educational Programmes: What Constitutes Evidence?

    ERIC Educational Resources Information Center

    Shay, Sueellen; Jawitz, Jeff

    2005-01-01

    In a climate of growing accountability for Higher Education, there is an increased demand on assessment to play an evaluative role. National, professional and institutional quality assurance systems expect that the assessment of student performance can be used to evaluate the quality of teachers, learners, programmes and even institutions for the…

  12. Analysis and Comparison of Some Automatic Vehicle Monitoring Systems

    DOT National Transportation Integrated Search

    1973-07-01

    In 1970 UMTA solicited proposals and selected four companies to develop systems to demonstrate the feasibility of different automatic vehicle monitoring techniques. The demonstrations culminated in experiments in Philadelphia to assess the performanc...

  13. A Development of Automatic Audit System for Written Informed Consent using Machine Learning.

    PubMed

    Yamada, Hitomi; Takemura, Tadamasa; Asai, Takahiro; Okamoto, Kazuya; Kuroda, Tomohiro; Kuwata, Shigeki

    2015-01-01

    In Japan, most university and advanced hospitals have implemented both electronic order entry systems and electronic charting. In addition, all medical records are subject to inspector audit for quality assurance. The record of informed consent (IC) is very important, as it provides evidence of consent from the patient or the patient's family and the health care provider. Therefore, we developed an automatic audit system for a hospital information system (HIS) that is able to evaluate IC automatically using machine learning.

  14. A State-of-the-Art Assessment of Automatic Name Placement.

    DTIC Science & Technology

    1986-08-01

    develop an automatic name placement system. Balodis, M., "Positioning of typography on maps," Proc. ACSM Fall Convention, Salt Lake City, Utah, Sept. 1983, pp. 28-44. This article deals with the selection of typography for maps. It describes psycho-visual experiments with groups of individuals to... Polytechnic Institute, Troy, NY 12181, May 1984. (Also available as Tech. Rept. IPL-TR-063.)

  15. Ohio Step Up to Quality: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Ohio's Step Up to Quality prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family…

  16. Parasitology: United Kingdom National Quality Assessment Scheme.

    PubMed Central

    Hawthorne, M.; Chiodini, P. L.; Snell, J. J.; Moody, A. H.; Ramsay, A.

    1992-01-01

    AIMS: To assess the results from parasitology laboratories taking part in a quality assessment scheme between 1986 and 1991; and to compare performance with repeat specimens. METHODS: Quality assessment of blood parasitology, including tissue parasites (n = 444; 358 UK, 86 overseas), and faecal parasitology, including extra-intestinal parasites (n = 205; 141 UK, 64 overseas), was performed. RESULTS: Overall, the standard of performance was poor. A questionnaire distributed to participants showed that a wide range of methods was used, some of which were considered inadequate to achieve reliable results. Teaching material was distributed to participants from time to time in an attempt to improve standards. CONCLUSIONS: Since the closure of the IMLS fellowship course in 1972, fewer opportunities for specialised training in parasitology are available: more training is needed. Poor performance in the detection of malarial parasites is mainly attributable to incorrect speciation, misidentification, and lack of equipment such as an eyepiece graticule. PMID:1452791

  17. Effect of egg freshness on their automatic orientation.

    PubMed

    Jiang, Song; Zhu, Ticao; Jia, Danfeng; Yao, Jun; Jiang, Yiyi

    2018-05-01

    High-quality eggs in unified packaging are desired by egg production enterprises. Automatic orientation apparatus is frequently used to orient eggs uniformly to the pointed-end-down position for packaging. However, such apparatus may not work properly if the eggs are stored under improper conditions or for excessive storage times. To study the effect of egg freshness on the efficiency of the automatic orientation process, the relationship between egg freshness and orientation motion was investigated under different storage conditions. The results showed that as storage time increased, the centroid position and the pointed-end-down turnover ratio decreased; other parameters, such as the eggs' obliquity at the stationary state, horizontal deflection angle, speed, acceleration of axial motion, side-slip angle, and rolling distance, increased. However, the effect of storage time on the guiding distance of the guide rod was not apparent. In addition, the higher the storage temperature, the greater the changes in the final orientation states of eggs on the conveyor line. If the eggs are to be oriented uniformly, they should be stored for less than 25, 16, 10 and 7 days at 10 °C, 18 °C, 26 °C and 34 °C, respectively, under a relative humidity of 75%. The experimental results presented in this paper are very useful for quality control and quality assurance in egg production enterprises. © 2017 Society of Chemical Industry.

  18. Near Real-Time Automatic Data Quality Controls for the AERONET Version 3 Database: An Introduction to the New Level 1.5V Aerosol Optical Depth Data Product

    NASA Astrophysics Data System (ADS)

    Giles, D. M.; Holben, B. N.; Smirnov, A.; Eck, T. F.; Slutsker, I.; Sorokin, M. G.; Espenak, F.; Schafer, J.; Sinyuk, A.

    2015-12-01

    The Aerosol Robotic Network (AERONET) has provided a database of aerosol optical depth (AOD) measured by surface-based Sun/sky radiometers for over 20 years. AERONET provides unscreened (Level 1.0) and automatically cloud-cleared (Level 1.5) AOD in near real-time (NRT), while manually inspected, quality-assured (Level 2.0) AOD are available after instrument field deployment (Smirnov et al., 2000). The growing need for NRT quality-controlled aerosol data has become increasingly important. Applications of AERONET NRT data include satellite evaluation (e.g., MODIS, VIIRS, MISR, OMI), data synergism (e.g., MPLNET), verification of aerosol forecast models and reanalyses (e.g., GOCART, ICAP, NAAPS, MERRA), input to meteorological models (e.g., NCEP, ECMWF), and field campaign support (e.g., KORUS-AQ, ORACLES). In response to user needs for quality-controlled NRT data sets, the new Version 3 (V3) Level 1.5V product was developed with similar quality controls as those applied by hand to the Version 2 (V2) Level 2.0 data set. The AERONET cloud-screened (Level 1.5) NRT AOD database can be significantly impacted by data anomalies. The most significant data anomalies include AOD diurnal dependence due to contamination or obstruction of the sensor head windows, anomalous AOD spectral dependence due to problems with filter degradation, instrument gains, or non-linear changes in calibration, and abnormal changes in temperature-sensitive wavelengths (e.g., 1020 nm) in response to anomalous sensor head temperatures. Other less common AOD anomalies result from loose filters, uncorrected clock shifts, connection and electronic issues, and various solar eclipse episodes. Automatic quality control algorithms are applied to the new V3 Level 1.5 database to remove NRT AOD anomalies and produce the new AERONET V3 Level 1.5V AOD product. Results of the quality control algorithms are presented and the V3 Level 1.5V AOD database is compared to the V2 Level 2.0 AOD database.

  19. Validation of an automatically generated screening score for frailty: the care assessment need (CAN) score.

    PubMed

    Ruiz, Jorge G; Priyadarshni, Shivani; Rahaman, Zubair; Cabrera, Kimberly; Dang, Stuti; Valencia, Willy M; Mintzer, Michael J

    2018-05-04

    Frailty is a state of vulnerability to stressors that is prevalent in older adults and is associated with higher morbidity, mortality and healthcare utilization. Multiple instruments are used to measure frailty; most are time-consuming. The Care Assessment Need (CAN) score is automatically generated from electronic health record data using a statistical model. The methodology for calculation of the CAN score is consistent with the deficit accumulation model of frailty. At the 95th percentile, the CAN score is a predictor of hospitalization and mortality in Veteran populations. The purpose of this study was to validate the CAN score as a screening tool for frailty in primary care. This cross-sectional validation study compared the CAN score with a 40-item Frailty Index reference standard based on a comprehensive geriatric assessment. We included community-dwelling male patients over age 65 from an outpatient geriatric medicine clinic. We calculated the sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy of the CAN score. 184 patients over age 65 were included in the study: 97.3% male, 64.2% White, 80.9% non-Hispanic. The CGA-based Frailty Index classified 14.1% as robust, 53.3% as prefrail and 32.6% as frail. For the frail, statistical analysis demonstrated that a CAN score of 55 provides sensitivity, specificity, PPV and NPV of 91.67%, 40.32%, 42.64% and 90.91%, respectively, whereas at a score of 95 the sensitivity, specificity, PPV and NPV were 43.33%, 88.81%, 63.41% and 77.78%, respectively. The area under the receiver operating characteristic curve was 0.736 (95% CI = 0.661-0.811). The CAN score is a potential screening tool for frailty among older adults; it is generated automatically and provides acceptable diagnostic accuracy. Hence, the CAN score may be a useful tool for primary care providers for the detection of frailty in their patient panels.
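
    The cut-off analysis reported above reduces to computing sensitivity, specificity, PPV and NPV from a 2x2 table at a chosen CAN-score threshold. The sketch below shows that computation on hypothetical data; the frailty reference labels and example scores are invented for illustration and do not reproduce the study data.

    ```python
    import numpy as np

    def screening_metrics(scores, is_frail, threshold):
        """Sensitivity, specificity, PPV and NPV of a score-based frailty screen.

        scores: array of CAN-like scores; is_frail: boolean reference labels
        (e.g. from a Frailty Index); a patient screens positive when score >= threshold.
        """
        scores = np.asarray(scores)
        is_frail = np.asarray(is_frail, dtype=bool)
        pos = scores >= threshold
        tp = np.sum(pos & is_frail)
        fp = np.sum(pos & ~is_frail)
        fn = np.sum(~pos & is_frail)
        tn = np.sum(~pos & ~is_frail)
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Hypothetical example: 10 patients screened at the 55-point cut-off
    print(screening_metrics([30, 60, 90, 40, 97, 55, 20, 70, 85, 50],
                            [0, 1, 1, 0, 1, 1, 0, 0, 1, 0], threshold=55))
    ```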

  20. Measuring and Improving the Quality of Preprocedural Assessments.

    PubMed

    Manji, Farah; McCarty, Kelsey; Kurzweil, Vanessa; Mark, Eden; Rathmell, James P; Agarwala, Aalok V

    2017-06-01

    Preprocedural assessments are used by anesthesia providers to optimize perioperative care for patients undergoing invasive procedures. When these assessments are performed in advance by providers who are not caring for the patient during the procedure, there is an additional layer of complexity in ensuring that the workup meets the needs of the primary anesthesia care team. In this study, anesthesia providers were asked to rate the quality of preprocedural assessments prepared by other providers to evaluate anesthesia care team satisfaction. Quality ratings for preprocedural assessments were collected from anesthesia providers on the day of surgery using an electronic quality assurance tool from January 9, 2014 to October 21, 2014. Users could rate assessments as "exemplary," "satisfactory," or "unsatisfactory." Free text comments could be entered for any of the quality ratings chosen. A reviewer trained in clinical anesthesia categorized all comments as "positive," "constructive," or "neutral" and conducted in-depth chart reviews triggered by 67 "constructive" comments submitted during the first 3 months of data collection to further subcategorize perceived deficiencies in the preprocedural assessments. In May 2014, providers were asked to participate in a midpoint survey and provide general feedback about the preprocedural process and evaluations. 37,611 procedures requiring anesthesia were analyzed. Of the 17,522 (46.6%) cases with a rated preprocedural assessment, anesthesia providers rated 3828 (21.8%) as "exemplary," 13,454 (76.8%) as "satisfactory," and 240 (1.4%) as "unsatisfactory." The monthly proportion of "unsatisfactory" ratings ranged from 3.1% to 0% over the study period, whereas the midpoint survey showed that anesthesia providers estimated that the number of unsatisfactory evaluations was 11.5%. Preprocedural evaluations performed on inpatients received significantly better ratings than evaluations performed on outpatients by the preadmission

  1. 1995 mask industry quality assessment

    NASA Astrophysics Data System (ADS)

    Bishop, Chris; Strott, Al

    1995-12-01

    The third annual mask industry assessment will again survey various industry companies for key performance measurements in the areas of quality and delivery. This year's assessment is enhanced to include the area of safety and a further breakdown of the data into 5-inch vs. 6-inch. The data compiled include shipments, customer return rate, customer return reason, performance to schedule, plate survival yield, and throughput time (TPT) from 1988 through Q2 1995. Contributor identities remain protected by utilizing Arthur Andersen & Company to ensure participant confidentiality. Participation in the past included representation of over 75% of the total merchant and captive mask volume in the United States. This year's assessment is expected to result in expanded participation, as all domestic mask suppliers are again invited to participate and international suppliers have also been invited.

  2. Developing Quality Indicators and Auditing Protocols from Formal Guideline Models: Knowledge Representation and Transformations

    PubMed Central

    Advani, Aneel; Goldstein, Mary; Shahar, Yuval; Musen, Mark A.

    2003-01-01

    Automated quality assessment of clinician actions and patient outcomes is a central problem in guideline- or standards-based medical care. In this paper we describe a model representation and algorithm for deriving structured quality indicators and auditing protocols from formalized specifications of guidelines used in decision support systems. We apply the model and algorithm to the assessment of physician concordance with a guideline knowledge model for hypertension used in a decision-support system. The properties of our solution include the ability to derive automatically (1) context-specific and (2) case-mix-adjusted quality indicators that (3) can model global or local levels of detail about the guideline (4) parameterized by defining the reliability of each indicator or element of the guideline. PMID:14728124

  3. Long-term quality assurance of [(18)F]-fluorodeoxyglucose (FDG) manufacturing.

    PubMed

    Gaspar, Ludovit; Reich, Michal; Kassai, Zoltan; Macasek, Fedor; Rodrigo, Luis; Kruzliak, Peter; Kovac, Peter

    2016-01-01

    Nine years of experience with 2286 commercial syntheses allowed us to deliver comprehensive information on the quality of (18)F-FDG production. A semi-automated FDG production line using a Cyclone 18/9 machine (IBA, Belgium), a TRACERLab MXFDG synthesiser (GE Health, USA) using alkaline hydrolysis, a grade "A" isolator with a dispensing robotic unit (Tema Sinergie, Italy), and an automatic control system under GAMP5 (minus2, Slovakia) was assessed by TQM tools as a highly reliable aseptic production line, fully compliant with Good Manufacturing Practice and just-in-time delivery of the FDG radiopharmaceutical. Fluoride-18 is received in steady yield and of very high radioactive purity. Synthesis yields exhibited high variance, probably connected with the quality of disposable cassettes and chemical sets. Most performance non-conformities within the manufacturing cycle occur at mechanical nodes of the dispensing unit. The long-term monitoring of 2286 commercial syntheses indicated high reliability of the automatic synthesizers. Shewhart charts and ANOVA analysis showed that the minor non-compliances that occurred were mostly caused by deviations of less experienced staff from standard operating procedures, and also by the quality of the automatic cassettes. Only 15 syntheses were found unfinished, and in 4 cases the product was out of the specifications of the European Pharmacopoeia. The most vulnerable step of manufacturing was dispensing and filling in the grade "A" isolator. Its cleanliness and sterility were fully controlled during the investigated period by applying hydrogen peroxide vapours (VHP). Our experience with quality assurance in the production of [(18)F]-fluorodeoxyglucose (FDG) at the production facility of BIONT based on the TRACERlab MXFDG production module can be used for benchmarking of emerging manufacturing and automated manufacturing systems.

  4. Long-term quality assurance of [18F]-fluorodeoxyglucose (FDG) manufacturing

    PubMed Central

    Gaspar, Ludovit; Reich, Michal; Kassai, Zoltan; Macasek, Fedor; Rodrigo, Luis; Kruzliak, Peter; Kovac, Peter

    2016-01-01

    Nine years of experience with 2286 commercial syntheses allowed us to deliver comprehensive information on the quality of 18F-FDG production. A semi-automated FDG production line using a Cyclone 18/9 machine (IBA, Belgium), a TRACERLab MXFDG synthesiser (GE Health, USA) using alkaline hydrolysis, a grade “A” isolator with a dispensing robotic unit (Tema Sinergie, Italy), and an automatic control system under GAMP5 (minus2, Slovakia) was assessed by TQM tools as a highly reliable aseptic production line, fully compliant with Good Manufacturing Practice and just-in-time delivery of the FDG radiopharmaceutical. Fluoride-18 is received in steady yield and of very high radioactive purity. Synthesis yields exhibited high variance, probably connected with the quality of disposable cassettes and chemical sets. Most performance non-conformities within the manufacturing cycle occur at mechanical nodes of the dispensing unit. The long-term monitoring of 2286 commercial syntheses indicated high reliability of the automatic synthesizers. Shewhart charts and ANOVA analysis showed that the minor non-compliances that occurred were mostly caused by deviations of less experienced staff from standard operating procedures, and also by the quality of the automatic cassettes. Only 15 syntheses were found unfinished, and in 4 cases the product was out of the specifications of the European Pharmacopoeia. The most vulnerable step of manufacturing was dispensing and filling in the grade “A” isolator. Its cleanliness and sterility were fully controlled during the investigated period by applying hydrogen peroxide vapours (VHP). Our experience with quality assurance in the production of [18F]-fluorodeoxyglucose (FDG) at the production facility of BIONT based on the TRACERlab MXFDG production module can be used for benchmarking of emerging manufacturing and automated manufacturing systems. PMID:27508102

  5. MeSH indexing based on automatically generated summaries.

    PubMed

    Jimeno-Yepes, Antonio J; Plaza, Laura; Mork, James G; Aronson, Alan R; Díaz, Alberto

    2013-06-26

    MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text as input to MTI for the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most
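
    The evaluation described here compares MTI recommendations against the indexer-assigned MeSH headings using recall and precision. A minimal set-based sketch of that comparison is shown below; the example terms are hypothetical and the actual MTI evaluation pipeline is more involved.

    ```python
    def indexing_precision_recall(recommended, gold):
        """Set-based precision and recall of MeSH recommendations against indexer terms."""
        recommended, gold = set(recommended), set(gold)
        tp = len(recommended & gold)
        precision = tp / len(recommended) if recommended else 0.0
        recall = tp / len(gold) if gold else 0.0
        return precision, recall

    # Hypothetical citation: terms suggested from an automatic summary vs. the indexer's terms
    rec = {"Humans", "Neoplasms", "Radiotherapy", "Algorithms"}
    gold = {"Humans", "Neoplasms", "Radiotherapy", "Prognosis", "Adult"}
    print(indexing_precision_recall(rec, gold))  # (0.75, 0.6)
    ```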

  6. MeSH indexing based on automatically generated summaries

    PubMed Central

    2013-01-01

    Background MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text as input to MTI for the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. Results We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Conclusions Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can

  7. Quality control and quality assurance plan for bridge channel-stability assessments in Massachusetts

    USGS Publications Warehouse

    Parker, Gene W.; Pinson, Harlow

    1993-01-01

    A quality control and quality assurance plan has been implemented as part of the Massachusetts bridge scour and channel-stability assessment program. This program is being conducted by the U.S. Geological Survey, Massachusetts-Rhode Island District, in cooperation with the Massachusetts Highway Department. Project personnel training, data-integrity verification, and new data-management technologies are being utilized in the channel-stability assessment process to improve current data-collection and management techniques. An automated data-collection procedure has been implemented to standardize channel-stability assessments on a regular basis within the State. An object-oriented data structure and new image management tools are used to produce a data base enabling management of multiple data object classes. Data will be reviewed by assessors and data base managers before being merged into a master bridge-scour data base, which includes automated data-verification routines.

  8. Assessing Negative Automatic Thoughts: Psychometric Properties of the Turkish Version of the Cognition Checklist

    PubMed Central

    Batmaz, Sedat; Ahmet Yuncu, Ozgur; Kocbiyik, Sibel

    2015-01-01

    Background: Beck’s theory of emotional disorder suggests that negative automatic thoughts (NATs) and the underlying schemata affect one’s way of interpreting situations and result in maladaptive coping strategies. Depending on their content and meaning, NATs are associated with specific emotions, and since they are usually quite brief, patients are often more aware of the emotion they feel. This relationship between cognition and emotion, therefore, is thought to form the background of the cognitive content specificity hypothesis. Researchers focusing on this hypothesis have suggested that instruments like the cognition checklist (CCL) might be an alternative to make a diagnostic distinction between depression and anxiety. Objectives: The aim of the present study was to assess the psychometric properties of the Turkish version of the CCL in a psychiatric outpatient sample. Patients and Methods: A total of 425 psychiatric outpatients 18 years of age and older were recruited. After a structured diagnostic interview, the participants completed the hospital anxiety depression scale (HADS), the automatic thoughts questionnaire (ATQ), and the CCL. An exploratory factor analysis was performed, followed by an oblique rotation. The internal consistency, test-retest reliability, and concurrent and discriminant validity analyses were undertaken. Results: The internal consistency of the CCL was excellent (Cronbach’s α = 0.95). The test-retest correlation coefficients were satisfactory (r = 0.80, P < 0.001 for CCL-D, and r = 0.79, P < 0.001 for CCL-A). The exploratory factor analysis revealed that a two-factor solution best fit the data. This bidimensional factor structure explained 51.27 % of the variance of the scale. The first factor consisted of items related to anxious cognitions, and the second factor of depressive cognitions. The CCL subscales significantly correlated with the ATQ (rs 0.44 for the CCL-D, and 0.32 for the CCL-A) as well as the other measures of

  9. The impact of translucent fabric shades and control strategies on energy savings and visual quality

    NASA Astrophysics Data System (ADS)

    Wankanapon, Pimonmart

    shade colors. The results clearly show the benefit of automatic shade control strategies with integrated lighting control over a condition when shades are closed all day. The main contributor to the total energy savings is from lighting energy savings, followed by cooling energy savings. Shades provide greater benefit in a hot climate and in a moderate climate than in a cold climate. Different control strategies provide savings in the range of 7-35% for annual total space energy, with higher savings with light colored shades. Control strategies of shades should be selected and optimized based on climate, orientation, window area, and window/shade properties. High performance glazings, when equipped with shades, show lower energy savings when compared to standard glazings. High transmittance/reflectance shades, such as white shades, perform better than dark shades in most of the cases due to higher lighting energy savings obtained with the automatic electric lighting control and the resulting cooling energy savings from rejection of some solar energy and a reduction in the heat from lights. A South orientation showed the least benefit of automatic control of shades when compared to other orientations due to the large fraction of time shades are required to provide visual comfort. Under automatic shade control, energy savings are higher the more often the shades can be raised. The different automatic control strategies present tradeoffs between energy savings and comfort. With regard to visual quality, daylight quality assessments on view, glare, luminance ratios, and UDI can be used to assess shade control strategies. Automatic shade control can increase the number of view hours while controlling sunlight penetration. With automatic shade control, more daylight hours can be provided within the beneficial range of 100-2000 lux compared to shades that are closed all day. For a person facing the window, discomfort glare is likely to increase the more often the shades are

  10. Using Data Mining for Wine Quality Assessment

    NASA Astrophysics Data System (ADS)

    Cortez, Paulo; Teixeira, Juliana; Cerdeira, António; Almeida, Fernando; Matos, Telmo; Reis, José

    Certification and quality assessment are crucial issues within the wine industry. Currently, wine quality is mostly assessed by physicochemical (e.g., alcohol levels) and sensory (e.g., human expert evaluation) tests. In this paper, we propose a data mining approach to predict wine preferences that is based on easily available analytical tests at the certification step. A large dataset is considered, with white vinho verde samples from the Minho region of Portugal. Wine quality is modeled under a regression approach, which preserves the order of the grades. Explanatory knowledge is given in terms of a sensitivity analysis, which measures the response changes when a given input variable is varied through its domain. Three regression techniques were applied, under a computationally efficient procedure that performs simultaneous variable and model selection and that is guided by the sensitivity analysis. The support vector machine achieved promising results, outperforming the multiple regression and neural network methods. Such a model is useful for understanding how the physicochemical tests affect the sensory preferences. Moreover, it can support wine expert evaluations and ultimately improve the production.
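
    As a hedged sketch of the modeling setup described here (regression on physicochemical inputs with a support vector machine), the snippet below fits an SVR with 5-fold cross-validation using scikit-learn. It assumes the public UCI "winequality-white.csv" file (semicolon-separated, with a "quality" column) is available locally; the hyperparameters are illustrative rather than the tuned values from the paper, and the sensitivity-analysis-guided selection procedure is omitted.

    ```python
    import pandas as pd
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Assumed local copy of the UCI white vinho verde dataset
    data = pd.read_csv("winequality-white.csv", sep=";")
    X = data.drop(columns="quality")
    y = data["quality"]

    # Regression keeps the order of the grades, as the paper argues
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.5))
    mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
    print(mae.mean())
    ```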

  11. Automatic extraction of three-dimensional thoracic aorta geometric model from phase contrast MRI for morphometric and hemodynamic characterization.

    PubMed

    Volonghi, Paola; Tresoldi, Daniele; Cadioli, Marcello; Usuelli, Antonio M; Ponzini, Raffaele; Morbiducci, Umberto; Esposito, Antonio; Rizzo, Giovanna

    2016-02-01

    To propose and assess a new method that automatically extracts a three-dimensional (3D) geometric model of the thoracic aorta (TA) from 3D cine phase contrast MRI (PCMRI) acquisitions. The proposed method is composed of two steps: segmentation of the TA and creation of the 3D geometric model. The segmentation algorithm, based on Level Set, was set and applied to healthy subjects acquired in three different modalities (with and without SENSE reduction factors). Accuracy was evaluated using standard quality indices. The 3D model is characterized by the vessel surface mesh and its centerline; the comparison of models obtained from the three different datasets was also carried out in terms of radius of curvature (RC) and average tortuosity (AT). In all datasets, the segmentation quality indices confirmed very good agreement between manual and automatic contours (average symmetric distance < 1.44 mm, DICE Similarity Coefficient > 0.88). The 3D models extracted from the three datasets were found to be comparable, with differences of less than 10% for RC and 11% for AT. Our method was found effective on PCMRI data to provide a 3D geometric model of the TA, to support morphometric and hemodynamic characterization of the aorta. © 2015 Wiley Periodicals, Inc.
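
    Segmentation accuracy here is summarized with the DICE similarity coefficient and the average symmetric distance. The sketch below shows the DICE computation for two binary masks, a toy stand-in for one automatic and one manual contour; the distance metric is omitted for brevity.

    ```python
    import numpy as np

    def dice_coefficient(auto_mask, manual_mask):
        """DICE similarity coefficient between two binary segmentation masks."""
        a = np.asarray(auto_mask, dtype=bool)
        m = np.asarray(manual_mask, dtype=bool)
        intersection = np.logical_and(a, m).sum()
        return 2.0 * intersection / (a.sum() + m.sum())

    # Toy 2D example standing in for one PC-MRI slice
    auto = np.zeros((8, 8), dtype=bool);   auto[2:6, 2:6] = True
    manual = np.zeros((8, 8), dtype=bool); manual[3:7, 2:6] = True
    print(dice_coefficient(auto, manual))  # 0.75
    ```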

  12. Web Service for Positional Quality Assessment: the Wps Tier

    NASA Astrophysics Data System (ADS)

    Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.

    2015-08-01

    In the field of spatial data, more and more information becomes available every day, but we still have little or very little information about the quality of spatial data. We consider that the automation of spatial data quality assessment is a true need for the geomatics sector, and that automation is possible by means of web processing services (WPS) and the application of specific assessment procedures. In this paper we propose and develop a WPS tier centered on the automation of positional quality assessment. An experiment using the NSSDA positional accuracy method is presented. The experiment involves the uploading by the client of two datasets (reference and evaluation data). The processing determines homologous pairs of points (by distance) and calculates the value of positional accuracy under the NSSDA standard. The process generates a small report that is sent to the client. From our experiment, we reached some conclusions on the advantages and disadvantages of WPSs when applied to the automation of spatial data accuracy assessments.
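
    The NSSDA computation outlined in this abstract pairs evaluation points with reference points and reports horizontal accuracy at the 95% confidence level as 1.7308 times the radial RMSE (assuming comparable error in x and y). A minimal stand-alone sketch, with a simple nearest-neighbour pairing step standing in for the service's matching logic and an illustrative distance threshold, might look like this:

    ```python
    import numpy as np

    def nssda_horizontal_accuracy(eval_pts, ref_pts, max_dist=5.0):
        """NSSDA horizontal accuracy (95%) from homologous point pairs.

        eval_pts, ref_pts: (N, 2) arrays of x, y coordinates. Pairs are formed by
        nearest-neighbour matching within max_dist map units, a simplification of
        the matching step described in the abstract.
        """
        eval_pts = np.asarray(eval_pts, float)
        ref_pts = np.asarray(ref_pts, float)
        sq_errors = []
        for p in eval_pts:
            d2 = np.sum((ref_pts - p) ** 2, axis=1)
            j = np.argmin(d2)
            if d2[j] <= max_dist ** 2:
                sq_errors.append(d2[j])
        rmse_r = np.sqrt(np.mean(sq_errors))
        return 1.7308 * rmse_r  # NSSDA factor when RMSE_x ~= RMSE_y

    ev = np.array([[100.2, 200.1], [150.0, 250.4], [199.7, 300.0]])
    rf = np.array([[100.0, 200.0], [150.3, 250.0], [200.0, 300.2]])
    print(round(nssda_horizontal_accuracy(ev, rf), 2))  # ~0.66 map units
    ```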

  13. Quality Assessment of Medical Apps that Target Medication-Related Problems.

    PubMed

    Loy, John Shiguang; Ali, Eskinder Eshetu; Yap, Kevin Yi-Lwern

    2016-10-01

    The advent of smartphones has enabled a plethora of medical apps for disease management. As of 2012, there were 40,000 health care-related mobile apps available in the market. Since most of these medical apps do not go through any stringent quality assessment, there is a risk of consumers being misinformed or misled by unreliable information. In this regard, apps that target medication-related problems (MRPs) are not an exception. There is little information on what constitutes quality in apps that target MRPs and how good the existing apps are. The objectives were to develop a quality assessment tool for evaluating apps that target MRPs and to assess the quality of such apps available in the major mobile app stores (iTunes and Google Play). The top 100 free and paid apps in the medical categories of the iTunes and Google Play stores (400 apps in total) were screened for inclusion in the final analysis. English-language apps that targeted MRPs were downloaded on test devices to evaluate their quality. Apps intended for clinicians, patients, or both were eligible for evaluation. The quality assessment tool consisted of 4 sections (appropriateness, reliability, usability, privacy), which determined the overall quality of the apps. Apps that fulfilled the inclusion criteria were classified based on the presence of any 1 or more of the 5 features considered important for apps targeting MRPs (monitoring, interaction checker, dose calculator, medication information, medication record). Descriptive statistics and Mann-Whitney tests were used for analysis. The final analysis was based on 59 apps that fulfilled the study inclusion criteria. Apps with interaction checker (66.9%) and monitoring features (54.8%) had the highest and lowest overall quality, respectively. Paid apps generally scored higher for usability than free apps (P = 0.006) but lower for privacy (P = 0.003). Half of the interaction checker apps were unable to detect interactions with herbal medications. Blood pressure and heart rate monitoring apps

  14. On the Relationship Between Automatic Attitudes and Self-Reported Sexual Assault in Men

    PubMed Central

    Widman, Laura; Olson, Michael

    2013-01-01

    Research and theory suggest rape supportive attitudes are important predictors of sexual assault; yet, to date, rape supportive attitudes have been assessed exclusively through self-report measures that are methodologically and theoretically limited. To address these limitations, the objectives of the current project were to: (1) develop a novel implicit rape attitude assessment that captures automatic attitudes about rape and does not rely on self-reports, and (2) examine the association between automatic rape attitudes and sexual assault perpetration. We predicted that automatic rape attitudes would be a significant unique predictor of sexual assault even when self-reported rape attitudes (i.e., rape myth acceptance and hostility toward women) were controlled. We tested the generalizability of this prediction in two independent samples: a sample of undergraduate college men (n = 75, M age = 19.3 years) and a sample of men from the community (n = 50, M age = 35.9 years). We found the novel implicit rape attitude assessment was significantly associated with the frequency of sexual assault perpetration in both samples and contributed unique variance in explaining sexual assault beyond rape myth acceptance and hostility toward women. We discuss the ways in which future research on automatic rape attitudes may significantly advance measurement and theory aimed at understanding and preventing sexual assault. PMID:22618119

  15. Automatic creation of three-dimensional avatars

    NASA Astrophysics Data System (ADS)

    Villa-Uriol, Maria-Cruz; Sainz, Miguel; Kuester, Falko; Bagherzadeh, Nader

    2003-01-01

    Highly accurate avatars of humans promise a new level of realism in engineering and entertainment applications, including areas such as computer-animated movies, computer game development, interactive virtual environments and tele-presence. In order to provide high-quality avatars, new techniques for their automatic acquisition and creation are required. A framework for the capture and construction of arbitrary avatars from image data is presented in this paper. Avatars are automatically reconstructed from multiple static images of a human subject by utilizing image information to reshape a synthetic three-dimensional articulated reference model. A pipeline is presented that combines a set of hardware-accelerated stages into one seamless system. Primary stages in this pipeline include pose estimation, skeleton fitting, body part segmentation, geometry construction and coloring, leading to avatars that can be animated and included in interactive environments. The presented system removes traditional constraints on the initial pose of the captured subject by using silhouette-based modification techniques in combination with a reference model. Results can be obtained in near-real time with very limited user intervention.

  16. Automatic Pre-Hospital Vital Signs Waveform and Trend Data Capture Fills Quality Management, Triage and Outcome Prediction Gaps

    PubMed Central

    Mackenzie, Colin F; Hu, Peter; Sen, Ayan; Dutton, Rick; Seebode, Steve; Floccare, Doug; Scalea, Tom

    2008-01-01

    Trauma Triage errors are frequent and costly. What happens in pre-hospital care remains anecdotal because of the dual responsibility of treatment (resuscitation and stabilization) and documentation in a time-critical environment. Continuous pre-hospital vital signs waveforms and numerical trends were automatically collected in our study. Abnormalities of pulse oximeter oxygen saturation (< 95%) and validated heart rate (> 100/min) showed better prediction of injury severity, need for immediate blood transfusion, intra-abdominal surgery, tracheal intubation and chest tube insertion than Trauma Registry data or Pre-hospital provider estimations. Automated means of data collection introduced the potential for more accurate and objective reporting of patient vital signs helping in evaluating quality of care and establishing performance indicators and benchmarks. Addition of novel and existing non-invasive monitors and waveform analyses could make the pulse oximeter the decision aid of choice to improve trauma patient triage. PMID:18999022

  17. The performance of an automatic acoustic-based program classifier compared to hearing aid users' manual selection of listening programs.

    PubMed

    Searchfield, Grant D; Linford, Tania; Kobayashi, Kei; Crowhen, David; Latzel, Matthias

    2018-03-01

    To compare preference for and performance of manually selected programmes to an automatic sound classifier, the Phonak AutoSense OS. A single-blind repeated-measures study. Participants were fit with Phonak Virto V90 ITE aids; preferences for different listening programmes were compared across four different sound scenarios (speech in: quiet, noise, loud noise and a car). Following a 4-week trial, preferences were reassessed and the user's preferred programme was compared to the automatic classifier for sound quality and hearing in noise (HINT test) using a 12-loudspeaker array. Twenty-five participants with symmetrical moderate-severe sensorineural hearing loss. Participants' preferences for manual programmes varied considerably between and within sessions. A HINT Speech Reception Threshold (SRT) advantage was observed for the automatic classifier over participants' manual selections for speech in quiet, loud noise and car noise. Sound quality ratings were similar for both manual and automatic selections. The use of a sound classifier is a viable alternative to manual programme selection.

  18. Automatic Telescope Search for Extrasolar Planets

    NASA Technical Reports Server (NTRS)

    Henry, Gregory W.

    1998-01-01

    We are using automatic photoelectric telescopes at the Tennessee State University Center for Automated Space Science to search for planets around nearby stars in our galaxy. Over the past several years, we have developed the capability to make extremely precise measurements of brightness changes in Sun-like stars with automatic telescopes. Extensive quality control and calibration measurements result in a precision of 0.1% for a single nightly observation and 0.02% for yearly means, far better than previously thought possible with ground-based observations. We are able, for the first time, to trace brightness changes in Sun-like stars that are of similar amplitude to brightness changes in the Sun, whose changes can be observed only with space-based radiometers. Recently, exciting discoveries of the first extrasolar planets have been announced, based on the detection of very small radial-velocity variations that imply the existence of planets in orbit around several Sun-like stars. Our precise brightness measurements have been crucial for the confirmation of these discoveries by helping to eliminate alternative explanations for the radial-velocity variations. With our automatic telescopes, we are also searching for transits of these planets across the disks of their stars in order to conclusively verify their existence. The detection of transits would provide the first direct measurements of the sizes, masses, and densities of these planets and, hence, information on their compositions and origins.

  19. Feedback Improvement in Automatic Program Evaluation Systems

    ERIC Educational Resources Information Center

    Skupas, Bronius

    2010-01-01

    Automatic program evaluation is a way to assess source program files. These techniques are used in learning management environments, programming exams and contest systems. However, use of automated program evaluation encounters problems: some evaluations are not clear for the students and the system messages do not show reasons for lost points.…

  20. Quality, management, and the interplay of self-assessment, process assessments, and performance-based observations

    NASA Astrophysics Data System (ADS)

    Willett, D. J.

    1993-04-01

    In this document, the author presents his observations on the topic of quality assurance (QA). Traditionally the focus of quality management has been on QA organizations, manuals, procedures, audits, and assessments; quality was measured by the degree of conformance to specifications or standards. Today quality is defined as satisfying user needs and is measured by user satisfaction. The author proposes that quality is the responsibility of line organizations and staff and not the responsibility of the QA group. This work outlines an effective Conduct of Operations program. The author concludes his observations with a discussion of how quality is analogous to leadership.

  1. First performance evaluation of software for automatic segmentation, labeling and reformation of anatomical aligned axial images of the thoracolumbar spine at CT.

    PubMed

    Scholtz, Jan-Erik; Wichmann, Julian L; Kaup, Moritz; Fischer, Sebastian; Kerl, J Matthias; Lehnert, Thomas; Vogl, Thomas J; Bauer, Ralf W

    2015-03-01

    To evaluate software for automatic segmentation, labeling and reformation of anatomically aligned axial images of the thoracolumbar spine on CT in terms of accuracy, potential for time savings and workflow improvement. 77 patients (28 women, 49 men, mean age 65.3±14.4 years) with known or suspected spinal disorders (degenerative spine disease n=32; disc herniation n=36; traumatic vertebral fractures n=9) underwent 64-slice MDCT with thin-slab reconstruction. Time for automatic labeling of the thoracolumbar spine and reconstruction of double-angulated axial images of the pathological vertebrae was compared with manually performed reconstruction of anatomically aligned axial images. Reformatted images from both reconstruction methods were assessed by two observers regarding accuracy of symmetric depiction of anatomical structures. In 33 cases double-angulated axial images were created for 1 vertebra, in 28 cases for 2 vertebrae and in 16 cases for 3 vertebrae. Correct automatic labeling was achieved in 72 of 77 patients (93.5%). Errors could be manually corrected in 4 cases. Automatic labeling required 1 min on average. In cases where anatomically aligned axial images of 1 vertebra were created, reconstructions made by hand were significantly faster (p<0.05). Automatic reconstruction was time-saving in cases of 2 or more vertebrae (p<0.05). Both reconstruction methods revealed good image quality with excellent inter-observer agreement. The evaluated software for automatic labeling and anatomically aligned, double-angulated axial image reconstruction of the thoracolumbar spine on CT is time-saving when reconstructions of 2 or more vertebrae are performed. Checking the results of automatic labeling is necessary to prevent labeling errors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Assessment of quality of life of parents of children with osteogenesis imperfecta.

    PubMed

    Szczepaniak-Kubat, Anna; Kurnatowska, Olga; Jakubowska-Pietkiewicz, Elzbieta; Chlebna-Sokół, Danuta

    2012-01-01

    The aim of the work was an objective assessment of the quality of life of parents of children with osteogenesis imperfecta (OI) and of its determinant factors. The survey answers of 25 parents were analyzed; they covered demographic parameters, socioeconomic status, quality-of-life responses and the type of support the families had been receiving. In order to assess the effects of the children's disease on the quality of life of the parents, families were divided into two groups depending on OI severity: group M--mild (type I and IV OI), group S--severe (type III OI). The objective of the work was pursued using the WHOQOL-BREF quality of life questionnaire and measures of family status: education degree based on the International Standard Classification of Education (ISCED), a subjective assessment of the family's wealth (Perceived Family Wealth, PFW), and the family's financial resources (Family Affluence Scale, FAS). 56% of respondents assessed their global quality of life (Quality of Life, QL) as good, whereas 8% answered poor. Perception of general health status was similar. Life domains assessed in the WHOQOL-BREF questionnaire received the following mean values on a scale from 4 to 20 points: physical--12.2 +/- 1.2, psychological--15.04 +/- 2.2, environmental--13.32 +/- 2, social relationships--14.28 +/- 1.5. In the severe OI group, the environmental domain was assessed as worse than in the mild OI group, and this difference was statistically significant, despite the fact that the group of families with severe cases of OI received more support from the appropriate institutions. Indicators of socioeconomic status did not affect the respondents' assessment of their global quality of life. In the tested group of families, the child's disease did not affect either the global quality of life assessment or health of the respondents or their quality of life in terms of physical and mental status and social relationships. The parents of children with

  3. Water quality assessment and meta model development in Melen watershed - Turkey.

    PubMed

    Erturk, Ali; Gurel, Melike; Ekdal, Alpaslan; Tavsan, Cigdem; Ugurluoglu, Aysegul; Seker, Dursun Zafer; Tanik, Aysegul; Ozturk, Izzet

    2010-07-01

    Istanbul, being one of the highly populated metropolitan areas of the world, has been facing water scarcity since the past decade. Water transfer from the Melen Watershed was considered the most feasible option to supply water to Istanbul due to its high water potential and relatively less degraded water quality. This study consists of two parts. In the first part, water quality data covering 26 parameters from 5 monitoring stations were analyzed and assessed against the requirements of the "Quality Required of Surface Water Intended for the Abstraction of Drinking Water" regulation. In the second part, a one-dimensional stream water quality model with simple water quality kinetics was developed. It formed a basic design for more advanced water quality models for the watershed. The reason for assessing the water quality data and developing a model was to provide information for decision making on preliminary actions to prevent any further deterioration of existing water quality. According to the water quality assessment at the water abstraction point, Melen River has relatively poor water quality with regard to NH(4)(+), BOD(5), faecal streptococcus, manganese and phenol parameters, and is unsuitable for drinking water abstraction in terms of COD, PO(4)(3-), total coliform, total suspended solids, mercury and total chromium parameters. The results derived from the model were found to be consistent with the water quality assessment. It also showed that relatively high inorganic nitrogen and phosphorus concentrations along the streams are related to diffuse nutrient loads that should be managed together with municipal and industrial wastewaters. Copyright 2010 Elsevier Ltd. All rights reserved.

  4. Visual assessment of CPR quality during pediatric cardiac arrest: does point of view matter?

    PubMed

    Jones, Angela; Lin, Yiqun; Nettel-Aguirre, Alberto; Gilfoyle, Elaine; Cheng, Adam

    2015-05-01

    In many clinical settings, providers rely on visual assessment when delivering feedback on CPR quality. Little is known about the accuracy of visual assessment of CPR quality. We aimed to determine how accurate pediatric providers are in their visual assessment of CPR quality and to identify the optimal position relative to the patient for accurate CPR assessment. We videotaped high-quality CPR (based on 2010 American Heart Association guidelines) and 3 variations of poor quality CPR in a simulated resuscitation, filmed from the foot, head and the side of the manikin. Participants watched 12 videos and completed a questionnaire to assess CPR quality. One hundred and twenty-five participants were recruited. The overall accuracy of visual assessment of CPR quality was 65.6%. Accuracy was better from the side (70.8%) and foot (68.8%) of the bed when compared to the head of the bed (57.2%; p<0.001). The side was the best position for assessing depth (p<0.001). Rate assessment was equivalent between positions (p=0.58). The side and foot of the bed were superior to the head when assessing chest recoil (p<0.001). Factors associated with increased accuracy in visual assessment of CPR quality included recent CPR course completion (p=0.034) and involvement in more cardiac arrests as a team member (p=0.003). Healthcare providers struggle to accurately assess the quality of CPR using visual assessment. If visual assessment is being used, providers should stand at the side of the bed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Quality assessment of malaria laboratory diagnosis in South Africa.

    PubMed

    Dini, Leigh; Frean, John

    2003-01-01

    To assess the quality of malaria diagnosis in 115 South African laboratories participating in the National Health Laboratory Service Parasitology External Quality Assessment Programme, we reviewed the results of 7 surveys conducted from January 2000 to August 2002. The mean incorrect result rate was 13.8% (95% CI 11.3-16.9%), which is alarmingly high, with about 1 in 7 blood films being incorrectly interpreted. Most participants with incorrect blood film interpretations had acceptable Giemsa staining quality, indicating that the problem lies less with staining technique than with blood film interpretation. Laboratories in provinces in which malaria is endemic did not necessarily perform better than those in non-endemic areas. The results clearly suggest that malaria laboratory diagnosis throughout South Africa needs strengthening through improved laboratory standardization and auditing, training, quality assurance and referral resources.

  6. Realizing parameterless automatic classification of remote sensing imagery using ontology engineering and cyberinfrastructure techniques

    NASA Astrophysics Data System (ADS)

    Sun, Ziheng; Fang, Hui; Di, Liping; Yue, Peng

    2016-09-01

    Fully automatic image classification without any user-supplied parameter values has long been out of reach for remote sensing experts, who typically spend hours tuning the input parameters of classification algorithms to obtain the best results. With the rapid development of knowledge engineering and cyberinfrastructure, many data processing and knowledge reasoning capabilities have become accessible, shareable and interoperable online. Building on these improvements, this paper presents the idea of parameterless automatic classification, which requires only an image and automatically outputs a labeled vector; no parameters or operations are needed from end consumers. An approach is proposed to realize the idea. It adopts an ontology database to store experts' experience in tuning classifier parameters, a sample database to record training samples of image segments, geoprocessing Web services as functional blocks for the basic classification steps, and workflow technology to turn the overall image classification into a fully automatic process. A Web-based prototype named PACS (Parameterless Automatic Classification System) was implemented and evaluated on a number of images. The results show that the approach can classify remote sensing images automatically with fairly good average accuracy, and that the classified results become more accurate as the quality of the two databases improves. Once the accumulated experience and samples match those of a human expert, the approach should produce results of similar quality to those a human expert can obtain. Since the approach is fully automatic and parameterless, it can not only relieve remote sensing workers of heavy and time-consuming parameter tuning, but also significantly shorten the waiting time for consumers and make it easier for them to engage in image
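
    The core idea above, replacing user-supplied parameters with stored tuning experience, can be sketched in a few lines. The snippet below is a minimal illustration only; the experience records, sensor keys and parameter names are hypothetical stand-ins for the ontology and sample databases described in the abstract.

    ```python
    # Minimal sketch of "look up tuned parameters instead of asking the user".
    # The experience store, sensor keys and parameter values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Experience:
        sensor: str
        n_classes: int
        params: dict  # classifier parameters an expert found to work well

    EXPERIENCE_DB = [
        Experience("Landsat-8", 6, {"segmentation_scale": 50, "min_samples": 30}),
        Experience("Sentinel-2", 8, {"segmentation_scale": 35, "min_samples": 40}),
    ]

    def lookup_parameters(sensor: str) -> dict:
        """Return stored tuning experience for a sensor, or a generic default."""
        for exp in EXPERIENCE_DB:
            if exp.sensor == sensor:
                return exp.params
        return {"segmentation_scale": 40, "min_samples": 25}

    def classify(image_path: str, sensor: str) -> str:
        params = lookup_parameters(sensor)
        # ... segmentation + supervised classification would run here ...
        return f"classified {image_path} with {params}"

    print(classify("scene_001.tif", "Sentinel-2"))
    ```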

  7. A cloud-based system for automatic glaucoma screening.

    PubMed

    Fengshou Yin; Damon Wing Kee Wong; Ying Quan; Ai Ping Yow; Ngan Meng Tan; Gopalakrishnan, Kavitha; Beng Hai Lee; Yanwu Xu; Zhuo Zhang; Jun Cheng; Jiang Liu

    2015-08-01

    In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases, including glaucoma. However, these systems are usually standalone software with only basic functions, limiting their use at large scale. In this paper, we introduce an online cloud-based system for automatic glaucoma screening based on medical image pattern classification technologies. It is designed in a hybrid cloud pattern to offer both accessibility and enhanced security. Raw data, including the patient's medical condition and fundus image, and the resulting medical reports are collected and distributed through the public cloud tier. In the private cloud tier, automatic analysis and assessment of colour retinal fundus images are performed. The ubiquitous, anywhere-access nature of the system through the cloud platform facilitates a more efficient and cost-effective means of glaucoma screening, allowing the disease to be detected earlier and enabling more timely intervention and disease management.

  8. Relations between Automatically Extracted Motion Features and the Quality of Mother-Infant Interactions at 4 and 13 Months

    PubMed Central

    Egmose, Ida; Varni, Giovanna; Cordes, Katharina; Smith-Nielsen, Johanne; Væver, Mette S.; Køppe, Simo; Cohen, David; Chetouani, Mohamed

    2017-01-01

    Bodily movements are an essential component of social interactions. However, the role of movement in early mother-infant interaction has received little attention in the research literature. The aim of the present study was to investigate the relationship between automatically extracted motion features and interaction quality in mother-infant interactions at 4 and 13 months. The sample consisted of 19 mother-infant dyads at 4 months and 33 mother-infant dyads at 13 months. The coding system Coding Interactive Behavior (CIB) was used for rating the quality of the interactions. Kinetic energy of upper-body, arms and head motion was calculated and used as segmentation in order to extract coarse- and fine-grained motion features. Spearman correlations were conducted between the composites derived from the CIB and the coarse- and fine-grained motion features. At both 4 and 13 months, longer durations of maternal arm motion and infant upper-body motion were associated with more aversive interactions, i.e., more parent-led interactions and more infant negativity. Further, at 4 months, the amount of motion silence was related to more adaptive interactions, i.e., more sensitive and child-led interactions. Analyses of the fine-grained motion features showed that if the mother coordinates her head movements with her infant's head movements, the interaction is rated as more adaptive in terms of less infant negativity and less dyadic negative states. We found more and stronger correlations between the motion features and the interaction qualities at 4 compared to 13 months. These results highlight that motion features are related to the quality of mother-infant interactions. Factors such as infant age and interaction set-up are likely to modify the meaning and importance of different motion features. PMID:29326626
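
    As a rough illustration of the kind of motion feature described above, the sketch below computes a per-frame kinetic-energy proxy (0.5·m·v²) from a tracked 2-D keypoint trajectory and correlates a per-dyad summary with a rating composite using Spearman's rho. The trajectories, frame rate, unit mass and placeholder CIB scores are all assumptions for illustration, not the study's actual pipeline.

    ```python
    # Sketch: kinetic-energy motion features and a Spearman correlation,
    # assuming tracked 2-D keypoint trajectories sampled at a fixed frame rate.
    import numpy as np
    from scipy.stats import spearmanr

    def kinetic_energy(positions, fps=25.0, mass=1.0):
        """Per-frame kinetic energy 0.5*m*|v|^2 from an (n_frames, 2) trajectory."""
        velocity = np.diff(positions, axis=0) * fps          # units / second
        speed_sq = np.sum(velocity ** 2, axis=1)             # |v|^2
        return 0.5 * mass * speed_sq

    rng = np.random.default_rng(0)
    n_dyads = 19
    # Hypothetical per-dyad feature: mean kinetic energy of the mother's arm marker.
    arm_energy = [kinetic_energy(np.cumsum(rng.normal(size=(500, 2)), axis=0)).mean()
                  for _ in range(n_dyads)]
    cib_negativity = rng.uniform(1, 5, size=n_dyads)         # placeholder CIB composite

    rho, p_value = spearmanr(arm_energy, cib_negativity)
    print(f"Spearman rho={rho:.2f}, p={p_value:.3f}")
    ```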

  9. Relations between Automatically Extracted Motion Features and the Quality of Mother-Infant Interactions at 4 and 13 Months.

    PubMed

    Egmose, Ida; Varni, Giovanna; Cordes, Katharina; Smith-Nielsen, Johanne; Væver, Mette S; Køppe, Simo; Cohen, David; Chetouani, Mohamed

    2017-01-01

    Bodily movements are an essential component of social interactions. However, the role of movement in early mother-infant interaction has received little attention in the research literature. The aim of the present study was to investigate the relationship between automatically extracted motion features and interaction quality in mother-infant interactions at 4 and 13 months. The sample consisted of 19 mother-infant dyads at 4 months and 33 mother-infant dyads at 13 months. The coding system Coding Interactive Behavior (CIB) was used for rating the quality of the interactions. Kinetic energy of upper-body, arms and head motion was calculated and used as segmentation in order to extract coarse- and fine-grained motion features. Spearman correlations were conducted between the composites derived from the CIB and the coarse- and fine-grained motion features. At both 4 and 13 months, longer durations of maternal arm motion and infant upper-body motion were associated with more aversive interactions, i.e., more parent-led interactions and more infant negativity. Further, at 4 months, the amount of motion silence was related to more adaptive interactions, i.e., more sensitive and child-led interactions. Analyses of the fine-grained motion features showed that if the mother coordinates her head movements with her infant's head movements, the interaction is rated as more adaptive in terms of less infant negativity and less dyadic negative states. We found more and stronger correlations between the motion features and the interaction qualities at 4 compared to 13 months. These results highlight that motion features are related to the quality of mother-infant interactions. Factors such as infant age and interaction set-up are likely to modify the meaning and importance of different motion features.

  10. Quality of life assessment in interventional radiology.

    PubMed

    Monsky, Wayne L; Khorsand, Derek; Nolan, Timothy; Douglas, David; Khanna, Pavan

    2014-03-01

    The aim of this review was to describe quality of life (QoL) questionnaires relevant to interventional radiology. Interventional radiologists perform a large number of palliative procedures. The effect of these therapies on QoL is important. This is particularly true for cancer therapies where procedures with marginal survival benefits may result in tremendous QoL benefits. Image-guided minimally invasive procedures should be compared to invasive procedures, with respect to QoL, as part of comparative effectiveness assessment. A large number of questionnaires have been validated for measurement of overall and disease-specific quality of life. Use of applicable QoL assessments can aid in evaluating clinical outcomes and help to further substantiate the need for minimally invasive image-guided procedures. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  11. Driving photomask supplier quality through automation

    NASA Astrophysics Data System (ADS)

    Russell, Drew; Espenscheid, Andrew

    2007-10-01

    In 2005, Freescale Semiconductor's newly centralized mask data prep organization (MSO) initiated a project to develop an automated global quality validation system for photomasks delivered to Freescale Semiconductor fabs. The system handles Certificate of Conformance (CofC) quality metric collection, validation and reporting, and provides an alert system for all photomasks shipped to Freescale fabs from all qualified global suppliers. The completed system automatically collects 30+ quality metrics for each photomask shipped. Other quality metrics are derived from the collected data, and quality metric conformance is automatically validated against specifications or control limits, with failure alerts emailed to fab photomask and mask data prep engineering. A quality data warehouse stores the data for future analysis, which is performed quarterly. The improved access to data provided by the system has improved Freescale engineers' ability to spot trends and opportunities for improvement in our suppliers' processes. This paper reviews each phase of the project, current system capabilities and the quality system benefits for both our photomask suppliers and Freescale.
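
    The validation step described above, checking each collected CofC metric against a specification or control limit and raising an alert on failure, can be sketched as follows. The metric names and limits are hypothetical; the real system's metrics and alerting mechanism are not specified in the abstract.

    ```python
    # Sketch: validate Certificate-of-Conformance metrics against control limits
    # and collect alert messages. Metric names and limits here are hypothetical.
    CONTROL_LIMITS = {
        "registration_error_nm": (0.0, 12.0),
        "cd_mean_to_target_nm":  (-4.0, 4.0),
        "defect_count":          (0.0, 0.0),
    }

    def validate_cofc(metrics: dict) -> list:
        """Return a list of human-readable failure messages (empty list = pass)."""
        failures = []
        for name, value in metrics.items():
            low, high = CONTROL_LIMITS.get(name, (float("-inf"), float("inf")))
            if not (low <= value <= high):
                failures.append(f"{name}={value} outside [{low}, {high}]")
        return failures

    alerts = validate_cofc({"registration_error_nm": 14.2,
                            "cd_mean_to_target_nm": 1.1,
                            "defect_count": 0})
    if alerts:
        print("ALERT -> email engineering:", "; ".join(alerts))
    ```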

  12. Methodology for stereoscopic motion-picture quality assessment

    NASA Astrophysics Data System (ADS)

    Voronov, Alexander; Vatolin, Dmitriy; Sumin, Denis; Napadovsky, Vyacheslav; Borisov, Alexey

    2013-03-01

    Creating and processing stereoscopic video imposes additional quality requirements related to view synchronization. In this work we propose a set of algorithms for detecting typical stereoscopic-video problems, which appear owing to imprecise setup of capture equipment or incorrect postprocessing. We developed a methodology for analyzing the quality of S3D motion pictures and for revealing their most problematic scenes. We then processed 10 modern stereo films, including Avatar, Resident Evil: Afterlife and Hugo, and analyzed changes in S3D-film quality over the years. This work presents real examples of common artifacts (color and sharpness mismatch, vertical disparity and excessive horizontal disparity) in the motion pictures we processed, as well as possible solutions for each problem. Our results enable improved quality assessment during the filming and postproduction stages.

  13. The SIETTE Automatic Assessment Environment

    ERIC Educational Resources Information Center

    Conejo, Ricardo; Guzmán, Eduardo; Trella, Monica

    2016-01-01

    This article describes the evolution and current state of the domain-independent Siette assessment environment. Siette supports different assessment methods--including classical test theory, item response theory, and computer adaptive testing--and integrates them with multidimensional student models used by intelligent educational systems.…

  14. A comparison of different functions for predicted protein model quality assessment.

    PubMed

    Li, Juan; Fang, Huisheng

    2016-07-01

    In protein structure prediction, a considerable number of models are usually produced by either the Template-Based Method (TBM) or the ab initio prediction. The purpose of this study is to find the critical parameter in assessing the quality of the predicted models. A non-redundant template library was developed and 138 target sequences were modeled. The target sequences were all distant from the proteins in the template library and were aligned with template library proteins on the basis of the transformation matrix. The quality of each model was first assessed with QMEAN and its six parameters, which are C_β interaction energy (C_beta), all-atom pairwise energy (PE), solvation energy (SE), torsion angle energy (TAE), secondary structure agreement (SSA), and solvent accessibility agreement (SAE). Finally, the alignment score (score) was also used to assess the quality of model. Hence, a total of eight parameters (i.e., QMEAN, C_beta, PE, SE, TAE, SSA, SAE, score) were independently used to assess the quality of each model. The results indicate that SSA is the best parameter to estimate the quality of the model.

  15. AN ASSESSMENT OF AUTOMATIC SEWER FLOW SAMPLERS (EPA/600/2-75/065)

    EPA Science Inventory

    A brief review of the characteristics of storm and combined sewer flows is given followed by a general discussion of the purposes for and requirements of a sampling program. The desirable characteristics of automatic sampling equipment are set forth and problem areas are outlined...

  16. Computer vision and soft computing for automatic skull-face overlay in craniofacial superimposition.

    PubMed

    Campomanes-Álvarez, B Rosario; Ibáñez, O; Navarro, F; Alemán, I; Botella, M; Damas, S; Cordón, O

    2014-12-01

    Craniofacial superimposition can provide evidence to support whether or not a set of human skeletal remains belongs to a missing person. It involves overlaying a skull with a number of ante mortem images of an individual and analysing their morphological correspondence. Within the craniofacial superimposition process, the skull-face overlay stage focuses on achieving the best possible overlay of the skull and a single ante mortem image of the suspect. Although craniofacial superimposition has been in use for over a century, skull-face overlay is still applied by means of a trial-and-error approach without an automatic method. Practitioners finish the process once they consider that a good enough overlay has been attained. Hence, skull-face overlay is a very challenging, subjective, error-prone, and time-consuming part of the whole process. Although a numerical assessment of the method's quality has not yet been achieved, computer vision and soft computing arise as powerful tools to automate it, dramatically reducing the time taken by the expert and producing an unbiased overlay result. In this manuscript, we justify and analyze the use of these techniques to properly model the skull-face overlay problem. We also present the automatic technical procedure we have developed using these computational methods and show the four overlays obtained in two craniofacial superimposition cases. This automatic procedure can thus be considered a tool to aid forensic anthropologists in performing the skull-face overlay, automating and avoiding the subjectivity of the most tedious task within craniofacial superimposition. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. DALMATIAN: An Algorithm for Automatic Cell Detection and Counting in 3D.

    PubMed

    Shuvaev, Sergey A; Lazutkin, Alexander A; Kedrov, Alexander V; Anokhin, Konstantin V; Enikolopov, Grigori N; Koulakov, Alexei A

    2017-01-01

    Current 3D imaging methods, including optical projection tomography, light-sheet microscopy, block-face imaging, and serial two-photon tomography, enable visualization of large samples of biological tissue. The large volumes of data obtained at high resolution require the development of automatic image processing techniques, such as algorithms for automatic cell detection or, more generally, point-like object detection. Current approaches to automated cell detection have difficulty with particular cell types, cell populations of differing brightness, non-uniformly stained cells, and overlapping cells. In this study, we present a set of algorithms for robust automatic cell detection in 3D. Our algorithms are suitable for, but not limited to, whole brain regions and individual brain sections. We used a watershed procedure to split regional maxima representing overlapping cells, and developed a bootstrap Gaussian fit procedure to evaluate the statistical significance of detected cells. We compared the cell detection quality of our algorithm and other software using 42 samples representing 6 staining and imaging techniques. The results provided by our algorithm matched manual expert quantification with signal-to-noise-dependent confidence, including for samples with cells of differing brightness, non-uniform staining, and overlapping cells, for both whole brain regions and individual tissue sections. Our algorithm provided the best cell detection quality among the tested free and commercial software.
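
    The watershed step mentioned above, splitting touching cells at regional maxima, is illustrated below with a standard distance-transform watershed in scikit-image. This is a generic 2-D sketch under assumed settings (Otsu threshold, min_distance=5), not the DALMATIAN implementation itself.

    ```python
    # Sketch: split touching cells with a distance-transform watershed (2-D case).
    # The thresholding choice and the min_distance value are illustrative.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def detect_cells(image: np.ndarray) -> np.ndarray:
        """Return a label image; overlapping bright blobs are split at ridges."""
        binary = image > threshold_otsu(image)
        distance = ndi.distance_transform_edt(binary)
        peaks = peak_local_max(distance, min_distance=5, labels=binary)
        markers = np.zeros(distance.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        return watershed(-distance, markers, mask=binary)

    # Two overlapping Gaussian blobs as a toy example.
    yy, xx = np.mgrid[0:64, 0:64]
    img = np.exp(-((xx - 25) ** 2 + (yy - 32) ** 2) / 60.0)
    img += np.exp(-((xx - 40) ** 2 + (yy - 32) ** 2) / 60.0)
    labels = detect_cells(img)
    print("cells found:", labels.max())
    ```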

  18. 78 FR 15023 - Office of Health Assessment and Translation Webinar on the Assessment of Data Quality in Animal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-08

    ... and Translation Webinar on the Assessment of Data Quality in Animal Studies; Notice of Public Webinar...- based meeting on the assessment of data quality in animal studies. The Office of Health Assessment and... meetings with a focus on methodological issues related to OHAT implementing systematic review. The first...

  19. Automatic image assessment from facial attributes

    NASA Astrophysics Data System (ADS)

    Ptucha, Raymond; Kloosterman, David; Mittelstaedt, Brian; Loui, Alexander

    2013-03-01

    Personal consumer photography collections often contain photos captured by numerous devices, stored both locally and via online services. The task of gathering, organizing, and assembling still and video assets in preparation for sharing with others can be quite challenging. Current commercial photobook applications are mostly manual, requiring significant user interaction. To assist the consumer in organizing these assets, we propose an automatic method that assigns a fitness score to each asset, whereby the top-scoring assets are used for product creation. Our method uses cues extracted from analyzing pixel data, metadata embedded in the file, and ancillary tags or online comments. When a face occurs in an image, its features have a dominating influence on both the aesthetic and compositional properties of the displayed image. As such, this paper emphasizes the contribution faces make to the overall fitness score of an image. To understand consumer preference, we conducted a psychophysical study that spanned 27 judges, 5,598 faces, and 2,550 images. Preferences on a per-face and per-image basis were independently gathered to train our classifiers. We describe how to use machine learning techniques to merge differing facial attributes into a single classifier. Our novel methods of facial weighting, fusion of facial attributes, and dimensionality reduction produce state-of-the-art results suitable for commercial applications.

  20. Improved model quality assessment using ProQ2.

    PubMed

    Ray, Arjun; Lindahl, Erik; Wallner, Björn

    2012-09-10

    Employing methods to assess the quality of modeled protein structures is now standard practice in bioinformatics. In a broad sense, the techniques can be divided into methods relying on consensus prediction on the one hand, and single-model methods on the other. Consensus methods frequently perform very well when there is a clear consensus, but this is not always the case. In particular, they frequently fail to select the best possible model in the hard cases (lacking consensus) or in the easy cases where models are very similar. In contrast, single-model methods do not suffer from these drawbacks and can potentially be applied to any protein of interest to assess quality or as a scoring function for sampling-based refinement. Here, we present a new single-model method, ProQ2, based on ideas from its predecessor, ProQ. ProQ2 is a model quality assessment algorithm that uses support vector machines to predict local as well as global quality of protein models. Improved performance is obtained by combining previously used features with updated structural and predicted features. The most important contributions are the profile weighting of the residue-specific features and the use of features averaged over the whole model, even though the prediction is still local. ProQ2 is significantly better than its predecessors at detecting high-quality models, improving the sum of Z-scores for the selected first-ranked models by 20% and 32% compared to the second-best single-model method in CASP8 and CASP9, respectively. The absolute quality assessment of the models at both the local and global level is also improved. The Pearson correlation between the correct and predicted local score improves from 0.59 to 0.70 on CASP8 and from 0.62 to 0.68 on CASP9; for the global score against the correct GDT_TS, it improves from 0.75 to 0.80 and from 0.77 to 0.80, again compared to the second-best single-model methods in CASP8 and CASP9, respectively. ProQ2 is available at http://proq2

  1. Gaia: automated quality assessment of protein structure models.

    PubMed

    Kota, Pradeep; Ding, Feng; Ramachandran, Srinivas; Dokholyan, Nikolay V

    2011-08-15

    Increasing use of structural modeling for understanding structure-function relationships in proteins has led to the need to ensure that the protein models being used are of acceptable quality. The quality of a given protein structure can be assessed by comparing various intrinsic structural properties of the protein to those observed in high-resolution protein structures. In this study, we present tools to compare a given structure to high-resolution crystal structures. We assess packing by calculating the total void volume, the percentage of unsatisfied hydrogen bonds, the number of steric clashes and the scaling of the accessible surface area. We assess covalent geometry by determining bond lengths, angles, dihedrals and rotamers. The statistical parameters for the above measures, obtained from high-resolution crystal structures, enable us to provide a quality score that points to specific areas where a given protein structural model needs improvement. We provide these tools, which appraise protein structures, in the form of a web server, Gaia (http://chiron.dokhlab.org). Gaia evaluates the packing and covalent geometry of a given protein structure and provides a quantitative comparison of the given structure to high-resolution crystal structures. dokh@unc.edu Supplementary data are available at Bioinformatics online.
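
    One of the packing checks listed above, counting steric clashes, can be approximated by counting non-bonded atom pairs closer than a distance cutoff. The sketch below uses a KD-tree for the pair search; the 2.2 Å cutoff, the bonded-pair exclusion rule and the random coordinates are illustrative assumptions, not Gaia's exact criteria.

    ```python
    # Sketch: count steric clashes as non-bonded atom pairs closer than a cutoff.
    # The 2.2 Angstrom cutoff and the random coordinates are illustrative only.
    import numpy as np
    from scipy.spatial import cKDTree

    def count_clashes(coords: np.ndarray, cutoff: float = 2.2) -> int:
        """Number of atom pairs (i < j, non-adjacent in sequence) within `cutoff`."""
        tree = cKDTree(coords)
        pairs = tree.query_pairs(r=cutoff)
        # Ignore pairs of consecutive atoms, which are covalently bonded.
        return sum(1 for i, j in pairs if abs(i - j) > 1)

    rng = np.random.default_rng(1)
    atoms = rng.uniform(0.0, 20.0, size=(300, 3))   # placeholder model coordinates
    print("clashes:", count_clashes(atoms))
    ```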

  2. Assessment of CT image quality using a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Reginatto, M.; Anton, M.; Elster, C.

    2017-08-01

    One of the most promising approaches for evaluating CT image quality is task-specific quality assessment. This involves a simplified version of a clinical task, e.g. deciding whether an image belongs to the class of images that contain the signature of a lesion or not. Task-specific quality assessment can be done by model observers, which are mathematical procedures that carry out the classification task. The most widely used figure of merit for CT image quality is the area under the ROC curve (AUC), a quantity which characterizes the performance of a given model observer. In order to estimate AUC from a finite sample of images, different approaches from classical statistics have been suggested. The goal of this paper is to introduce task-specific quality assessment of CT images to metrology and to propose a novel Bayesian estimation of AUC for the channelized Hotelling observer (CHO) applied to the task of detecting a lesion at a known image location. It is assumed that signal-present and signal-absent images follow multivariate normal distributions with the same covariance matrix. The Bayesian approach results in a posterior distribution for the AUC of the CHO which provides in addition a complete characterization of the uncertainty of this figure of merit. The approach is illustrated by its application to both simulated and experimental data.
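
    For the equal-covariance multivariate normal model assumed above, the (channelized) Hotelling observer and its AUC have standard closed forms; a minimal sketch in conventional notation (not the paper's Bayesian posterior treatment) is:

    ```latex
    % Channelized Hotelling observer for channel outputs v with class means
    % \bar{v}_0, \bar{v}_1 and common channel covariance S:
    \[
      w = S^{-1}\left(\bar{v}_1 - \bar{v}_0\right), \qquad
      t(v) = w^{\top} v,
    \]
    % detectability and area under the ROC curve (equal-covariance Gaussian case):
    \[
      d_a^2 = \left(\bar{v}_1 - \bar{v}_0\right)^{\top} S^{-1}
              \left(\bar{v}_1 - \bar{v}_0\right), \qquad
      \mathrm{AUC} = \Phi\!\left(\frac{d_a}{\sqrt{2}}\right).
    \]
    ```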

  3. Man vs. Machine: An interactive poll to evaluate hydrological model performance of a manual and an automatic calibration

    NASA Astrophysics Data System (ADS)

    Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    In recent years, a great deal of research in hydrological modelling has been invested in improving the automatic calibration of rainfall-runoff models. This includes, for example, (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation and (3) the use of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration can draw on expert knowledge to judge the hydrographs both in detail and holistically. This integrated eye-ball verification procedure can be difficult to formulate as objective criteria, even when using a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often relies solely on objective criteria such as the Nash-Sutcliffe efficiency or the Kling-Gupta efficiency as a benchmark during the calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph, leaving questions concerning the quality of a simulation open. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets derived from a manual and an automatic calibration. A subset of the resulting hydrographs for observation and simulation, representing the typical flow conditions and events, will be evaluated in this study. In an interactive crowdsourcing approach, experts attending the session can vote for their preferred simulated hydrograph without having information on the calibration method that
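
    For reference, the two objective criteria named above are commonly defined as follows (standard textbook forms, not specific to this study):

    ```latex
    % Nash-Sutcliffe efficiency (NSE) and Kling-Gupta efficiency (KGE) for
    % observed discharge Q_o and simulated discharge Q_s over time steps t:
    \[
      \mathrm{NSE} = 1 - \frac{\sum_{t}\left(Q_{s,t} - Q_{o,t}\right)^{2}}
                              {\sum_{t}\left(Q_{o,t} - \overline{Q_{o}}\right)^{2}},
    \]
    \[
      \mathrm{KGE} = 1 - \sqrt{(r-1)^{2} + (\alpha-1)^{2} + (\beta-1)^{2}},
      \qquad
      \alpha = \frac{\sigma_{s}}{\sigma_{o}}, \quad
      \beta  = \frac{\mu_{s}}{\mu_{o}},
    \]
    % where r is the linear correlation between simulated and observed discharge.
    ```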

  4. Tennessee Star-Quality Child Care Program: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Tennessee's Star-Quality Child Care Program prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4)…

  5. Oregon Child Care Quality Indicators Program: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Oregon's Child Care Quality Indicators Program prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4)…

  6. Category 1 external quality assessment program for serum creatinine.

    PubMed

    González-Lao, Elisabet; Díaz-Garzón, Jorge; Corte, Zoraida; Ricós, Carmen; Perich, Carmen; Álvarez, Virtudes; Simón, Margarita; Minchinela, Joana; García-Lario, José Vicente; Boned, Beatriz; Biosca, Carmen; Cava, Fernando; Fernández-Fernández, Pilar; Fernández-Calle, Pilar

    2017-03-01

    The Commission of Analytical Quality and the Committee of External Quality Programs of the Spanish Society of Laboratory Medicine (SEQC), in collaboration with the Dutch Foundation for Quality, organized the first national category 1 External Quality Assessment Program (EQAP) pilot study. The aim was to evaluate the standardization of serum creatinine measurements in Spanish laboratories through a category 1 external quality assurance program with commutable material and reference-method-assigned values. A total of 87 Spanish laboratories were involved in this program in 2015. Each day a control sample was measured in duplicate over 6 consecutive days. The percentage deviations and coefficients of variation obtained were compared with quality specifications derived from biological variation. A total of 1044 creatinine results were obtained. Laboratories were coded into 11 different method-traceability combinations. Only enzymatic methods yielded all results within the acceptability limits. Participation in a category 1 EQAP is a valuable tool for assessing the degree of standardization in our country; a major effort should be made to encourage laboratories to change their procedures and to use enzymatic creatinine methods, in order to achieve a satisfactory degree of standardization for this important analyte.
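
    The quality specifications "derived from biological variation" mentioned above are commonly taken to be the desirable specifications of the biological-variation model; a sketch of those widely used limits, assuming within-subject variation CV_I and between-subject variation CV_G, is given below. Whether the study applied exactly these limits is not stated in the abstract.

    ```latex
    % Commonly used "desirable" analytical performance specifications derived
    % from biological variation (CV_I: within-subject, CV_G: between-subject
    % biological coefficient of variation):
    \[
      \mathrm{CV}_{A} \le 0.5\,\mathrm{CV}_{I},
      \qquad
      |B_{A}| \le 0.25\sqrt{\mathrm{CV}_{I}^{2} + \mathrm{CV}_{G}^{2}},
      \qquad
      \mathrm{TE}_{a} \le 1.65\,(0.5\,\mathrm{CV}_{I})
                        + 0.25\sqrt{\mathrm{CV}_{I}^{2} + \mathrm{CV}_{G}^{2}}.
    \]
    ```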

  7. Patient-perceived hospital service quality: an empirical assessment.

    PubMed

    Pai, Yogesh P; Chary, Satyanarayana T; Pai, Rashmi Yogesh

    2018-02-12

    Purpose: The purpose of this paper is to appraise Pai and Chary's (2016) conceptual framework for measuring patient-perceived hospital service quality (HSQ). Design/methodology/approach: A structured questionnaire was used to obtain data from teaching, public and corporate hospital patients. Several tests were conducted to assess the instrument's reliability and validity. Pai and Chary's (2016) nine dimensions for measuring HSQ were examined in this paper. Findings: The tests confirm that Pai and Chary's (2016) conceptual framework is reliable and valid. The study also establishes that the nine dimensions measure HSQ. Practical implications: The framework empowers managers to assess service quality in any hospital setting, corporate, public and teaching, using an approach that is superior to the existing HSQ scales. Originality/value: This paper helps researchers and practitioners to assess HSQ from patient perspectives in any hospital setting.

  8. A Smart Unconscious? Procedural Origins of Automatic Partner Attitudes in Marriage

    PubMed Central

    Murray, Sandra L.; Holmes, John G.; Pinkus, Rebecca T.

    2010-01-01

    The paper examines potential origins of automatic (i.e., unconscious) attitudes toward one’s marital partner. It tests the hypothesis that early experiences in conflict-of-interest situations predict one’s later automatic inclination to approach (or avoid) the partner. A longitudinal study linked daily experiences in conflict-of-interest situations in the initial months of new marriages to automatic evaluations of the partner assessed four years later using the Implicit Associations Test. The results revealed that partners who were initially (1) treated less responsively and (2) evidenced more self-protective and less connectedness-promoting “if-then” contingencies in their thoughts and behavior later evidenced less positive automatic partner attitudes. However, these factors did not predict changes in love, satisfaction, or explicit beliefs about the partner. The findings hint at the existence of a “smart” relationship unconscious that captures behavioral realities conscious reflection can miss. PMID:20526450

  9. Image quality classification for DR screening using deep learning.

    PubMed

    FengLi Yu; Jing Sun; Annan Li; Jun Cheng; Cheng Wan; Jiang Liu

    2017-07-01

    The quality of input images significantly affects the outcome of automated diabetic retinopathy (DR) screening systems. Unlike previous methods that consider only simple low-level features such as hand-crafted geometric and structural features, in this paper we propose a novel method for retinal image quality classification (IQC) that applies computational algorithms imitating the working of the human visual system. The proposed algorithm combines unsupervised features from a saliency map with supervised features from convolutional neural networks (CNN), which are fed to an SVM to automatically separate high-quality from poor-quality retinal fundus images. We demonstrate the superior performance of our proposed algorithm on a large retinal fundus image dataset, where it achieves higher accuracy than other methods. Although retinal images are used in this study, the methodology is applicable to the image quality assessment and enhancement of other types of medical images.
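
    The feature fusion described above, unsupervised saliency features concatenated with CNN features and fed to an SVM, can be sketched as below. The feature arrays are random placeholders standing in for the real saliency and CNN extraction stages, and the SVM settings are illustrative.

    ```python
    # Sketch: fuse two feature blocks (e.g. saliency-map statistics and CNN
    # activations, both assumed pre-extracted) and train an SVM quality classifier.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    n_images = 200
    saliency_feats = rng.normal(size=(n_images, 32))    # placeholder features
    cnn_feats = rng.normal(size=(n_images, 128))        # placeholder features
    labels = rng.integers(0, 2, size=n_images)          # 1 = good quality, 0 = poor

    X = np.hstack([saliency_feats, cnn_feats])          # simple feature-level fusion
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, labels, cv=5)
    print(f"5-fold accuracy: {scores.mean():.2f}")
    ```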

  10. Quality assurance in the production of pipe fittings by automatic laser-based material identification

    NASA Astrophysics Data System (ADS)

    Moench, Ingo; Peter, Laszlo; Priem, Roland; Sturm, Volker; Noll, Reinhard

    1999-09-01

    In plants of the chemical, nuclear and off-shore industry, application specific high-alloyed steels are used for pipe fittings. Mixing of different steel grades can lead to corrosion with severe consequential damages. Growing quality requirements and environmental responsibilities demand a 100% material control in the production of the pipe fittings. Therefore, LIFT, an automatic inspection machine, was developed to insure against any mix of material grades. LIFT is able to identify more than 30 different steel grades. The inspection method is based on Laser-Induced Breakdown Spectrometry (LIBS). An expert system, which can be easily trained and recalibrated, was developed for the data evaluation. The result of the material inspection is transferred to an external handling system via a PLC interface. The duration of the inspection process is 2 seconds. The graphical user interface was developed with respect to the requirements of an unskilled operator. The software is based on a realtime operating system and provides a safe and reliable operation. An interface for the remote maintenance by modem enables a fast operational support. Logged data are retrieved and evaluated. This is the basis for an adaptive improvement of the configuration of LIFT with respect to changing requirements in the production line. Within the first six months of routine operation, about 50000 pipe fittings were inspected.

  11. Automatic P-S phase picking procedure based on Kurtosis: Vanuatu region case study

    NASA Astrophysics Data System (ADS)

    Baillard, C.; Crawford, W. C.; Ballu, V.; Hibert, C.

    2012-12-01

    Automatic P and S phase picking is indispensable for large seismological data sets. Robust algorithms based on comparison of short-term and long-term averages (Allen, 1982) are commonly used for event detection, but further improvements can be made in phase identification and picking. We present a picking scheme that applies, in sequence, kurtosis-derived characteristic functions (CFs) and eigenvalue decompositions to 3-component seismic data to independently pick P and S arrivals. When computed over a sliding window of the signal, a sudden increase in the CF reveals a transition from a Gaussian to a non-Gaussian distribution, characterizing the phase onset (Saragiotis, 2002). One advantage of the method is that it requires far fewer adjustable parameters than competing methods. We modified the kurtosis CF to improve pick precision by computing the CF over several frequency bandwidths, window sizes and smoothing parameters. Once phases were picked, we determined the onset type (P or S) using polarization parameters (rectilinearity, azimuth and dip) calculated from eigenvalue decompositions of the covariance matrix (Cichowicz, 1993). Finally, we removed bad picks using a clustering procedure and the signal-to-noise ratio (SNR). A pick quality index was also assigned based on the SNR value. Amplitude calculation is integrated into the procedure to enable automatic magnitude calculation. We applied this procedure to data from a network of 30 wideband seismometers (including 10 ocean-bottom seismometers) in Vanuatu that ran for 10 months, from May 2008 to February 2009. We manually picked the first 172 events of June, whose local magnitudes range from 0.7 to 3.7, making a total of 1601 picks: 1094 P and 507 S. We then applied our automatic picking to the same dataset. 70% of the manually picked onsets were picked automatically. For P-picks, the difference between manual and automatic picks is 0.01 ± 0.08 s overall; for the best quality picks (quality index 0: 64
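
    A minimal sketch of a kurtosis characteristic function on a sliding window, with an onset pick at the steepest CF increase, is given below. The window length, the synthetic trace and the simple argmax-of-gradient pick rule are illustrative assumptions; the published scheme additionally uses multiple bandwidths, window sizes, smoothing parameters and polarization analysis.

    ```python
    # Sketch: sliding-window kurtosis characteristic function (CF) and a simple
    # onset pick at the steepest CF increase. Window length is illustrative.
    import numpy as np
    from scipy.stats import kurtosis

    def kurtosis_cf(trace: np.ndarray, window: int = 100) -> np.ndarray:
        """Kurtosis of the trailing `window` samples at each sample index."""
        cf = np.zeros_like(trace, dtype=float)
        for i in range(window, len(trace)):
            cf[i] = kurtosis(trace[i - window:i])
        return cf

    def pick_onset(trace: np.ndarray, window: int = 100) -> int:
        cf = kurtosis_cf(trace, window)
        return int(np.argmax(np.diff(cf)))     # sample index of the sharpest CF jump

    # Synthetic trace: Gaussian noise with an impulsive arrival near sample 600.
    rng = np.random.default_rng(3)
    trace = rng.normal(scale=1.0, size=1200)
    trace[600:] += 6.0 * np.exp(-np.arange(600) / 80.0) * rng.normal(size=600)
    print("picked onset near sample:", pick_onset(trace))
    ```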

  12. Health-related quality of life assessments in osteoarthritis during NSAID treatment.

    PubMed

    de Bock, G H; Hermans, J; van Marwijk, H W; Kaptein, A A; Mulder, J D

    1996-08-01

    There is some evidence that nabumetone (1000 mg once daily), in comparison with piroxicam (20 mg once daily), in patients with OA in general practice is associated with a lower incidence and less severe occurrence of stomach pain, but with more withdrawals due to lack of efficacy. The aim of this analysis was to investigate whether these differences are reflected in health-related quality of life assessments. Patients (n = 198) included in this study were selected in general practice according to a protocol. The patients were randomized and treated for a period of six weeks. Clinical assessments were performed by the general practitioner (GP) during treatment. The Sickness Impact Profile (SIP), the Activities of Daily Living (ADL), and a pain questionnaire were filled out by the patients before and after treatment. As measured with the SIP, the ADL and the pain questionnaire, there were no significant differences between nabumetone and piroxicam. The correlations between (changes in) patient assessments and (changes in) clinical assessments were low. The differences between the two drugs regarding withdrawals and adverse events were not reflected in patient health-related quality of life assessments. There was a low correlation between patient health-related quality of life assessments and clinical assessments. To get a complete picture of the efficacy and safety of a drug, patient health-related quality of life assessments should be part of a clinical trial.

  13. Control of the TSU 2-m automatic telescope

    NASA Astrophysics Data System (ADS)

    Eaton, Joel A.; Williamson, Michael H.

    2004-09-01

    Tennessee State University is operating a 2-m automatic telescope for high-dispersion spectroscopy. The alt-azimuth telescope is fiber-coupled to a conventional echelle spectrograph with two resolutions (R=30,000 and 70,000). We control this instrument with four computers running Linux and communicating over Ethernet through the UDP protocol. A computer physically located on the telescope handles the acquisition and tracking of stars. We avoid the need for real-time programming in this application by periodically latching the positions of the axes in a commercial motion controller and the time in a GPS receiver. A second (spectrograph) computer sets up the spectrograph and runs its CCD, a third (roof) computer controls the roll-off roof and front flap of the telescope enclosure, and the fourth (executive) computer makes decisions about which stars to observe and when to close the observatory for bad weather. The only human intervention in the telescope's operation involves changing the observing program, copying data back to TSU, and running quality-control checks on the data. It has been running reliably in this completely automatic, unattended mode for more than a year, with all day-to-day administration carried out over the Internet. To support automatic operation, we have written a number of useful tools to predict and analyze what the telescope does. These include a simulator that predicts roughly how the telescope will operate on a given night, a quality-control program that parses logfiles from the telescope and identifies problems, and a rescheduling program that calculates new priorities to keep the frequency of observation for the various stars roughly as desired. We have also set up a database to keep track of the tens of thousands of spectra we expect to obtain each year.

  14. Guidance on Data Quality Assessment for Life Cycle Inventory ...

    EPA Pesticide Factsheets

    Data quality within Life Cycle Assessment (LCA) is a significant issue for the future support and development of LCA as a decision support tool and its wider adoption within industry. In response to current data quality standards such as the ISO 14000 series, various entities within the LCA community have developed different methodologies to address and communicate the data quality of Life Cycle Inventory (LCI) data. Despite advances in this field, the LCA community is still plagued by the lack of reproducible data quality results and documentation. To address these issues, US EPA has created this guidance in order to further support reproducible life cycle inventory data quality results and to inform users of the proper application of the US EPA supported data quality system. The work for this report was begun in December 2014 and completed as of April 2016. The updated data quality system includes a novel approach to the pedigree matrix by addressing data quality at the flow and the process level. Flow level indicators address source reliability, temporal correlation, geographic correlation, technological correlation and data sampling methods. The process level indicators address the level of review the unit process has undergone and its completeness. This guidance is designed to be updatable as part of the LCA Research Center’s continuing commitment to data quality advancements. Life cycle assessment is increasingly being used as a tool to identify areas of
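
    The flow-level pedigree indicators listed above can be represented as a small data structure; the sketch below uses hypothetical 1-5 scores and a plain unweighted mean, which may differ from the scoring and aggregation rules in the actual guidance.

    ```python
    # Sketch: flow-level data quality pedigree scores (1 = best, 5 = worst),
    # following the indicator names in the abstract; the scores are hypothetical.
    from dataclasses import dataclass, asdict

    @dataclass
    class FlowPedigree:
        source_reliability: int
        temporal_correlation: int
        geographic_correlation: int
        technological_correlation: int
        data_sampling_methods: int

        def summary(self) -> float:
            """Simple unweighted mean score; real systems may weight indicators."""
            scores = list(asdict(self).values())
            return sum(scores) / len(scores)

    electricity_input = FlowPedigree(source_reliability=2, temporal_correlation=1,
                                     geographic_correlation=3,
                                     technological_correlation=2,
                                     data_sampling_methods=2)
    print("mean pedigree score:", electricity_input.summary())
    ```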

  15. Service Quality and Customer Satisfaction: An Assessment and Future Directions.

    ERIC Educational Resources Information Center

    Hernon, Peter; Nitecki, Danuta A.; Altman, Ellen

    1999-01-01

    Reviews the literature of library and information science to examine issues related to service quality and customer satisfaction in academic libraries. Discusses assessment, the application of a business model to higher education, a multiple constituency approach, decision areas regarding service quality, resistance to service quality, and future…

  16. Automatic assessment of volume asymmetries applied to hip abductor muscles in patients with hip arthroplasty

    NASA Astrophysics Data System (ADS)

    Klemt, Christian; Modat, Marc; Pichat, Jonas; Cardoso, M. J.; Henckel, Joahnn; Hart, Alister; Ourselin, Sebastien

    2015-03-01

    Metal-on-metal (MoM) hip arthroplasties have been utilised over the last 15 years to restore hip function for 1.5 million patients worldwide. Although widely used, this hip arthroplasty releases metal wear debris which leads to muscle atrophy. The degree of muscle wastage differs across patients, ranging from mild to severe. The long-term outcomes for patients with MoM hip arthroplasty are worse with increasing degrees of muscle atrophy, highlighting the need to automatically segment pathological muscles. The automated segmentation of pathological soft tissues is challenging, as these lack distinct boundaries and differ morphologically across subjects. As a result, no method reported in the literature has been successfully applied to automatically segment pathological muscles. We propose the first automated framework to delineate severely atrophied muscles by applying a novel automated segmentation propagation framework to patients with MoM hip arthroplasty. The proposed algorithm was used to automatically quantify muscle wastage in these patients.

  17. School Indoor Air Quality Assessment and Program Implementation.

    ERIC Educational Resources Information Center

    Prill, R.; Blake, D.; Hales, D.

    This paper describes the effectiveness of a three-step indoor air quality (IAQ) program implemented by 156 schools in the states of Washington and Idaho during the 2000-2001 school year. An experienced IAQ/building science specialist conducted walk-through assessments at each school. These assessments documented deficiencies and served as an…

  18. Human visual system consistent quality assessment for remote sensing image fusion

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Huang, Junyi; Liu, Shuguang; Li, Huali; Zhou, Qiming; Liu, Junchen

    2015-07-01

    Quality assessment for image fusion is essential for remote sensing applications. Commonly used indices require a high-spatial-resolution multispectral (MS) image as a reference, which is not always readily available. Meanwhile, fusion quality assessments using these indices may not be consistent with the Human Visual System (HVS). To overcome this requirement and inconsistency, this paper proposes an HVS-consistent image fusion quality assessment index that works at the highest resolution without a reference MS image, using Gaussian Scale Space (GSS) technology to simulate the HVS. The spatial details and spectral information of the original and fused images are first separated in GSS, and their qualities are evaluated using the proposed spatial and spectral quality indices, respectively. The overall quality is determined without a reference MS image by combining the two proposed indices. Experimental results on various remote sensing images indicate that the proposed index is more consistent with HVS evaluation than other widely used indices that may or may not require reference images.
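
    The basic operation behind a Gaussian scale space, separating a band into a low-frequency approximation (carrying spectral information) and a residual detail component (carrying spatial detail) at several scales, can be sketched as follows. The scales and the random placeholder band are assumptions for illustration; the paper's actual spatial and spectral quality indices are not reproduced here.

    ```python
    # Sketch: split an image band into low-frequency ("spectral") and residual
    # detail ("spatial") components with Gaussian smoothing at increasing scales.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def scale_space_split(band: np.ndarray, sigmas=(1, 2, 4, 8)):
        """Return (approximations, details) across a set of Gaussian scales."""
        approximations, details = [], []
        for sigma in sigmas:
            low = gaussian_filter(band, sigma=sigma)
            approximations.append(low)
            details.append(band - low)       # what smoothing removed at this scale
        return approximations, details

    rng = np.random.default_rng(7)
    band = rng.random((128, 128))            # placeholder fused-image band
    approx, detail = scale_space_split(band)
    for sigma, d in zip((1, 2, 4, 8), detail):
        print(f"sigma={sigma}: residual energy {np.sum(d ** 2):.1f}")
    ```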

  19. Water Quality Assessment of Ayeyarwady River in Myanmar

    NASA Astrophysics Data System (ADS)

    Thatoe Nwe Win, Thanda; Bogaard, Thom; van de Giesen, Nick

    2015-04-01

    Myanmar's socio-economic activities, urbanisation, industrial operations and agricultural production have increased rapidly in recent years. With increasing socio-economic development and climate change impacts, there is a growing threat to the quantity and quality of water resources. In Myanmar, some of the drinking water supply still comes from unimproved sources, including rivers. The Ayeyarwady River is the main river in Myanmar, draining most of the country's area. The use of chemical fertilizer in agriculture, mining activities in the catchment area, wastewater effluents from industries and communities, and other development activities generate pollutants of different natures. Water quality monitoring is therefore of utmost importance. In Myanmar, there are many government organizations involved in water quality management, and each monitors water quality for its own purposes. The monitoring is haphazard, short-term and based on individual interest and the available equipment; it is not properly coordinated, and a quality assurance programme is not incorporated in most of the work. As a result, comprehensive data on the water quality of rivers in Myanmar are not available. To provide basic information, action is needed at all management levels. The need for comprehensive and accurate assessments of trends in water quality has been recognized, and for such assessments, reliable monitoring data are essential. The objective of our work is to set up a multi-objective surface water quality monitoring programme. The need for a scientifically designed network to monitor Ayeyarwady river water quality is obvious, as only limited and scattered data on water quality are available. However, the set-up should also take into account the current socio-economic situation and should be flexible enough to adjust after the first years of monitoring. Additionally, a state-of-the-art baseline river water quality sampling program is required which

  20. Automatic learning-based beam angle selection for thoracic IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner's clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary