Science.gov

Sample records for automatic quality assessment

  1. Automatic quality assessment of planetary images

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, P.; Muller, J.-P.

    2015-10-01

    A significant fraction of planetary images are corrupted beyond the point that much scientific meaning can be extracted. For example, transmission errors result in missing data which is unrecoverable. The available planetary image datasets include many such "bad data", which both occupy valuable scientific storage resources and create false impressions about planetary image availability for specific planetary objects or target areas. In this work, we demonstrate a pipeline that we have developed to automatically assess the quality of planetary images. Additionally, this method discriminates between different types of image degradation, such as low quality originating from camera flaws or low quality triggered by atmospheric conditions. Examples of quality assessment results for Viking Orbiter imagery will also be presented.
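
    One symptom of unrecoverable transmission loss is blank scanlines. As a minimal illustration of this kind of check (an assumed heuristic, not the authors' pipeline), the fraction of effectively empty image rows can be used to flag "bad data":

    ```python
    import numpy as np

    def missing_line_fraction(img: np.ndarray, var_thresh: float = 1e-6) -> float:
        """Fraction of image rows that are effectively constant, a crude
        proxy for scanlines lost to transmission errors."""
        row_var = img.astype(np.float64).var(axis=1)
        return float((row_var < var_thresh).mean())

    # Simulate a dropout: ~10% blank rows should be flagged.
    img = np.random.randint(0, 255, (1024, 1024)).astype(np.float64)
    img[100:200, :] = 0.0
    print(missing_line_fraction(img))  # ~0.10
    ```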

  2. Automatic quality assessment protocol for MRI equipment.

    PubMed

    Bourel, P; Gibon, D; Coste, E; Daanen, V; Rousseau, J

    1999-12-01

    The authors have developed a protocol and software for the quality assessment of MRI equipment with a commercial test object. Automatic image analysis consists of detecting surfaces and objects, defining regions of interest, acquiring reference point coordinates and establishing gray level profiles. Signal-to-noise ratio, image uniformity, geometrical distortion, slice thickness, slice profile, and spatial resolution are checked. The results are periodically analyzed to evaluate possible drifts with time. The measurements are performed weekly on three MRI scanners made by the Siemens Company (VISION 1.5T, EXPERT 1.0T, and OPEN 0.2T). The results obtained for the three scanners over approximately 3.5 years are presented, analyzed, and compared. PMID:10619255
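
    The signal-to-noise ratio and uniformity checks named here have standard single-image formulations; a minimal sketch (the ROI masks are assumptions, and scanner-specific corrections such as the Rayleigh noise factor are omitted):

    ```python
    import numpy as np

    def snr(image: np.ndarray, signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
        """Single-image SNR estimate: mean signal in a uniform ROI over the
        noise standard deviation in an artefact-free background ROI."""
        return float(image[signal_roi].mean() / image[background_roi].std())

    def integral_uniformity(image: np.ndarray, roi: np.ndarray) -> float:
        """NEMA-style percent integral uniformity over a central ROI."""
        s_max, s_min = image[roi].max(), image[roi].min()
        return 100.0 * (1.0 - (s_max - s_min) / (s_max + s_min))
    ```

    Tracking such values per weekly measurement then reduces to plotting them against time and testing for drift.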

  3. Automatic no-reference image quality assessment.

    PubMed

    Li, Hongjun; Hu, Wei; Xu, Zi-Neng

    2016-01-01

    No-reference image quality assessment aims to predict the visual quality of distorted images without examining the original image as a reference. Most no-reference image quality metrics proposed so far are designed for one or a set of predefined distortion types and are unlikely to generalize to images degraded by other types of distortion. There is a strong need for no-reference image quality assessment methods that are applicable across distortions. In this paper, the authors propose a no-reference image quality assessment method based on a natural image statistic model in the wavelet transform domain. A generalized Gaussian density model is employed to summarize the marginal distribution of wavelet coefficients of the test images, so that the model parameters can be used to evaluate image quality. The proposed algorithm is tested on three large-scale benchmark databases. Experimental results demonstrate that the proposed algorithm is easy to implement and computationally efficient. Furthermore, our method can be applied to many well-known types of image distortion and achieves good prediction performance. PMID:27468398
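
    Fitting a generalized Gaussian density to wavelet subband coefficients is a standard natural-scene-statistics step; a minimal sketch of that step (the wavelet choice and number of levels are assumptions, and the paper's mapping from parameters to a quality score is not reproduced):

    ```python
    import numpy as np
    import pywt
    from scipy.stats import gennorm

    def ggd_wavelet_features(img, wavelet="db4", levels=3):
        """Fit a generalized Gaussian density (scipy's gennorm) to each
        wavelet detail subband; the (shape, scale) pairs are the features."""
        coeffs = pywt.wavedec2(np.asarray(img, dtype=np.float64), wavelet, level=levels)
        feats = []
        for detail_bands in coeffs[1:]:          # skip the approximation band
            for band in detail_bands:            # horizontal, vertical, diagonal
                shape, _, scale = gennorm.fit(band.ravel(), floc=0.0)
                feats.extend([shape, scale])
        return np.array(feats)
    ```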

  4. Algorithm for automatic forced spirometry quality assessment: technological developments.

    PubMed

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community. PMID:25551213

  5. Algorithm for Automatic Forced Spirometry Quality Assessment: Technological Developments

    PubMed Central

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community. PMID:25551213

  6. Automatic MeSH term assignment and quality assessment.

    PubMed Central

    Kim, W.; Aronson, A. R.; Wilbur, W. J.

    2001-01-01

    For computational purposes documents or other objects are most often represented by a collection of individual attributes that may be strings or numbers. Such attributes are often called features and success in solving a given problem can depend critically on the nature of the features selected to represent documents. Feature selection has received considerable attention in the machine learning literature. In the area of document retrieval we refer to feature selection as indexing. Indexing has not traditionally been evaluated by the same methods used in machine learning feature selection. Here we show how indexing quality may be evaluated in a machine learning setting and apply this methodology to results of the Indexing Initiative at the National Library of Medicine. PMID:11825203

  7. Automatic quality assessment and peak identification of auditory brainstem responses with fitted parametric peaks.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Alvarez, Isaac; Segura, Jose Carlos; Thornton, A Roger D; Sainz, Manuel; Vargas, Jose Luis

    2014-05-01

    The recording of the auditory brainstem response (ABR) is used worldwide for hearing screening purposes. In this process, a precise estimation of the most relevant components is essential for an accurate interpretation of these signals. This evaluation is usually carried out subjectively by an audiologist. However, the use of automatic methods for this purpose is now encouraged in order to reduce human evaluation biases and ensure uniformity among test conditions, patients, and screening personnel. This article describes a new method that performs automatic quality assessment and identification of the peaks, the fitted parametric peaks (FPP). This method is based on the use of synthesized peaks that are adjusted to the ABR response. The FPP is validated, on one hand, by an analysis of amplitudes and latencies measured manually by an audiologist and automatically by the FPP method in ABR signals recorded at different stimulation rates; and, on the other hand, by contrasting the performance of the FPP method with automatic evaluation techniques based on the correlation coefficient, FSP, and cross-correlation with a predefined template waveform, comparing the automatic quality evaluations of these methods with subjective evaluations provided by five experienced evaluators on a set of ABR signals of different quality. The results of this study suggest (a) that the FPP method can be used to provide an accurate parameterization of the peaks in terms of amplitude, latency, and width, and (b) that the FPP remains the method that best approximates the averaged subjective quality evaluation, as well as providing the best results in terms of sensitivity and specificity in ABR signal validation. The significance of these findings and the clinical value of the FPP method are highlighted in this paper. PMID:24661606
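
    The core idea of fitting a synthesized, parameterized peak to the response can be illustrated with a least-squares fit; the bell-shaped model below is an assumption for illustration, not the actual FPP waveform:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model_peak(t, amp, latency, width):
        """Synthetic peak parameterized by the three quantities the method
        reports: amplitude, latency, and width."""
        return amp * np.exp(-0.5 * ((t - latency) / width) ** 2)

    t = np.linspace(0.0, 10.0, 500)                                         # ms
    trace = model_peak(t, 0.4, 5.6, 0.5) + 0.05 * np.random.randn(t.size)   # toy ABR
    (amp, latency, width), _ = curve_fit(model_peak, t, trace, p0=(0.3, 5.0, 1.0))
    ```

    The residual of such a fit also yields a natural quality score: the better the synthesized peak explains the recording, the higher the assessed quality.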

  8. Automatic Assessment of Pathological Voice Quality Using Higher-Order Statistics in the LPC Residual Domain

    NASA Astrophysics Data System (ADS)

    Lee, Ji Yeoun; Hahn, Minsoo

    2010-12-01

    A preprocessing scheme based on the linear prediction coefficient (LPC) residual is applied to higher-order statistics (HOSs) for automatic assessment of overall pathological voice quality. The normalized skewness and kurtosis are estimated from the LPC residual and show statistically meaningful distributions that characterize pathological voice quality. Eighty-three voice samples of sustained vowel /a/ phonation are used in this study and are independently assessed by a speech and language therapist (SALT) according to the grade of severity of dysphonia on the GRBAS scale. These are used to train and test a classification and regression tree (CART). The best result is obtained using an optimal decision tree implemented with a combination of the normalized skewness and kurtosis, with an accuracy of 92.9%. It is concluded that the method can be used as an assessment tool, providing a valuable aid to the SALT during clinical evaluation of overall pathological voice quality.
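
    Extracting the LPC residual and its higher-order statistics takes only a few lines; a sketch under assumed parameters (12th-order LPC, a hypothetical file name):

    ```python
    import librosa
    from scipy.signal import lfilter
    from scipy.stats import kurtosis, skew

    # Hypothetical recording of a sustained /a/ phonation.
    y, sr = librosa.load("vowel_a.wav", sr=None)
    a = librosa.lpc(y, order=12)        # prediction coefficients, a[0] == 1
    residual = lfilter(a, [1.0], y)     # inverse filtering gives the LPC residual
    features = [skew(residual), kurtosis(residual)]  # inputs to a CART classifier
    ```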

  9. Particle quality assessment and sorting for automatic and semiautomatic particle-picking techniques.

    PubMed

    Vargas, J; Abrishami, V; Marabini, R; de la Rosa-Trevín, J M; Zaldivar, A; Carazo, J M; Sorzano, C O S

    2013-09-01

    Three-dimensional reconstruction of biological specimens using electron microscopy by single particle methodologies requires the identification and extraction of the imaged particles from the acquired micrographs. Automatic and semiautomatic particle selection approaches can localize these particles, minimizing user interaction, but at the cost of selecting a non-negligible number of incorrect particles, which can corrupt the final three-dimensional reconstruction. In this work, we present a novel particle quality assessment and sorting method that can separate most erroneously picked particles from correct ones. The proposed method is based on multivariate statistical analysis of a particle set that has been picked previously using any automatic or manual approach. The new method uses different sets of particle descriptors, which are morphology-based, histogram-based, and signal-to-noise-based. We have tested the proposed algorithm with experimental data, obtaining very satisfactory results. The algorithm is freely available as a part of the Xmipp 3.0 package [http://xmipp.cnb.csic.es]. PMID:23933392
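
    One generic way to sort a picked particle set by multivariate descriptor statistics is a Mahalanobis-style outlier score; this is an assumed illustration of the idea, not the specific analysis implemented in Xmipp:

    ```python
    import numpy as np

    def sorting_scores(descriptors: np.ndarray) -> np.ndarray:
        """Mahalanobis distance of each particle's descriptor vector from the
        set mean; large scores suggest erroneous picks to review or discard."""
        X = np.asarray(descriptors, dtype=np.float64)   # (n_particles, n_features)
        centered = X - X.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
        return np.sqrt(np.einsum("ij,jk,ik->i", centered, cov_inv, centered))
    ```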

  10. Polarization transformation as an algorithm for automatic generalization and quality assessment

    NASA Astrophysics Data System (ADS)

    Qian, Haizhong; Meng, Liqiu

    2007-06-01

    For decades it has been a dream of cartographers to computationally mimic the generalization processes in human brains for the derivation of various small-scale target maps or databases from a large-scale source map or database. This paper addresses in a systematic way the polarization transformation (PT) - a new algorithm that serves both the purpose of automatic generalization of discrete features and that of quality assurance. By means of PT, two-dimensional point clusters or line networks in the Cartesian system can be transformed into a polar coordinate system, which then can be unfolded as a single spectrum line r = f(α), where r and α stand for the polar radius and the polar angle respectively. After the transformation, the original features will correspond to nodes on the spectrum line delimited between 0° and 360° along the horizontal axis, and between the minimum and maximum polar radius along the vertical axis. Since PT is a lossless transformation, it allows a straightforward analysis and comparison of the original and generalized distributions, so that automatic generalization and quality assurance can be done in this way. Examples illustrate that the PT algorithm meets the requirements of generalizing discrete spatial features and puts the process on a more rigorous footing.
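
    A minimal sketch of the forward transform (the pole is taken here as the point-set centroid, which is an assumption; the abstract does not fix that choice):

    ```python
    import numpy as np

    def polarization_transform(points, pole=None):
        """Unfold 2-D points into the spectrum line r = f(alpha): polar radius
        as a function of polar angle about a pole, sorted by angle."""
        pts = np.asarray(points, dtype=np.float64)
        pole = pts.mean(axis=0) if pole is None else np.asarray(pole, dtype=np.float64)
        dx, dy = (pts - pole).T
        alpha = np.degrees(np.arctan2(dy, dx)) % 360.0   # 0..360 degrees
        r = np.hypot(dx, dy)
        order = np.argsort(alpha)
        return alpha[order], r[order]
    ```

    Because the mapping is lossless, the spectrum lines of the source and the generalized point set can be compared directly to check how well the generalization preserves the original distribution.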

  11. Groupwise conditional random forests for automatic shape classification and contour quality assessment in radiotherapy planning.

    PubMed

    McIntosh, Chris; Svistoun, Igor; Purdie, Thomas G

    2013-06-01

    Radiation therapy is used to treat cancer patients around the world. High quality treatment plans maximally radiate the targets while minimally radiating healthy organs at risk. In order to judge plan quality and safety, segmentations of the targets and organs at risk are created, and the amount of radiation that will be delivered to each structure is estimated prior to treatment. If the targets or organs at risk are mislabelled, or the segmentations are of poor quality, the safety of the radiation doses will be erroneously reviewed and an unsafe plan could proceed. We propose a technique to automatically label groups of segmentations of different structures from a radiation therapy plan for the joint purposes of providing quality assurance and data mining. Given one or more segmentations and an associated image, we seek to assign medically meaningful labels to each segmentation and report the confidence of that label. Our method uses random forests to learn joint distributions over the training features, and then exploits a set of learned potential group configurations to build a conditional random field (CRF) that ensures the assignment of labels is consistent across the group of segmentations. The CRF is then solved via a constrained assignment problem. We validate our method on 1574 plans, consisting of 17,579 segmentations, demonstrating an overall classification accuracy of 91.58%. Our results also demonstrate the stability of random forests with respect to tree depth and the number of splitting variables in large data sets. PMID:23475352

  12. Assessing the impact of graphical quality on automatic text recognition in digital maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang

    2016-08-01

    Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With the advancement in computational power and algorithm design, map processing systems have been considerably improved over the last decade. However, the fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results would not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system, accordingly. These issues could slow down the further advancement of map processing techniques as such unsuccessful attempts create a discouraged user community, and less sophisticated tools would be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (that can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.

  13. A conceptual study of automatic and semi-automatic quality assurance techniques for ground image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.

  14. Back-and-Forth Methodology for Objective Voice Quality Assessment: From/to Expert Knowledge to/from Automatic Classification of Dysphonia

    NASA Astrophysics Data System (ADS)

    Fredouille, Corinne; Pouchoulin, Gilles; Ghio, Alain; Revis, Joana; Bonastre, Jean-François; Giovanni, Antoine

    2009-12-01

    This paper addresses voice disorder assessment. It proposes an original back-and-forth methodology involving an automatic classification system as well as the knowledge of human experts (machine learning experts, phoneticians, and pathologists). The goal of this methodology is to bring a better understanding of acoustic phenomena related to dysphonia. The automatic system was validated on a dysphonic corpus (80 female voices), rated according to the GRBAS perceptual scale by an expert jury. First, focusing on the frequency domain, the classification system showed the relevance of the 0-3000 Hz frequency band for the classification task based on the GRBAS scale. Next, an automatic phonemic analysis underlined the significance of consonants, and more surprisingly of unvoiced consonants, for the same classification task. Submitted to the human experts, these observations led to a manual analysis of unvoiced plosives, which highlighted a lengthening of voice onset time (VOT) with dysphonia severity, validated by a preliminary statistical analysis.

  15. The SIETTE Automatic Assessment Environment

    ERIC Educational Resources Information Center

    Conejo, Ricardo; Guzmán, Eduardo; Trella, Monica

    2016-01-01

    This article describes the evolution and current state of the domain-independent Siette assessment environment. Siette supports different assessment methods--including classical test theory, item response theory, and computer adaptive testing--and integrates them with multidimensional student models used by intelligent educational systems.…

  16. Automatization of Student Assessment Using Multimedia Technology.

    ERIC Educational Resources Information Center

    Taniar, David; Rahayu, Wenny

    Most use of multimedia technology in teaching and learning to date has emphasized the teaching aspect only. An application of multimedia in examinations has been neglected. This paper addresses how multimedia technology can be applied to the automatization of assessment, by proposing a prototype of a multimedia question bank, which is able to…

  17. Automatic Assessment of 3D Modeling Exams

    ERIC Educational Resources Information Center

    Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.

    2012-01-01

    Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…

  18. Self-assessing target with automatic feedback

    SciTech Connect

    Larkin, Stephen W.; Kramer, Robert L.

    2004-03-02

    A self-assessing target with four quadrants and a method for its use. Each quadrant contains possible causes for why shots are going into that particular quadrant rather than the center mass of the target. Each possible cause is followed by a solution intended to help the marksman correct the problem causing the marksman to shoot in that particular area. In addition, the self-assessing target contains possible causes of general shooting errors and solutions to them. The automatic feedback, with instant suggestions and corrections, enables shooters to improve their marksmanship.

  19. Toward automatic recognition of high quality clinical evidence.

    PubMed

    Kilicoglu, Halil; Demner-Fushman, Dina; Rindflesch, Thomas C; Wilczynski, Nancy L; Haynes, R Brian

    2008-01-01

    Automatic methods for recognizing topically relevant documents supported by high quality research can assist clinicians in practicing evidence-based medicine. We approach the challenge of identifying articles with high quality clinical evidence as a binary classification problem. Combining predictions from supervised machine learning methods and using deep semantic features, we achieve 73.5% precision and 67% recall. PMID:18998881

  20. Automatic Test-Based Assessment of Programming: A Review

    ERIC Educational Resources Information Center

    Douce, Christopher; Livingstone, David; Orwell, James

    2005-01-01

    Systems that automatically assess student programming assignments have been designed and used for over forty years. Systems that objectively test and mark student programming work were developed simultaneously with programming assessment in the computer science curriculum. This article reviews a number of influential automatic assessment systems,…

  1. Automatic assessment of ultrasound image usability

    NASA Astrophysics Data System (ADS)

    Valente, Luca; Funka-Lea, Gareth; Stoll, Jeffrey

    2011-03-01

    We present a novel and efficient approach for evaluating the quality of ultrasound images. Image acquisition is sensitive to skin contact and transducer orientation and requires both time and technical skill to be done properly. Images commonly suffer degradation due to acoustic shadows and signal attenuation, which present as regions of low signal intensity masking anatomical details and making the images partly or totally unusable. As ultrasound image acquisition and analysis become increasingly automated, it is beneficial to also automate the estimation of image quality. Towards this end, we present an algorithm that classifies regions of an image as usable or unusable. Example applications of this algorithm include improved compounding of free-hand 3D ultrasound volumes by eliminating unusable data and improved automatic feature detection by limiting detection to usable areas only. The algorithm operates in two steps. First, it classifies the image into bright areas, likely to have image content, and dark areas, likely to have no content. Second, it classifies the dark areas into unusable (i.e., due to shadowing and/or signal loss) and usable (i.e., anatomically accurate dark regions, such as within a blood vessel) sub-areas. The classification considers several factors, including statistical information, gradient intensity, and geometric properties such as shape and relative position. Relative weighting of the factors was obtained by training a Support Vector Machine. Classification results for both human and phantom images are presented and compared to manual classifications. The method achieves 91% sensitivity and 91% specificity for usable regions of human scans.
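
    A sketch of the first step, plus the kinds of per-region descriptors the second step could feed to a classifier (the actual features and SVM weighting are the paper's and are not reproduced here):

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    def dark_region_descriptors(img: np.ndarray):
        """Step 1: split a B-mode image into bright (content) and dark areas.
        Then compute simple statistical/geometric descriptors per dark region,
        of the sort an SVM could classify as usable vs. unusable."""
        dark = img < threshold_otsu(img)
        rows = []
        for region in regionprops(label(dark), intensity_image=img):
            rows.append({
                "area": region.area,
                "eccentricity": region.eccentricity,      # shape cue
                "mean_intensity": region.mean_intensity,  # statistical cue
                "depth": region.centroid[0],              # relative position cue
            })
        return rows
    ```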

  2. [Quality assessment in surgery].

    PubMed

    Espinoza G, Ricardo; Espinoza G, Juan Pablo

    2016-06-01

    This paper deals with quality from the perspective of structure, processes, and indicators in surgery. In this specialty, there is a close relationship between effectiveness and quality. We review the definition and classification of surgical complications as an objective means of assessing quality. The great diversity of definitions and risk assessments of surgical complications hampers comparisons between surgical centers or the evaluation of a single center over time. We discuss the different factors associated with surgical risk and some of the predictive systems for complications and mortality. At present, standardized definitions are used and comparisons are carried out correcting for risk factors. Thus, indicators of mortality, complications, length of hospitalization, postoperative quality of life, and costs become comparable between different groups. The volume of procedures of a given center or surgeon is emphasized as a quality indicator. PMID:27598495

  3. Quality Assessment in Oncology

    SciTech Connect

    Albert, Jeffrey M.; Das, Prajnan

    2012-07-01

    The movement to improve healthcare quality has led to a need for carefully designed quality indicators that accurately reflect the quality of care. Many different measures have been proposed and continue to be developed by governmental agencies and accrediting bodies. However, given the inherent differences in the delivery of care among medical specialties, the same indicators will not be valid across all of them. Specifically, oncology is a field in which it can be difficult to develop quality indicators, because the effectiveness of an oncologic intervention is often not immediately apparent, and the multidisciplinary nature of the field necessarily involves many different specialties. Existing and emerging comparative effectiveness data are helping to guide evidence-based practice, and the increasing availability of these data provides the opportunity to identify key structure and process measures that predict for quality outcomes. The increasing emphasis on quality and efficiency will continue to compel the medical profession to identify appropriate quality measures to facilitate quality improvement efforts and to guide accreditation, credentialing, and reimbursement. Given the wide-reaching implications of quality metrics, it is essential that they be developed and implemented with scientific rigor. The aims of the present report were to review the current state of quality assessment in oncology, identify existing indicators with the best evidence to support their implementation, and propose a framework for identifying and refining measures most indicative of true quality in oncologic care.

  4. Automatic phonetogram recording supplemented with acoustical voice-quality parameters.

    PubMed

    Pabon, J P; Plomp, R

    1988-12-01

    A new method for automatic voice-quality registration is presented. The method is based on a technique called phonetography, which is the registration of the dynamic range of a voice as a function of fundamental frequency. In the new phonetogram-recording method fundamental frequency (Fo) and sound-pressure level (SPL) are automatically measured and represented in an XY-diagram. Three additional acoustical voice-quality parameters are measured simultaneously with Fo and SPL: (a) jitter in the Fo as a measure for roughness, (b) the SPL difference between the 0-1.5 kHz and the 1.5-5 kHz bands as a measure for sharpness, and (c) the vocal-noise level above 5 kHz as a measure for breathiness. With this method, the voice-quality parameter values, which may change substantially as a function of Fo and SPL, are pinned to a reference position in the patient's total vocal range. Seen as a reference tool, the phonetogram opens the possibility for a more meaningful comparison of voice-quality data. Some examples, demonstrating the dependence of the chosen quality parameters on Fo and SPL are given. PMID:3230899
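
    As a software sketch of the recording principle (the original system measures from a microphone in real time; here a file name, pitch range, and uncalibrated relative SPL are assumptions):

    ```python
    import numpy as np
    import librosa

    # Hypothetical recording of phonation swept over the vocal range.
    y, sr = librosa.load("phonation.wav", sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=800, sr=sr)
    rms = librosa.feature.rms(y=y)[0]
    spl = 20.0 * np.log10(rms + 1e-12)      # relative SPL in dB (uncalibrated)

    n = min(f0.size, spl.size)
    mask = voiced[:n]
    # Each voiced frame contributes one (Fo, SPL) point of the phonetogram;
    # quality parameters such as jitter would be attached to these cells.
    points = np.column_stack([f0[:n][mask], spl[:n][mask]])
    ```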

  5. The Educational Quality Assessment

    ERIC Educational Resources Information Center

    Dinsmore, Peter; And Others

    1976-01-01

    Pennsylvania's Educational Quality Assessment (EQA) program is discussed in terms of its historical background, ACLU objections to its alleged infringements of individual rights, and reactions of students who have taken it. Journal is available from Ritter Hall, Fourth Floor, Temple University, Philadelphia, Pa. 19122. (AV)

  6. Automatic red eye correction and its quality metric

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho

    2008-01-01

    Red-eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, making photos more pleasant for the observer, is therefore an important task. A novel, efficient technique for automatic red-eye correction aimed at photo printers is proposed. The algorithm is independent of face orientation and capable of detecting paired red eyes as well as single red eyes. The approach is based on the application of 3D tables with typicalness levels for red eyes and human skin tones, and directional edge detection filters for processing the redness image. Machine learning is applied for feature selection. For the classification of red-eye regions, a cascade of classifiers, including a Gentle AdaBoost committee of Classification and Regression Trees (CART), is applied. The retouching stage includes desaturation, darkening, and blending with the initial image. Several implementation variants are possible, trading off detection and correction quality, processing time, and memory footprint. A numeric quality criterion for automatic red-eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.
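
    A redness image can be computed in many ways; the per-pixel arithmetic below is an assumed simple formulation for illustration, whereas the paper itself relies on 3D typicalness tables:

    ```python
    import numpy as np

    def redness_map(rgb: np.ndarray) -> np.ndarray:
        """Per-pixel redness: how much red exceeds the other channels."""
        rgb = rgb.astype(np.float64)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return np.clip(r - (g + b) / 2.0, 0.0, None)
    ```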

  7. An automatic method for CASP9 free modeling structure prediction assessment

    PubMed Central

    Cong, Qian; Kinch, Lisa N.; Pei, Jimin; Shi, Shuoyong; Grishin, Vyacheslav N.; Li, Wenlin; Grishin, Nick V.

    2011-01-01

    Motivation: Manual inspection has been applied to and is well accepted for assessing critical assessment of protein structure prediction (CASP) free modeling (FM) category predictions over the years. Such manual assessment requires expertise and significant time investment, yet has the problems of being subjective and unable to differentiate models of similar quality. It is beneficial to incorporate the ideas behind manual inspection to an automatic score system, which could provide objective and reproducible assessment of structure models. Results: Inspired by our experience in CASP9 FM category assessment, we developed an automatic superimposition independent method named Quality Control Score (QCS) for structure prediction assessment. QCS captures both global and local structural features, with emphasis on global topology. We applied this method to all FM targets from CASP9, and overall the results showed the best agreement with Manual Inspection Scores among automatic prediction assessment methods previously applied in CASPs, such as Global Distance Test Total Score (GDT_TS) and Contact Score (CS). As one of the important components to guide our assessment of CASP9 FM category predictions, this method correlates well with other scoring methods and yet is able to reveal good-quality models that are missed by GDT_TS. Availability: The script for QCS calculation is available at http://prodata.swmed.edu/QCS/. Contact: grishin@chop.swmed.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21994223

  8. Joint Statement: Quality Assessment and Quality Audit.

    ERIC Educational Resources Information Center

    Scottish Higher Education Funding Council, Edinburgh.

    This document sets out the respective responsibilities of the Scottish Higher Education Funding Council (SHEFC) and the Higher Education Quality Council (HEQC) as they currently stand in the field of higher education quality assurance. The SHEFC and the HEQC are both agencies that fulfill legislatively mandated quality assessment and control…

  9. On Automatic Assessment and Conceptual Understanding

    ERIC Educational Resources Information Center

    Rasila, Antti; Malinen, Jarmo; Tiitu, Hannu

    2015-01-01

    We consider two complementary aspects of mathematical skills, i.e. "procedural fluency" and "conceptual understanding," from a point of view that is related to modern e-learning environments and computer-based assessment. Pedagogical background of teaching mathematics is discussed, and it is proposed that the traditional book…

  10. Automatic Summary Assessment for Intelligent Tutoring Systems

    ERIC Educational Resources Information Center

    He, Yulan; Hui, Siu Cheung; Quan, Tho Thanh

    2009-01-01

    Summary writing is an important part of many English Language Examinations. As grading students' summary writings is a very time-consuming task, computer-assisted assessment will help teachers carry out the grading more effectively. Several techniques such as latent semantic analysis (LSA), n-gram co-occurrence and BLEU have been proposed to…

  11. Automatically Assessing Graph-Based Diagrams

    ERIC Educational Resources Information Center

    Thomas, Pete; Smith, Neil; Waugh, Kevin

    2008-01-01

    To date there has been very little work on the machine understanding of imprecise diagrams, such as diagrams drawn by students in response to assessment questions. Imprecise diagrams exhibit faults such as missing, extraneous and incorrectly formed elements. The semantics of imprecise diagrams are difficult to determine. While there have been…

  12. Investigating Prompt Difficulty in an Automatically Scored Speaking Performance Assessment

    ERIC Educational Resources Information Center

    Cox, Troy L.

    2013-01-01

    Speaking assessments for second language learners have traditionally been expensive to administer because of the cost of rating the speech samples. To reduce the cost, many researchers are investigating the potential of using automatic speech recognition (ASR) as a means to score examinee responses to open-ended prompts. This study examined the…

  13. On the Use of Resubmissions in Automatic Assessment Systems

    ERIC Educational Resources Information Center

    Karavirta, Ville; Korhonen, Ari; Malmi, Lauri

    2006-01-01

    Automatic assessment systems generally support immediate grading and response on learners' submissions. They also allow learners to consider the feedback, revise, and resubmit their solutions. Several strategies exist to implement the resubmission policy. The ultimate goal, however, is to improve the learning outcomes, and thus the strategies…

  14. Automatic personality assessment through social media language.

    PubMed

    Park, Gregory; Schwartz, H Andrew; Eichstaedt, Johannes C; Kern, Margaret L; Kosinski, Michal; Stillwell, David J; Ungar, Lyle H; Seligman, Martin E P

    2015-06-01

    Language use is a psychologically rich, stable individual difference with well-established correlations to personality. We describe a method for assessing personality using an open-vocabulary analysis of language from social media. We compiled the written language from 66,732 Facebook users and their questionnaire-based self-reported Big Five personality traits, and then we built a predictive model of personality based on their language. We used this model to predict the 5 personality factors in a separate sample of 4,824 Facebook users, examining (a) convergence with self-reports of personality at the domain- and facet-level; (b) discriminant validity between predictions of distinct traits; (c) agreement with informant reports of personality; (d) patterns of correlations with external criteria (e.g., number of friends, political attitudes, impulsiveness); and (e) test-retest reliability over 6-month intervals. Results indicated that language-based assessments can constitute valid personality measures: they agreed with self-reports and informant reports of personality, added incremental validity over informant reports, adequately discriminated between traits, exhibited patterns of correlations with external criteria similar to those found with self-reported personality, and were stable over 6-month intervals. Analysis of predictive language can provide rich portraits of the mental life associated with traits. This approach can complement and extend traditional methods, providing researchers with an additional measure that can quickly and cheaply assess large groups of participants with minimal burden. PMID:25365036
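
    The modeling pipeline described (open-vocabulary language features regressed onto self-reported traits) is conventional to sketch; the feature extractor and regressor below are assumptions standing in for the paper's exact setup:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    # texts: one aggregated language sample per user;
    # y: that user's self-reported score on one Big Five trait.
    trait_model = make_pipeline(
        TfidfVectorizer(min_df=5),   # open-vocabulary word/phrase features
        Ridge(alpha=1.0),            # regularized linear regression
    )
    # trait_model.fit(texts_train, y_train)
    # predicted = trait_model.predict(texts_heldout)  # one model per trait
    ```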

  15. Towards A Clinical Tool For Automatic Intelligibility Assessment

    PubMed Central

    Berisha, Visar; Utianski, Rene; Liss, Julie

    2014-01-01

    An important, yet under-explored, problem in speech processing is the automatic assessment of intelligibility for pathological speech. In practice, intelligibility assessment is often done through subjective tests administered by speech pathologists; however research has shown that these tests are inconsistent, costly, and exhibit poor reliability. Although some automatic methods for intelligibility assessment for telecommunications exist, research specific to pathological speech has been limited. Here, we propose an algorithm that captures important multi-scale perceptual cues shown to correlate well with intelligibility. Nonlinear classifiers are trained at each time scale and a final intelligibility decision is made using ensemble learning methods from machine learning. Preliminary results indicate a marked improvement in intelligibility assessment over published baseline results. PMID:25004985

  16. Towards A Clinical Tool For Automatic Intelligibility Assessment.

    PubMed

    Berisha, Visar; Utianski, Rene; Liss, Julie

    2013-01-01

    An important, yet under-explored, problem in speech processing is the automatic assessment of intelligibility for pathological speech. In practice, intelligibility assessment is often done through subjective tests administered by speech pathologists; however research has shown that these tests are inconsistent, costly, and exhibit poor reliability. Although some automatic methods for intelligibility assessment for telecommunications exist, research specific to pathological speech has been limited. Here, we propose an algorithm that captures important multi-scale perceptual cues shown to correlate well with intelligibility. Nonlinear classifiers are trained at each time scale and a final intelligibility decision is made using ensemble learning methods from machine learning. Preliminary results indicate a marked improvement in intelligibility assessment over published baseline results. PMID:25004985

  17. Automatic quality control in clinical (1)H MRSI of brain cancer.

    PubMed

    Pedrosa de Barros, Nuno; McKinley, Richard; Knecht, Urspeter; Wiest, Roland; Slotboom, Johannes

    2016-05-01

    MRSI grids frequently show spectra with poor quality, mainly because of the high sensitivity of MRS to field inhomogeneities. These poor quality spectra are prone to quantification and/or interpretation errors that can have a significant impact on the clinical use of spectroscopic data. Therefore, quality control of the spectra should always precede their clinical use. When performed manually, quality assessment of MRSI spectra is not only a tedious and time-consuming task, but is also affected by human subjectivity. Consequently, automatic, fast and reliable methods for spectral quality assessment are of utmost interest. In this article, we present a new random forest-based method for automatic quality assessment of (1)H MRSI brain spectra, which uses a new set of MRS signal features. The random forest classifier was trained on spectra from 40 MRSI grids that were classified as acceptable or non-acceptable by two expert spectroscopists. To account for the effects of intra-rater reliability, each spectrum was rated for quality three times by each rater. The automatic method classified these spectra with an area under the curve (AUC) of 0.976. Furthermore, in the subset of spectra containing only the cases that were classified every time in the same way by the spectroscopists, an AUC of 0.998 was obtained. Feature importance for the classification was also evaluated. Frequency domain skewness and kurtosis, as well as time domain signal-to-noise ratios (SNRs) in the ranges 50-75 ms and 75-100 ms, were the most important features. Given that the method is able to assess a whole MRSI grid faster than a spectroscopist (approximately 3 s versus approximately 3 min), and without loss of accuracy (agreement between classifier trained with just one session and any of the other labelling sessions, 89.88%; agreement between any two labelling sessions, 89.03%), the authors suggest its implementation in the clinical routine. The method presented in this article was implemented
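
    A sketch of the pipeline shape: hand-crafted spectral features feeding a random forest. The feature definitions below (the late-FID noise window, SNR as peak-over-noise) are assumptions for illustration; only the feature families match those named above:

    ```python
    import numpy as np
    from scipy.stats import kurtosis, skew
    from sklearn.ensemble import RandomForestClassifier

    def spectrum_features(spectrum: np.ndarray, fid: np.ndarray, dt: float):
        """Frequency-domain skewness/kurtosis plus time-domain SNR in the
        50-75 ms and 75-100 ms windows of the FID."""
        t = np.arange(fid.size) * dt
        noise = np.abs(fid[t > 0.3]).std()      # assumed late-FID noise window
        return [
            skew(np.abs(spectrum)),
            kurtosis(np.abs(spectrum)),
            np.abs(fid[(t >= 0.050) & (t < 0.075)]).max() / noise,
            np.abs(fid[(t >= 0.075) & (t < 0.100)]).max() / noise,
        ]

    # X: one feature row per spectrum; y: expert accept/reject labels.
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    # clf.fit(X_train, y_train); clf.predict_proba(X_test)[:, 1]
    ```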

  18. Portfolio Assessment and Quality Teaching

    ERIC Educational Resources Information Center

    Kim, Youb; Yazdian, Lisa Sensale

    2014-01-01

    Our article focuses on using portfolio assessment to craft quality teaching. Extant research literature on portfolio assessment suggests that the primary purpose of assessment is to serve learning, and portfolio assessments facilitate the process of making linkages among assessment, curriculum, and student learning (Asp, 2000; Bergeron, Wermuth,…

  19. Infrared machine vision system for the automatic detection of olive fruit quality.

    PubMed

    Guzmán, Elena; Baeten, Vincent; Pierna, Juan Antonio Fernández; García-Mesa, José A

    2013-11-15

    External quality is an important factor in the extraction of olive oil and the marketing of olive fruits. The appearance and presence of external damage are factors that influence the quality of the oil extracted and the perception of consumers, determining the level of acceptance prior to purchase in the case of table olives. The aim of this paper is to report on artificial vision techniques developed for the online estimation of olive quality and to assess the effectiveness of these techniques in evaluating quality based on detecting external defects. This method of classifying olives according to the presence of defects is based on an infrared (IR) vision system. Images of defects were acquired using a digital monochrome camera with band-pass filters in the near-infrared (NIR). The original images were processed using segmentation algorithms, edge detection, and pixel intensity values to classify the whole fruit. Defect detection involved a pixel classification procedure based on nonparametric models of the healthy and defective areas of olives. Classification tests were performed on olives to assess the effectiveness of the proposed method. This research showed that the IR vision system is a useful technology for the automatic assessment of olives, with potential for use in offline inspection and online sorting for defects and surface damage, easily distinguishing fruits that do not meet minimum quality requirements. PMID:24148491

  20. Automatic ECG quality scoring methodology: mimicking human annotators.

    PubMed

    Johannesen, Lars; Galeotti, Loriano

    2012-09-01

    An algorithm to determine the quality of electrocardiograms (ECGs) can enable inexperienced nurses and paramedics to record ECGs of sufficient diagnostic quality. Previously, we proposed an algorithm for determining if ECG recordings are of acceptable quality, which was entered in the PhysioNet Challenge 2011. In the present work, we propose an improved two-step algorithm, which first rejects ECGs with macroscopic errors (signal absent, large voltage shifts or saturation) and subsequently quantifies the noise (baseline, powerline or muscular noise) on a continuous scale. The performance of the improved algorithm was evaluated using the PhysioNet Challenge database (1500 ECGs rated by humans for signal quality). We achieved a classification accuracy of 92.3% on the training set and 90.0% on the test set. The improved algorithm is capable of detecting ECGs with macroscopic errors and giving the user a score of the overall quality. This allows the user to assess the degree of noise and decide if it is acceptable depending on the purpose of the recording. PMID:22902927
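
    The two-step structure (reject macroscopic errors first, then score residual noise on a continuous scale) can be sketched as follows; all thresholds here are assumptions for illustration, not the published algorithm's values:

    ```python
    import numpy as np

    def ecg_quality(sig: np.ndarray, fs: float) -> float:
        """Return 0.0 for macroscopic errors, else a continuous 0..1 score."""
        # Step 1: macroscopic errors -- absent signal, saturation, voltage jumps.
        if sig.std() < 1e-3:                                         # flat line
            return 0.0
        if (np.abs(sig) >= 0.99 * np.abs(sig).max()).mean() > 0.05:  # saturation
            return 0.0
        if np.abs(np.diff(sig)).max() > 5.0 * sig.std():             # large shift
            return 0.0
        # Step 2: noise score from the high-frequency residual after smoothing.
        k = max(int(0.02 * fs), 1)
        smooth = np.convolve(sig, np.ones(k) / k, mode="same")
        noise_ratio = (sig - smooth).std() / sig.std()
        return float(np.clip(1.0 - noise_ratio, 0.0, 1.0))
    ```

    Returning a continuous score rather than a binary label is what lets the operator decide whether the noise level is acceptable for the purpose of the recording.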

  1. Disordered Speech Assessment Using Automatic Methods Based on Quantitative Measures

    NASA Astrophysics Data System (ADS)

    Gu, Lingyun; Harris, John G.; Shrivastav, Rahul; Sapienza, Christine

    2005-12-01

    Speech quality assessment methods are necessary for evaluating and documenting treatment outcomes of patients suffering from degraded speech due to Parkinson's disease, stroke, or other disease processes. Subjective methods of speech quality assessment are more accurate and more robust than objective methods but are time-consuming and costly. We propose a novel objective measure of speech quality assessment that builds on traditional speech processing techniques such as dynamic time warping (DTW) and the Itakura-Saito (IS) distortion measure. Initial results show that our objective measure correlates well with the more expensive subjective methods.
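
    Dynamic time warping, one of the two classic building blocks named here, in its textbook form (the paper's combination with the Itakura-Saito measure is not reproduced):

    ```python
    import numpy as np

    def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
        """Classic DTW between two feature sequences (one row per frame)."""
        nx, ny = len(x), len(y)
        D = np.full((nx + 1, ny + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, nx + 1):
            for j in range(1, ny + 1):
                cost = np.linalg.norm(x[i - 1] - y[j - 1])   # local frame distance
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return float(D[nx, ny])
    ```

    Replacing the Euclidean local cost with the Itakura-Saito distortion between spectral envelopes is the natural step toward the measure described above.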

  2. Solar Radiation Empirical Quality Assessment

    Energy Science and Technology Software Center (ESTSC)

    1994-03-01

    The SERIQC1 subroutine performs quality assessment of one, two, or three-component solar radiation data (global horizontal, direct normal, and diffuse horizontal) obtained from one-minute to one-hour integrations. Included in the package is the QCFIT tool to derive expected values from historical data, and the SERIQC1 subroutine to assess the quality of measurement data.
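
    At the heart of three-component quality assessment is the physical closure relation between the components; a simplified consistency flag is sketched below (the real SERI QC test uses expected-value boundaries derived with QCFIT rather than a fixed tolerance, so `tol` here is an assumption):

    ```python
    import numpy as np

    def closure_flag(ghi, dni, dhi, zenith_deg, tol=0.08):
        """Flag samples where global horizontal irradiance disagrees with
        direct-normal projected onto the horizontal plus diffuse."""
        est = dni * np.cos(np.radians(zenith_deg)) + dhi
        rel_err = np.abs(ghi - est) / np.maximum(est, 1.0)  # 1 W/m^2 floor
        return rel_err > tol
    ```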

  3. WATER QUALITY ASSESSMENT METHODOLOGY (WQAM)

    EPA Science Inventory

    The Water Quality Assessment Methodology (WQAM) is a screening procedure for toxic and conventional pollutants in surface and ground waters and is a collection of formulas, tables, and graphs that planners can use for preliminary assessment of surface and ground water quality in ...

  4. MRI-Guided Target Motion Assessment using Dynamic Automatic Segmentation

    NASA Astrophysics Data System (ADS)

    Saenz, Daniel L.

    Motion significantly impacts the radiotherapy process and represents one of the persisting problems in treatment delivery. For improving motion management techniques and implementing future image-guided radiotherapy tools such as MRI guidance, automatic segmentation algorithms hold great promise. Such algorithms are attractive due to their direct measurement accuracy, speed, and ability to assess motion trajectories for daily treatment plan modifications. We developed and optimized an automatic segmentation technique to enable target tracking using MR cines, 4D-MRI, and 4D-CT. This algorithm overcomes weaknesses of automatic contouring, such as lack of image contrast, subjectivity, slow speed, and lack of differentiating feature vectors, by the use of morphological processing. The software is enhanced with predictive parameter capabilities and dynamic processing. The 4D-MRI images are acquired by applying a retrospective phase binning approach to radially acquired MR image projections. The quantification of motion is validated with a motor phantom undergoing a known trajectory in 4D-CT, 4D-MRI, and MR cines from the ViewRay MR-guided RT system. In addition, a clinical case study demonstrates the software's wide-reaching applicability, segmenting lesions in the brain and lung as well as critical structures such as the liver. Auto-segmentation results from MR cines of canines correlate well with manually drawn contours, both in terms of Dice similarity coefficient and agreement of extracted motion trajectories.

  5. A new quality assessment and improvement system for print media

    NASA Astrophysics Data System (ADS)

    Liu, Mohan; Konya, Iuliu; Nandzik, Jan; Flores-Herr, Nicolas; Eickeler, Stefan; Ndjiki-Nya, Patrick

    2012-12-01

    Print media collections of considerable size are held by cultural heritage organizations and will soon be subject to digitization activities. However, technical content quality management in digitization workflows strongly relies on human monitoring. This heavy human intervention is cost intensive and time consuming, which makes automation mandatory. In this article, a new automatic quality assessment and improvement system is proposed. The digitized source image and color reference target are extracted from the raw digitized images by an automatic segmentation process. The target is evaluated by a reference-based algorithm. No-reference quality metrics are applied to the source image. Experimental results are provided to illustrate the performance of the proposed system. We show that it performs well in both the extraction and the quality assessment steps compared with the state of the art. The impact of efficient and dedicated quality assessors on the optimization step is extensively documented.
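
    The abstract does not spell out its no-reference metrics; a representative example of the kind of cheap no-reference indicator such a source-image assessor can build on is the variance of the Laplacian, a standard focus/blur measure:

    ```python
    import numpy as np
    from scipy import ndimage

    def laplacian_sharpness(gray: np.ndarray) -> float:
        """Variance of the Laplacian; low values indicate a blurry scan."""
        return float(ndimage.laplace(gray.astype(np.float64)).var())
    ```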

  6. Assessing Air Quality.

    ERIC Educational Resources Information Center

    Bloomfield, Molly

    2000-01-01

    Introduces the Science and Math Investigative Learning Experiences (SMILE) program. Presents an air quality problem as an example of an integrated challenge problem activity developed by the SMILE program. Explains the process of challenge problems and provides a list of the National Science Education Standards addressed by challenge problems.…

  7. Irrigation water quality assessments

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Increasing demands on fresh water supplies by municipal and industrial users means decreased fresh water availability for irrigated agriculture in semi arid and arid regions. There is potential for agricultural use of treated wastewaters and low quality waters for irrigation but this will require co...

  8. CART IV: improving automatic camouflage assessment with assistance methods

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Müller, Markus

    2010-04-01

    In order to facilitate systematic, computer-aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was built for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007, 2008, and 2009 [1], [2], [3]). It comprises semi-automatic marking of target objects (ground truth generation), including their propagation over the image sequence, and evaluation via user-defined feature extractors. The conspicuity of camouflaged objects due to their movement can be assessed with a purpose-built processing method, the MTI (moving target indication) snail track algorithm. This paper presents the enhancements made over the past year and describes procedures that assist the camouflage assessment of moving objects in image data with strong noise or image artefacts, extending the evaluation methods to a significantly broader application range. For example, some noisy infrared image data can be evaluated for the first time by applying the presented methods, which explore the correlations between camouflage assessment, MTI, and dedicated noise filtering.

  9. Institutional Consequences of Quality Assessment

    ERIC Educational Resources Information Center

    Joao Rosa, Maria; Tavares, Diana; Amaral, Alberto

    2006-01-01

    This paper analyses the opinions of Portuguese university rectors and academics on the quality assessment system and its consequences at the institutional level. The results obtained show that university staff (rectors and academics, with more of the former than the latter) held optimistic views of the positive consequences of quality assessment…

  10. Quality assessment of urban environment

    NASA Astrophysics Data System (ADS)

    Ovsiannikova, T. Y.; Nikolaenko, M. N.

    2015-01-01

    This paper addresses quality management problems for construction products. It proposes to extend the boundaries of quality management in construction by transferring its principles to urban systems, economic systems of a higher level whose qualitative characteristics are substantially defined by the quality of construction products. Buildings and structures form the spatial and material basis of cities and the most important component of the living sphere: the urban environment. The authors justify the need to assess urban environment quality as an important factor in social welfare and quality of life in urban areas, and suggest a definition of the term "urban environment". The methodology for assessing urban environment quality is based on an integrated approach that combines systemic analysis of all factors with both quantitative assessment methods (calculation of particular and integrated indicators) and qualitative methods (expert estimates and surveys). The authors propose a system of indicators characterizing urban environment quality, falling into four classes, and show how each indicator is defined. The paper presents urban environment quality assessment results for several Siberian regions, together with a comparative analysis of these results.

  11. Automatic portion estimation and visual refinement in mobile dietary assessment

    NASA Astrophysics Data System (ADS)

    Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.

    2010-01-01

    As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential, as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient content returned. One of the challenging problems in image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate the portion size of a variety of foods through volume estimation using an image. These "portion volumes" utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach.

  12. Automatism

    PubMed Central

    McCaldon, R. J.

    1964-01-01

    Individuals can carry out complex activity while in a state of impaired consciousness, a condition termed “automatism”. Consciousness must be considered from both an organic and a psychological aspect, because impairment of consciousness may occur in both ways. Automatism may be classified as normal (hypnosis), organic (temporal lobe epilepsy), psychogenic (dissociative fugue) or feigned. Often painstaking clinical investigation is necessary to clarify the diagnosis. There is legal precedent for assuming that all crimes must embody both consciousness and will. Jurists are loath to apply this principle without reservation, as this would necessitate acquittal and release of potentially dangerous individuals. However, with the sole exception of the defence of insanity, there is at present no legislation to prohibit release without further investigation of anyone acquitted of a crime on the grounds of “automatism”. PMID:14199824

  13. Educational Quality Assessment. 1986 Data.

    ERIC Educational Resources Information Center

    Pennsylvania State Dept. of Education, Harrisburg. Div. of Educational Testing and Evaluation.

    This manual contains the statistics generated from the Pennsylvania Quality Education Assessment (QEA) administered in 1986. The assessment was to evaluate the achievement of certain educational objectives in grades 4, 6, 7, 9, and 11 in the public schools. The tests were in the following areas: reading comprehension, writing skills, mathematics,…

  14. Objective view synthesis quality assessment

    NASA Astrophysics Data System (ADS)

    Conze, Pierre-Henri; Robert, Philippe; Morin, Luce

    2012-03-01

    View synthesis brings geometric distortions which are not handled efficiently by existing image quality assessment metrics. Despite the widespread adoption of 3-D technology, notably 3D television (3DTV) and free-viewpoint television (FTV), the field of view synthesis quality assessment has not yet been widely investigated, and new quality metrics are required. In this study, we propose a new full-reference objective quality assessment metric: the View Synthesis Quality Assessment (VSQA) metric. Our method is dedicated to artifact detection in synthesized viewpoints and aims to handle areas where disparity estimation may fail: thin objects, object borders, transparency, variations of illumination or color differences between left and right views, periodic objects... The key feature of the proposed method is the use of three visibility maps which characterize complexity in terms of textures, diversity of gradient orientations and presence of high contrast. Moreover, the VSQA metric can be defined as an extension of any existing 2D image quality assessment metric. Experimental tests have shown the effectiveness of the proposed method.
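
    The abstract does not give the exact weighting formula. A minimal sketch of the core idea, assuming a multiplicative combination of the three visibility maps (an assumption, not the paper's stated scheme):

      import numpy as np

      def vsqa_like_pooling(distortion_map, texture_vis, orient_vis, contrast_vis):
          """Weight the per-pixel distortion map of any 2D IQA metric by
          three visibility maps (texture complexity, gradient-orientation
          diversity, high contrast) before spatial pooling."""
          weight = texture_vis * orient_vis * contrast_vis
          return float(np.sum(distortion_map * weight) / (np.sum(weight) + 1e-12))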

  15. Salient motion features for video quality assessment.

    PubMed

    Ćulibrk, Dubravko; Mirković, Milan; Zlokolica, Vladimir; Pokrić, Maja; Crnojević, Vladimir; Kukolj, Dragan

    2011-04-01

    Design of algorithms that are able to estimate video quality as perceived by human observers is of interest for a number of applications. Depending on the video content, the artifacts introduced by the coding process can be more or less pronounced and diversely affect the quality of videos as estimated by humans. While it is well understood that motion affects both human attention and coding quality, this relationship has only recently started gaining attention among the research community where video quality assessment (VQA) is concerned. In this paper, the effect of calculating several objective features related to video coding artifacts separately for salient-motion regions and for the other regions of the frames of a sequence is examined. In addition, we propose a new scheme for quality assessment of coded video streams which takes salient motion into account. A standardized procedure was used to calculate the Mean Opinion Score (MOS), based on experiments conducted with a group of non-expert observers viewing standard definition (SD) sequences. MOS measurements were taken for nine different SD sequences, coded using MPEG-2 at five different bit-rates. Eighteen different published approaches to measuring the amount of coding artifacts objectively on a single-frame basis were implemented. Additional features describing the intensity of salient motion in the frames, as well as the intensity of coding artifacts in the salient-motion regions, were proposed. Automatic feature selection was performed to determine the subset of features most correlated with video quality. The results show that salient-motion-related features enhance prediction and indicate that the presence of blocking artifacts and blurring in the salient regions, and the variance and intensity of temporal changes in non-salient regions, influence the perceived video quality. PMID:20876020
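
    As a rough illustration of computing an artifact feature separately for salient-motion and remaining regions (the specific feature maps and the saliency model are beyond the abstract and are assumed here):

      import numpy as np

      def split_feature(feature_map, salient_mask):
          """Mean of a per-pixel artifact measure (e.g., blockiness) inside
          and outside a boolean salient-motion mask; the two values then
          serve as separate inputs to feature selection."""
          inside = feature_map[salient_mask]
          outside = feature_map[~salient_mask]
          return (float(inside.mean()) if inside.size else 0.0,
                  float(outside.mean()) if outside.size else 0.0)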

  16. Automatic graphene transfer system for improved material quality and efficiency.

    PubMed

    Boscá, Alberto; Pedrós, Jorge; Martínez, Javier; Palacios, Tomás; Calle, Fernando

    2016-01-01

    In most applications based on chemical vapor deposition (CVD) graphene, the transfer from the growth to the target substrate is a critical step for the final device performance. Manual procedures are time consuming and depend on handling skills, whereas existing automatic roll-to-roll methods work well for flexible substrates but tend to induce mechanical damage in rigid ones. A new system that automatically transfers CVD graphene to an arbitrary target substrate has been developed. The process is based on the all-fluidic manipulation of the graphene to avoid mechanical damage, strain and contamination, and on the combination of capillary action and electrostatic repulsion between the graphene and its container to ensure a centered sample on top of the target substrate. The improved carrier mobility and yield of the automatically transferred graphene, as compared to that manually transferred, is demonstrated by the optical and electrical characterization of field-effect transistors fabricated on both materials. In particular, 70% higher mobility values, with a 30% decrease in the unintentional doping and a 10% strain reduction are achieved. The system has been developed for lab-scale transfer and proved to be scalable for industrial applications. PMID:26860260

  17. Automatic graphene transfer system for improved material quality and efficiency

    NASA Astrophysics Data System (ADS)

    Boscá, Alberto; Pedrós, Jorge; Martínez, Javier; Palacios, Tomás; Calle, Fernando

    2016-02-01

    In most applications based on chemical vapor deposition (CVD) graphene, the transfer from the growth to the target substrate is a critical step for the final device performance. Manual procedures are time consuming and depend on handling skills, whereas existing automatic roll-to-roll methods work well for flexible substrates but tend to induce mechanical damage in rigid ones. A new system that automatically transfers CVD graphene to an arbitrary target substrate has been developed. The process is based on the all-fluidic manipulation of the graphene to avoid mechanical damage, strain and contamination, and on the combination of capillary action and electrostatic repulsion between the graphene and its container to ensure a centered sample on top of the target substrate. The improved carrier mobility and yield of the automatically transferred graphene, as compared to that manually transferred, is demonstrated by the optical and electrical characterization of field-effect transistors fabricated on both materials. In particular, 70% higher mobility values, with a 30% decrease in the unintentional doping and a 10% strain reduction are achieved. The system has been developed for lab-scale transfer and proved to be scalable for industrial applications.

  18. Automatic graphene transfer system for improved material quality and efficiency

    PubMed Central

    Boscá, Alberto; Pedrós, Jorge; Martínez, Javier; Palacios, Tomás; Calle, Fernando

    2016-01-01

    In most applications based on chemical vapor deposition (CVD) graphene, the transfer from the growth to the target substrate is a critical step for the final device performance. Manual procedures are time consuming and depend on handling skills, whereas existing automatic roll-to-roll methods work well for flexible substrates but tend to induce mechanical damage in rigid ones. A new system that automatically transfers CVD graphene to an arbitrary target substrate has been developed. The process is based on the all-fluidic manipulation of the graphene to avoid mechanical damage, strain and contamination, and on the combination of capillary action and electrostatic repulsion between the graphene and its container to ensure a centered sample on top of the target substrate. The improved carrier mobility and yield of the automatically transferred graphene, as compared to that manually transferred, is demonstrated by the optical and electrical characterization of field-effect transistors fabricated on both materials. In particular, 70% higher mobility values, with a 30% decrease in the unintentional doping and a 10% strain reduction are achieved. The system has been developed for lab-scale transfer and proved to be scalable for industrial applications. PMID:26860260

  19. SU-E-J-155: Automatic Quantitative Decision Making Metric for 4DCT Image Quality

    SciTech Connect

    Kiely, J Blanco; Olszanski, A; Both, S; White, B; Low, D

    2015-06-15

    Purpose: To develop a quantitative decision-making metric for automatically detecting irregular breathing, using a large patient population that received phase-sorted 4DCT. Methods: This study employed two patient cohorts. Cohort#1 contained 256 patients who received a phase-sorted 4DCT. Cohort#2 contained 86 patients who received three weekly phase-sorted 4DCT scans. A previously published technique used a single abdominal surrogate to calculate the ratio of extreme inhalation tidal volume to normal inhalation tidal volume, referred to as the κ metric. Since a single surrogate is standard for phase-sorted 4DCT in radiation oncology clinical practice, tidal volume was not quantified. Without tidal volume, the absolute κ metric could not be determined, so a relative κ (κrel) metric was defined based on the measured surrogate amplitude instead of tidal volume. Receiver operating characteristic (ROC) curves were used to quantitatively determine the optimal cutoff value (jk) and efficiency cutoff value (τk) of κrel for automatically identifying irregular breathing that would reduce the image quality of phase-sorted 4DCT. The discriminatory accuracy (area under the ROC curve) of κrel was calculated by a trapezoidal numeric integration technique. Results: The discriminatory accuracy of κrel was found to be 0.746. The key values of jk and τk were calculated to be 1.45 and 1.72, respectively. For values of κrel such that jk≤κrel≤τk, the decision to reacquire the 4DCT would be at the discretion of the physician. This accounted for only 11.9% of the patients in this study. The magnitude of κrel held consistent over 3 weeks for 73% of the patients in cohort#2. Conclusion: The decision-making metric, κrel, was shown to be an accurate classifier of irregular breathing patients in a large patient population. This work provided an automatic quantitative decision making metric to quickly and accurately assess the extent to which irregular breathing is occurring during phase
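
    A minimal sketch of the trapezoidal AUC computation described above, with a Youden-style rule standing in for the paper's unstated cutoff criterion:

      import numpy as np

      def roc_auc_trapezoid(fpr, tpr):
          """Area under an ROC curve by trapezoidal numeric integration;
          fpr and tpr must be sorted by increasing fpr."""
          return float(np.trapz(tpr, fpr))

      def choose_cutoff(thresholds, fpr, tpr):
          """Cutoff maximizing tpr - fpr (Youden index); this rule is an
          assumption, since the abstract does not state the criterion."""
          return thresholds[int(np.argmax(np.asarray(tpr) - np.asarray(fpr)))]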

  20. Data Quality Assessment for Maritime Situation Awareness

    NASA Astrophysics Data System (ADS)

    Iphar, C.; Napoli, A.; Ray, C.

    2015-08-01

    The Automatic Identification System (AIS), initially designed to ensure maritime security through continuous position reports, has progressively been used for many extended objectives. In particular, it supports global monitoring of the maritime domain for various purposes such as safety and security, but also traffic management, logistics and the protection of strategic areas. In this monitoring, data errors, misuse, irregular behaviours at sea, malfeasance mechanisms and bad navigation practices have inevitably emerged, whether through inattentiveness or through deliberate actions intended to circumvent, alter or exploit the system in the interests of offenders. This paper introduces the AIS, presents its vulnerabilities, and discusses data quality assessment for decision making in maritime situational awareness. The principles of a novel methodological approach for modelling, analysing and detecting these data errors and falsifications are introduced.

  1. Assessment of automatic ligand building in ARP/wARP

    SciTech Connect

    Evrard, Guillaume X. Langer, Gerrit G.; Lamzin, Victor S.

    2007-01-01

    The performance of the ligand-building module of the ARP/wARP software suite is assessed through a large-scale test on known protein–ligand complexes. The results provide a detailed benchmark and guidelines for future improvements. The efficiency of the ligand-building module of ARP/wARP version 6.1 has been assessed through extensive tests on a large variety of protein–ligand complexes from the PDB, as available from the Uppsala Electron Density Server. Ligand building in ARP/wARP involves two main steps: automatic identification of the location of the ligand and the actual construction of its atomic model. The first step is most successful for large ligands. The second step, ligand construction, is more powerful with X-ray data at high resolution and ligands of small to medium size. Both steps are successful for ligands with low to moderate atomic displacement parameters. The results highlight the strengths and weaknesses of both the method of ligand building and the large-scale validation procedure and help to identify means of further improvement.

  2. A routine quality assurance test for CT automatic exposure control systems.

    PubMed

    Iball, Gareth R; Moore, Alexis C; Crawford, Elizabeth J

    2016-01-01

    The study purpose was to develop and validate a quality assurance test for CT automatic exposure control (AEC) systems based on a set of nested polymethylmethacrylate CTDI phantoms. The test phantom was created by offsetting the 16 cm head phantom within the 32 cm body annulus, thus creating a three-part phantom. This was scanned at all acceptance, routine, and some nonroutine quality assurance visits over a period of 45 months, resulting in 115 separate AEC tests on scanners from four manufacturers. For each scan the longitudinal mA modulation pattern was generated and measurements of image noise were made in two annular regions of interest. The scanner-displayed CTDIvol and DLP were also recorded. The impact of a range of AEC configurations on dose and image quality was assessed at acceptance testing. For systems that were tested more than once, the percentage of CTDIvol values exceeding 5%, 10%, and 15% deviation from baseline was 23.4%, 12.6%, and 8.1%, respectively. Similarly, for the image noise data, deviations greater than 2%, 5%, and 10% from baseline were seen in 26.5%, 5.9%, and 2% of tests, respectively. The majority of CTDIvol and noise deviations greater than 15% and 5%, respectively, could be explained by incorrect phantom setup or protocol selection. Barring these results, CTDIvol deviations of greater than 15% from baseline were found in 0.9% of tests and noise deviations greater than 5% from baseline were found in 1% of tests. The phantom was shown to be sensitive to changes in AEC setup, including the use of 3D, longitudinal or rotational tube current modulation. This test methodology allows for continuing performance assessment of CT AEC systems, and we recommend that this test should become part of routine CT quality assurance programs. Tolerances of ± 15% for CTDIvol and ± 5% for image noise relative to baseline values should be used. PMID:27455490
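
    The recommended tolerances translate directly into a simple baseline check; a minimal sketch:

      def check_aec_against_baseline(ctdi_vol, noise, ctdi_base, noise_base):
          """Apply the paper's recommended routine CT AEC QA tolerances:
          +/-15% for CTDIvol and +/-5% for image noise relative to
          baseline. Returns a (ctdi_ok, noise_ok) pair of booleans."""
          ctdi_ok = abs(ctdi_vol - ctdi_base) / ctdi_base <= 0.15
          noise_ok = abs(noise - noise_base) / noise_base <= 0.05
          return ctdi_ok, noise_ok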

  3. Quality assessment for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2014-11-01

    Image quality assessment is an essential value-judgement approach for many applications. Multi- and hyperspectral imaging involves more evaluation criteria than greyscale or RGB imaging, so its quality assessment must cover all the relevant factors. This paper presents an integrated spectral imaging quality assessment project, in which spectral-based, radiometric-based and spatial-based statistical behaviour is jointly evaluated for three hyperspectral imagers. The spectral response function is worked out from discrete illumination images, and spectral performance is deduced from its FWHM and spectral excursion values. The radiometric response of the different spectral channels, under both on-ground and airborne imaging conditions, is judged by computing the SNR using local RMS extraction and statistics. The spatial response of the spectral imaging instrument is evaluated by computing the MTF with the slanted-edge analysis method. This pioneering systematic work in hyperspectral imaging quality assessment was carried out with the help of several domestic institutions; it is significant for the development of on-ground and in-orbit instrument performance evaluation techniques and also serves as a reference for index demonstration and design optimization in instrument development.
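
    As a sketch of SNR estimation by local RMS extraction and statistics (the block size and the pooling rule are assumptions, since the abstract gives no details):

      import numpy as np

      def local_rms_snr(image, block=16):
          """Tile the image into blocks, take the block mean as signal and
          the block standard deviation as noise, and pool the block-wise
          SNRs with a median."""
          h, w = image.shape
          snrs = []
          for i in range(0, h - block + 1, block):
              for j in range(0, w - block + 1, block):
                  tile = image[i:i + block, j:j + block].astype(float)
                  std = tile.std()
                  if std > 0:
                      snrs.append(tile.mean() / std)
          return float(np.median(snrs))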

  4. EQA: Educational Quality Assessment. Commentary.

    ERIC Educational Resources Information Center

    Pennsylvania State Dept. of Education, Harrisburg. Div. of Educational Testing and Evaluation.

    In response to a prevailing demand for better quality education in public schools in Pennsylvania, procedures were developed to measure the adequacy and the efficiency of the educational programs in public schools. The objectives currently assessed are in the following areas: communication skills (reading and writing), mathematics, self-esteem,…

  5. Quality assessment tools add value.

    PubMed

    Paul, L

    1996-10-01

    The rapid evolution of the health care marketplace can be expected to continue as we move closer to the 21st Century. Externally-imposed pressures for cost reduction will increasingly be accompanied by pressure within health care organizations as risk-sharing reimbursement arrangements become more commonplace. Competitive advantage will be available to those organizations that can demonstrate objective value as defined by the cost-quality equation. The tools an organization chooses to perform quality assessment will be an important factor in its ability to demonstrate such value. Traditional quality assurance will in all likelihood continue, but the extent to which quality improvement activities are adopted by the culture of an organization may determine its ability to provide objective evidence of better health status outcomes. PMID:10162486

  6. Assessing risks to ecosystem quality

    SciTech Connect

    Barnthouse, L.W.

    1995-12-31

    Ecosystems are not organisms. Because ecosystems do not reproduce, grow old or sick, and die, the term "ecosystem health" is somewhat misleading and perhaps should not be used. A more useful concept is "ecosystem quality," which denotes a set of desirable ecosystem characteristics defined in terms of species composition, productivity, size/condition of specific populations, or other measurable properties. The desired quality of an ecosystem may be pristine, as in a nature preserve, or highly altered by man, as in a managed forest or navigational waterway. "Sustainable development" implies that human activities that influence ecosystem quality should be managed so that high-quality ecosystems are maintained for future generations. In sustainability-based environmental management, the focus is on maintaining or improving ecosystem quality, not on restricting discharges or requiring particular waste treatment technologies. This approach requires management of chemical impacts to be integrated with management of other sources of stress such as erosion, eutrophication, and direct human exploitation. Environmental scientists must (1) work with decision makers and the public to define ecosystem quality goals, (2) develop corresponding measures of ecosystem quality, (3) diagnose causes for departures from desired states, and (4) recommend appropriate restoration actions, if necessary. Environmental toxicology and chemical risk assessment are necessary for implementing the above framework, but they are clearly not sufficient. This paper reviews the state of the science relevant to sustaining the quality of aquatic ecosystems. Using the specific example of a reservoir in eastern Tennessee, the paper attempts to define roles for ecotoxicology and risk assessment in each step of the management process.

  7. Automatic orbital GTAW welding: Highest quality welds for tomorrow's high-performance systems

    NASA Technical Reports Server (NTRS)

    Henon, B. K.

    1985-01-01

    Automatic orbital gas tungsten arc welding (GTAW), or TIG welding, is certain to play an increasingly prominent role in tomorrow's technology. The welds are of the highest quality, and the repeatability of automatic welding is vastly superior to that of manual welding. Since less heat is applied to the weld during automatic welding than during manual welding, there is less change in the metallurgical properties of the parent material. The possibility of accurate control and the cleanliness of the automatic GTAW process make it highly suitable for welding the more exotic and expensive materials which are now widely used in the aerospace and hydrospace industries. Titanium, stainless steel, Inconel, and Incoloy, as well as aluminum, can all be welded automatically to the highest quality specifications. Automatic orbital GTAW equipment is available for the fusion butt welding of tube to tube, as well as tube to autobuttweld fittings. The same equipment can also be used for the fusion butt welding of pipe up to 6 inches in diameter with a wall thickness of up to 0.154 inches.

  8. SIMULATING LOCAL DENSE AREAS USING PMMA TO ASSESS AUTOMATIC EXPOSURE CONTROL IN DIGITAL MAMMOGRAPHY.

    PubMed

    Bouwman, R W; Binst, J; Dance, D R; Young, K C; Broeders, M J M; den Heeten, G J; Veldkamp, W J H; Bosmans, H; van Engen, R E

    2016-06-01

    Current digital mammography (DM) X-ray systems are equipped with advanced automatic exposure control (AEC) systems, which determine the exposure factors depending on breast composition. In the supplement of the European guidelines for quality assurance in breast cancer screening and diagnosis, a phantom-based test is included to evaluate the AEC response to local dense areas in terms of signal-to-noise ratio (SNR). This study evaluates the proposed test in terms of SNR and dose for four DM systems. The glandular fraction represented by the local dense area was assessed by analytic calculations. It was found that the proposed test simulates adipose to fully glandular breast compositions in attenuation. The doses associated with the phantoms were found to match well with the patient dose distribution. In conclusion, after some small adaptations, the test is valuable for the assessment of the AEC performance in terms of both SNR and dose. PMID:26977073

  9. Network design and quality checks in automatic orientation of close-range photogrammetric blocks.

    PubMed

    Dall'Asta, Elisa; Thoeni, Klaus; Santise, Marina; Forlani, Gianfranco; Giacomini, Anna; Roncella, Riccardo

    2015-01-01

    Due to the recent improvements of automatic measurement procedures in photogrammetry, multi-view 3D reconstruction technologies are becoming a favourite survey tool. Rapidly widening structure-from-motion (SfM) software packages offer significantly easier image processing workflows than traditional photogrammetry packages. However, while most orientation and surface reconstruction strategies will almost always succeed in any given task, estimating the quality of the result is, to some extent, still an open issue. An assessment of the precision and reliability of block orientation is necessary and should be included in every processing pipeline. Such a need was clearly felt from the results of close-range photogrammetric surveys of in situ full-scale and laboratory-scale experiments. In order to study the impact of the block control and the camera network design on the block orientation accuracy, a series of Monte Carlo simulations was performed. Two image block configurations were investigated: a single pseudo-normal strip and a circular highly-convergent block. The influence of surveying and data processing choices, such as the number and accuracy of the ground control points, autofocus and camera calibration was investigated. The research highlights the most significant aspects and processes to be taken into account for adequate in situ and laboratory surveys, when modern SfM software packages are used, and evaluates their effect on the quality of the results of the surface reconstruction. PMID:25855036

  10. Network Design and Quality Checks in Automatic Orientation of Close-Range Photogrammetric Blocks

    PubMed Central

    Dall’Asta, Elisa; Thoeni, Klaus; Santise, Marina; Forlani, Gianfranco; Giacomini, Anna; Roncella, Riccardo

    2015-01-01

    Due to the recent improvements of automatic measurement procedures in photogrammetry, multi-view 3D reconstruction technologies are becoming a favourite survey tool. Rapidly widening structure-from-motion (SfM) software packages offer significantly easier image processing workflows than traditional photogrammetry packages. However, while most orientation and surface reconstruction strategies will almost always succeed in any given task, estimating the quality of the result is, to some extent, still an open issue. An assessment of the precision and reliability of block orientation is necessary and should be included in every processing pipeline. Such a need was clearly felt from the results of close-range photogrammetric surveys of in situ full-scale and laboratory-scale experiments. In order to study the impact of the block control and the camera network design on the block orientation accuracy, a series of Monte Carlo simulations was performed. Two image block configurations were investigated: a single pseudo-normal strip and a circular highly-convergent block. The influence of surveying and data processing choices, such as the number and accuracy of the ground control points, autofocus and camera calibration was investigated. The research highlights the most significant aspects and processes to be taken into account for adequate in situ and laboratory surveys, when modern SfM software packages are used, and evaluates their effect on the quality of the results of the surface reconstruction. PMID:25855036

  11. Blind image quality assessment through anisotropy.

    PubMed

    Gabarda, Salvador; Cristóbal, Gabriel

    2007-12-01

    We describe an innovative methodology for determining the quality of digital images. The method is based on measuring the variance of the expected entropy of a given image upon a set of predefined directions. Entropy can be calculated on a local basis by using a spatial/spatial-frequency distribution as an approximation for a probability density function. The generalized Rényi entropy and the normalized pseudo-Wigner distribution (PWD) have been selected for this purpose. As a consequence, a pixel-by-pixel entropy value can be calculated, and therefore entropy histograms can be generated as well. The variance of the expected entropy is measured as a function of the directionality, and it has been taken as an anisotropy indicator. For this purpose, directional selectivity can be attained by using an oriented 1-D PWD implementation. Our main purpose is to show how such an anisotropy measure can be used as a metric to assess both the fidelity and quality of images. Experimental results show that an index such as this presents some desirable features that resemble those from an ideal image quality function, constituting a suitable quality index for natural images. Namely, in-focus, noise-free natural images have shown a maximum of this metric in comparison with other degraded, blurred, or noisy versions. This result provides a way of identifying in-focus, noise-free images from other degraded versions, allowing an automatic and nonreference classification of images according to their relative quality. It is also shown that the new measure is well correlated with classical reference metrics such as the peak signal-to-noise ratio. PMID:18059913
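
    A structural sketch of the anisotropy measure, with simple directional gradient histograms standing in for the oriented 1-D pseudo-Wigner distribution used in the paper (so the entropy estimates, bin count and angles are assumptions):

      import numpy as np
      from scipy.ndimage import rotate

      def renyi_entropy(p, alpha=3.0):
          """Generalized Renyi entropy of a normalized distribution p."""
          p = p[p > 0]
          return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

      def anisotropy_index(image, angles=(0, 45, 90, 135)):
          """Estimate an expected entropy per direction and return its
          variance: in-focus, noise-free images should maximize it."""
          entropies = []
          for a in angles:
              rot = rotate(image, a, reshape=False, mode="nearest")
              diff = np.abs(np.diff(rot, axis=1)).ravel()
              hist, _ = np.histogram(diff, bins=64)
              p = hist / max(hist.sum(), 1)
              entropies.append(renyi_entropy(p))
          return float(np.var(entropies))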

  12. Geometric assessment of image quality using digital image registration techniques

    NASA Technical Reports Server (NTRS)

    Tisdale, G. E.

    1976-01-01

    Image registration techniques were developed to perform a geometric quality assessment of multispectral and multitemporal image pairs. Based upon LANDSAT tapes, accuracies to a small fraction of a pixel were demonstrated. Because it is insensitive to the choice of registration areas, the technique is well suited to performance in an automatic system. It may be implemented at megapixel-per-second rates using a commercial minicomputer in combination with a special purpose digital preprocessor.

  13. APPLICATION OF AUTOMATIC DIFFERENTIATION FOR STUDYING THE SENSITIVITY OF NUMERICAL ADVECTION SCHEMES IN AIR QUALITY MODELS

    EPA Science Inventory

    In any simulation model, knowing the sensitivity of the system to the model parameters is of utmost importance. As part of an effort to build a multiscale air quality modeling system for a high performance computing and communication (HPCC) environment, we are exploring an automat...

  14. External quality assessment: best practice.

    PubMed

    James, David; Ames, Darren; Lopez, Berenice; Still, Rachel; Simpson, Wiliam; Twomey, Patrick

    2014-08-01

    There is a requirement for accredited laboratories to participate in external quality assessment (EQA) schemes, but there is wide variation in understanding as to what is required of laboratories and scheme providers in fulfilling this. This is not helped by the diversity of language used in connection with EQA: proficiency testing (PT), EQA schemes, and EQA programmes each have different meanings and offerings in the context of improving laboratory quality. We examine these differences and identify which factors are important in supporting quality within a clinical laboratory and which should influence the choice of EQA programme. Equally important is how EQA samples are handled within the laboratory and how the information provided by the EQA programme is used. EQA programmes are a key element of a laboratory's quality assurance framework, but laboratories should understand what their EQA programmes are capable of demonstrating, how they should be used within the laboratory, and how they support quality. EQA providers should be clear as to what type of programme they provide - PT, EQA scheme or EQA programme. PMID:24621574

  15. Quality of Life Effects of Automatic External Defibrillators in the Home: Results from the Home Automatic External Defibrillator Trial (HAT)

    PubMed Central

    Mark, Daniel B.; Anstrom, Kevin J.; McNulty, Steven E.; Flaker, Greg C.; Tonkin, Andrew M.; Smith, Warren M.; Toff, William D.; Dorian, Paul; Clapp-Channing, Nancy E.; Anderson, Jill; Johnson, George; Schron, Eleanor B.; Poole, Jeanne E.; Lee, Kerry L.; Bardy, Gust H.

    2010-01-01

    Background Public access automatic external defibrillators (AEDs) can save lives, but most deaths from out-of-hospital sudden cardiac arrest occur at home. The Home Automatic External Defibrillator Trial (HAT) found no survival advantage for adding a home AED to cardiopulmonary resuscitation (CPR) training for 7001 patients with a prior anterior wall myocardial infarction. Quality of life (QOL) outcomes for both the patient and spouse/companion were secondary endpoints. Methods A subset of 1007 study patients and their spouse/companions was randomly selected for ascertainment of QOL by structured interview at baseline and 12 and 24 months following enrollment. The primary QOL measures were the Medical Outcomes Study 36-Item Short-Form (SF-36) psychological well-being (reflecting anxiety and depression) and vitality (reflecting energy and fatigue) subscales. Results For patients and spouse/companions, the psychological well-being and vitality scales did not differ significantly between those randomly assigned an AED plus CPR training and controls who received CPR training only. None of the other QOL measures collected showed a clinically and statistically significant difference between treatment groups. Patients in the AED group were more likely to report being extremely or quite a bit reassured by their treatment assignment. Spouse/companions in the AED group reported being less often nervous about the possibility of using AED/CPR treatment than those in the CPR group. Conclusions Adding access to a home AED to CPR training did not affect quality of life either for patients with a prior anterior myocardial infarction or their spouse/companion but did provide more reassurance to the patients without increasing anxiety for spouse/companions. PMID:20362722

  16. Orion Entry Handling Qualities Assessments

    NASA Technical Reports Server (NTRS)

    Bihari, B.; Tiggers, M.; Strahan, A.; Gonzalez, R.; Sullivan, K.; Stephens, J. P.; Hart, J.; Law, H., III; Bilimoria, K.; Bailey, R.

    2011-01-01

    The Orion Command Module (CM) is a capsule designed to bring crew back from the International Space Station (ISS), the moon and beyond. The atmospheric entry portion of the flight is designed to be flown in autopilot mode for nominal situations. However, there exists the possibility for the crew to take over manual control in off-nominal situations. In these instances, the spacecraft must meet specific handling qualities criteria. To address these criteria, two separate assessments of the Orion CM's entry Handling Qualities (HQ) were conducted at NASA's Johnson Space Center (JSC) using the Cooper-Harper scale (Cooper & Harper, 1969). These assessments were conducted in the summers of 2008 and 2010 using the Advanced NASA Technology Architecture for Exploration Studies (ANTARES) six-degree-of-freedom, high-fidelity Guidance, Navigation, and Control (GN&C) simulation. This paper will address the specifics of the handling qualities criteria, the vehicle configuration, the scenarios flown, the simulation background and setup, crew interfaces and displays, piloting techniques, ratings and crew comments, pre- and post-flight briefings, lessons learned, and changes made to improve the overall system performance. The data collection tools, methods, data reduction and output reports will also be discussed. The objective of the 2008 entry HQ assessment was to evaluate the handling qualities of the CM during a lunar skip return. A lunar skip entry case was selected because it was considered the most demanding of all bank control scenarios. Even though skip entry is not planned to be flown manually, it was hypothesized that if a pilot could fly the harder skip entry case, then they could also fly a simpler loads-managed or ballistic (constant bank rate command) entry scenario. In addition, with the evaluation set up as multiple tasks within the entry case, handling qualities ratings collected in the evaluation could be used to assess other scenarios such as the constant bank angle

  17. Automatic Coding of Short Text Responses via Clustering in Educational Assessment

    ERIC Educational Resources Information Center

    Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank

    2016-01-01

    Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…

  18. Fovea based image quality assessment

    NASA Astrophysics Data System (ADS)

    Guo, Anan; Zhao, Debin; Liu, Shaohui; Cao, Guangyao

    2010-07-01

    Humans are the ultimate receivers of the visual information contained in an image, so a sound method of image quality assessment (IQA) should follow the properties of the human visual system (HVS). In recent years, IQA methods based on HVS models have slowly been replacing classical schemes such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Structural similarity (SSIM), regarded as one of the most popular HVS-based methods of full-reference IQA, clearly improves on traditional metrics; however, it does not perform well when the image structure is seriously destroyed or masked by noise. In this paper, a new, efficient fovea-based structural similarity image quality assessment (FSSIM) is proposed. It adaptively enlarges the distortions at attended positions and changes the relative importance of the three components in SSIM. FSSIM predicts the quality of an image in three steps. First, it computes the luminance, contrast and structure comparison terms; second, it computes the saliency map by extracting fovea information from the reference image using the features of the HVS; third, it pools the above three terms according to the processed saliency map. Finally, the widely used LIVE IQA database is employed to evaluate the performance of FSSIM. Experimental results indicate that the consistency and relevance between FSSIM and the mean opinion score (MOS) are clearly better than those of SSIM and PSNR.
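
    The abstract leaves the pooling details open; a minimal sketch of saliency-weighted pooling of the three SSIM comparison terms (equal exponents are an assumption, since FSSIM re-weights the components adaptively):

      import numpy as np

      def fssim_like_pool(l_map, c_map, s_map, saliency_map):
          """Combine per-pixel luminance, contrast and structure comparison
          maps, then pool with a fovea-based saliency map so distortions
          at attended positions weigh more."""
          ssim_map = l_map * c_map * s_map
          w = saliency_map / (saliency_map.sum() + 1e-12)
          return float(np.sum(ssim_map * w))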

  19. Assessing Children's Home Language Environments Using Automatic Speech Recognition Technology

    ERIC Educational Resources Information Center

    Greenwood, Charles R.; Thiemann-Bourque, Kathy; Walker, Dale; Buzhardt, Jay; Gilkerson, Jill

    2011-01-01

    The purpose of this research was to replicate and extend some of the findings of Hart and Risley using automatic speech processing instead of human transcription of language samples. The long-term goal of this work is to make the current approach to speech processing possible by researchers and clinicians working on a daily basis with families and…

  20. Automatically Assessing Lexical Sophistication: Indices, Tools, Findings, and Application

    ERIC Educational Resources Information Center

    Kyle, Kristopher; Crossley, Scott A.

    2015-01-01

    This study explores the construct of lexical sophistication and its applications for measuring second language lexical and speaking proficiency. In doing so, the study introduces the Tool for the Automatic Analysis of LExical Sophistication (TAALES), which calculates text scores for 135 classic and newly developed lexical indices related to word…

  1. Carbon Nanotube Material Quality Assessment

    NASA Technical Reports Server (NTRS)

    Yowell, Leonard; Arepalli, Sivaram; Sosa, Edward; Niolaev, Pavel; Gorelik, Olga

    2006-01-01

    The nanomaterial activities at NASA Johnson Space Center focus on carbon nanotube production, characterization and applications for aerospace systems. Single-wall carbon nanotubes are produced by arc and laser methods. Characterization of the nanotube material is performed using the NASA JSC protocol, developed by combining the analytical techniques of SEM, TEM, UV-VIS-NIR absorption, Raman, and TGA. A possible addition of other techniques, such as XPS and ICP, to the existing protocol will be discussed. Changes in the quality of the material collected in different regions of the arc and laser production chambers are assessed using the original JSC protocol. The observed variations indicate different growth conditions in different regions of the production chambers.

  2. Assessing the Need for Referral in Automatic Diabetic Retinopathy Detection.

    PubMed

    Pires, Ramon; Jelinek, Herbert F; Wainer, Jacques; Goldenstein, Siome; Valle, Eduardo; Rocha, Anderson

    2013-12-01

    Emerging technologies in health care aim at reducing unnecessary visits to medical specialists, minimizing the overall cost of treatment, and optimizing the number of patients seen by each doctor. This paper explores image recognition for the screening of diabetic retinopathy (DR), a complication of diabetes that can lead to blindness if not discovered in its initial stages. Many previous reports on DR imaging focus on the segmentation of the retinal image, on quality assessment, and on the analysis of the presence of DR-related lesions. Although such work has advanced the detection of individual DR lesions from retinal images, the simple presence of a lesion is not enough to decide on the need for referral of a patient. Deciding whether a patient should be referred to a doctor is an essential requirement for the deployment of an automated screening tool for rural and remote communities. We introduce an algorithm to make that decision based on the fusion of results by metaclassification. The input of the metaclassifier is the output of several lesion detectors, creating a powerful high-level feature representation for the retinal images. We explore alternatives for the bag-of-visual-words (BoVW)-based lesion detectors, which depend critically on the choices made in coding and pooling the low-level local descriptors. The final classification approach achieved an area under the curve of 93.4% using SOFT-MAX BoVW (soft-assignment coding/max pooling), without the need to normalize the high-level feature vector of scores. PMID:23963192
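
    A minimal sketch of the metaclassification step, assuming an SVM as the second-stage classifier and toy data (the paper's detectors, features and classifier details are not reproduced here):

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      scores = rng.random((200, 5))             # 200 images x 5 lesion detectors
      referral = (scores.max(axis=1) > 0.8).astype(int)  # toy referral labels

      # The detector outputs form a high-level feature vector; a
      # second-stage classifier makes the referral decision.
      meta = SVC(probability=True).fit(scores, referral)
      p_referral = meta.predict_proba(scores[:3])[:, 1]  # referral probability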

  3. Towards Quality Assessment in an EFL Programme

    ERIC Educational Resources Information Center

    Ali, Holi Ibrahim Holi; Al Ajmi, Ahmed Ali Saleh

    2013-01-01

    Assessment is central in education and the teaching-learning process. This study attempts to explore the perspectives and views about quality assessment among teachers of English as a Foreign Language (EFL), and to find ways of promoting quality assessment. Quantitative methodology was used to collect data. To answer the study questions, a…

  4. Perceptual Quality Assessment for Multi-Exposure Image Fusion.

    PubMed

    Ma, Kede; Zeng, Kai; Wang, Zhou

    2015-11-01

    Multi-exposure image fusion (MEF) is considered an effective quality enhancement technique widely adopted in consumer electronics, but little work has been dedicated to the perceptual quality assessment of multi-exposure fused images. In this paper, we first build an MEF database and carry out a subjective user study to evaluate the quality of images generated by different MEF algorithms. There are several useful findings. First, considerable agreement has been observed among human subjects on the quality of MEF images. Second, no single state-of-the-art MEF algorithm produces the best quality for all test images. Third, the existing objective quality models for general image fusion are very limited in predicting perceived quality of MEF images. Motivated by the lack of appropriate objective models, we propose a novel objective image quality assessment (IQA) algorithm for MEF images based on the principle of the structural similarity approach and a novel measure of patch structural consistency. Our experimental results on the subjective database show that the proposed model well correlates with subjective judgments and significantly outperforms the existing IQA models for general image fusion. Finally, we demonstrate the potential application of the proposed model by automatically tuning the parameters of MEF algorithms. PMID:26068317

  5. Healthcare quality maturity assessment model based on quality drivers.

    PubMed

    Ramadan, Nadia; Arafeh, Mazen

    2016-04-18

    Purpose - Healthcare providers differ in their readiness and maturity levels regarding quality and quality management systems applications. The purpose of this paper is to serve as a useful quantitative quality maturity-level assessment tool for healthcare organizations. Design/methodology/approach - The model proposes five quality maturity levels (chaotic, primitive, structured, mature and proficient) based on six quality drivers: top management, people, operations, culture, quality focus and accreditation. Findings - Healthcare managers can apply the model to identify the status quo, quality shortcomings and evaluating ongoing progress. Practical implications - The model has been incorporated in an interactive Excel worksheet that visually displays the quality maturity-level risk meter. The tool has been applied successfully to local hospitals. Originality/value - The proposed six quality driver scales appear to measure healthcare provider maturity levels on a single quality meter. PMID:27120510
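
    A minimal sketch of a driver-based maturity score, assuming equal weighting of the six drivers and 20-point bands (the paper's Excel tool encodes its own scales, which are not reproduced in the abstract):

      LEVELS = ["chaotic", "primitive", "structured", "mature", "proficient"]
      DRIVERS = ["top management", "people", "operations",
                 "culture", "quality focus", "accreditation"]

      def maturity_level(scores: dict) -> str:
          """Map six quality-driver scores (each 0-100) to one of the five
          maturity levels."""
          mean_score = sum(scores[d] for d in DRIVERS) / len(DRIVERS)
          return LEVELS[min(int(mean_score // 20), len(LEVELS) - 1)]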

  6. Students' Feedback Preferences: How Do Students React to Timely and Automatically Generated Assessment Feedback?

    ERIC Educational Resources Information Center

    Bayerlein, Leopold

    2014-01-01

    This study assesses whether or not undergraduate and postgraduate accounting students at an Australian university differentiate between timely feedback and extremely timely feedback, and whether or not the replacement of manually written formal assessment feedback with automatically generated feedback influences students' perception of…

  7. Assessment and Quality Social Studies

    ERIC Educational Resources Information Center

    Savage, Tom V.

    2003-01-01

    Those anonymous individuals who develop high-stakes tests by which educational quality is measured exercise great influence in defining educational quality. In this article, the author examines the impact of high-stakes testing on the welfare of the children and the quality of social studies instruction. He presents the benefits and drawbacks of…

  8. Automatic quality improvement reports in the intensive care unit: One step closer toward meaningful use

    PubMed Central

    Dziadzko, Mikhail A; Thongprayoon, Charat; Ahmed, Adil; Tiong, Ing C; Li, Man; Brown, Daniel R; Pickering, Brian W; Herasevich, Vitaly

    2016-01-01

    AIM: To examine the feasibility and validity of electronic generation of quality metrics in the intensive care unit (ICU). METHODS: This minimal risk observational study was performed at an academic tertiary hospital. The Critical Care Independent Multidisciplinary Program at Mayo Clinic identified and defined 11 key quality metrics. These metrics were automatically calculated using ICU DataMart, a near-real time copy of all ICU electronic medical record (EMR) data. The automatic report was compared with data from a comprehensive EMR review by a trained investigator. Data was collected for 93 randomly selected patients admitted to the ICU during April 2012 (10% of admitted adult population). This study was approved by the Mayo Clinic Institution Review Board. RESULTS: All types of variables needed for metric calculations were found to be available for manual and electronic abstraction, except information for availability of free beds for patient-specific time-frames. There was 100% agreement between electronic and manual data abstraction for ICU admission source, admission service, and discharge disposition. The agreement between electronic and manual data abstraction of the time of ICU admission and discharge were 99% and 89%. The time of hospital admission and discharge were similar for both the electronically and manually abstracted datasets. The specificity of the electronically-generated report was 93% and 94% for invasive and non-invasive ventilation use in the ICU. One false-positive result for each type of ventilation was present. The specificity for ICU and in-hospital mortality was 100%. Sensitivity was 100% for all metrics. CONCLUSION: Our study demonstrates excellent accuracy of electronically-generated key ICU quality metrics. This validates the feasibility of automatic metric generation. PMID:27152259
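
    As a small illustration of the validation comparison, percent agreement between the two abstraction routes can be computed as follows (a sketch, not the study's code):

      def percent_agreement(electronic, manual):
          """Percent agreement between electronically generated and
          manually abstracted values of one quality metric."""
          matches = sum(e == m for e, m in zip(electronic, manual))
          return 100.0 * matches / len(manual)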

  9. Towards the Real-Time Evaluation of Collaborative Activities: Integration of an Automatic Rater of Collaboration Quality in the Classroom from the Teacher's Perspective

    ERIC Educational Resources Information Center

    Chounta, Irene-Angelica; Avouris, Nikolaos

    2016-01-01

    This paper presents the integration of a real time evaluation method of collaboration quality in a monitoring application that supports teachers in class orchestration. The method is implemented as an automatic rater of collaboration quality and studied in a real time scenario of use. We argue that automatic and semi-automatic methods which…

  10. Automatic assessment of scintmammographic images using a novelty filter.

    PubMed Central

    Costa, M.; Moura, L.

    1995-01-01

    99mTc-sestamibi scintmammograms provide a powerful non-invasive means for detecting breast cancer at early stages. This paper describes an automatic method for detecting breast tumors in such mammograms. The proposed method not only detects tumors but also classifies non-tumor images as "normal" or "diffuse increased uptake" mammograms. The detection method makes use of Kohonen's "novelty filter". In this technique, an orthogonal vector basis is created from a set of normal images. Test images presented to the detection method are described as a linear combination of the images in the vector basis. Assuming that the image basis is representative of normal patterns, it can be expected that there will be no major differences between a normal test image and its corresponding linear-combination image. However, if the test image presents an abnormal pattern, the "abnormalities" will show up as the difference between the original test image and the image built from the vector basis. In other words, the abnormality cannot be explained by the set of normal images and comes up as a "novelty." An important part of the proposed method is the set of steps taken to standardize images before they can be used as part of the vector basis. Standardization is the keystone to the success of the proposed method, as the novelty filter is very sensitive to changes in shape and alignment. PMID:8563342
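
    A minimal sketch of the novelty filter, using an SVD in place of explicit Gram-Schmidt orthogonalization to span the "normal" subspace (an implementation choice assumed here, not taken from the paper):

      import numpy as np

      def novelty_filter(normal_images, test_image):
          """Project a standardized test image onto the subspace spanned
          by vectorized normal images and return the residual, i.e. the
          part of the image that the normal set cannot explain."""
          X = np.stack([im.ravel() for im in normal_images], axis=1)
          U, _, _ = np.linalg.svd(X, full_matrices=False)  # orthonormal basis
          x = test_image.ravel().astype(float)
          residual = x - U @ (U.T @ x)                     # the "novelty"
          return residual.reshape(test_image.shape)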

  11. Quality Metrics of Semi Automatic DTM from Large Format Digital Camera

    NASA Astrophysics Data System (ADS)

    Narendran, J.; Srinivas, P.; Udayalakshmi, M.; Muralikrishnan, S.

    2014-11-01

    High-resolution digital images from the Ultracam-D Large Format Digital Camera (LFDC) were used for near-automatic DTM generation. In the past, manual methods of DTM generation were used, which are time consuming and labour intensive. In this study, the LFDC was used in synergy with an accurate position and orientation system and with processes such as image matching algorithms, distributed processing and filtering techniques for near-automatic DTM generation. Traditionally, DTM accuracy is reported using check points collected in the field, which are limited in number, time consuming and costly to obtain. This paper discusses the reliability of the near-automatic DTM generated from Ultracam-D imagery for an operational project covering an area of nearly 600 sq km, using 21,000 check points captured stereoscopically by experienced operators. The reliability of the DTM for the three study areas with different morphology is presented using this large number of stereo check points and parameters related to the statistical distribution of the residuals, such as skewness, kurtosis, standard deviation and linear error at the 90% confidence interval. The residuals obtained for the three areas follow a normal distribution, in agreement with the majority of standards on positional accuracy. Quality metrics in terms of reliability were computed for the generated DTMs, and the tables and graphs show the potential of the Ultracam-D for a semi-automatic DTM generation process over different terrain types.
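
    The residual statistics named above map directly onto standard routines; a sketch (LE90 taken as the 90th percentile of absolute residuals, a common convention assumed here):

      import numpy as np
      from scipy.stats import skew, kurtosis

      def dtm_residual_metrics(residuals):
          """Standard deviation, skewness, kurtosis and linear error at
          90% confidence for DTM check-point residuals."""
          r = np.asarray(residuals, dtype=float)
          return {"std": float(r.std()),
                  "skewness": float(skew(r)),
                  "kurtosis": float(kurtosis(r)),
                  "LE90": float(np.percentile(np.abs(r), 90.0))}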

  12. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    NASA Astrophysics Data System (ADS)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
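
    A minimal sketch of the first, box-whisker method on a single feature (k = 1.5 is the usual whisker convention, assumed here):

      import numpy as np

      def box_whisker_outliers(values, k=1.5):
          """Indices of feature values outside the whisker fences
          Q1 - k*IQR and Q3 + k*IQR, flagged as candidate segmentation
          failures."""
          q1, q3 = np.percentile(values, [25, 75])
          iqr = q3 - q1
          lo, hi = q1 - k * iqr, q3 + k * iqr
          return [i for i, v in enumerate(values) if v < lo or v > hi]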

  13. Assessing Quality in Home Visiting Programs

    ERIC Educational Resources Information Center

    Korfmacher, Jon; Laszewski, Audrey; Sparr, Mariel; Hammel, Jennifer

    2013-01-01

    Defining quality and designing a quality assessment measure for home visitation programs is a complex and multifaceted undertaking. This article summarizes the process used to create the Home Visitation Program Quality Rating Tool (HVPQRT) and identifies next steps for its development. The HVPQRT measures both structural and dynamic features of…

  14. SERI QC Solar Data Quality Assessment Software

    SciTech Connect

    1994-12-31

    SERI QC is a mathematical software package that assesses the quality of solar radiation data. The SERI QC software is a function written in the C programming language. IT IS NOT A STANDALONE SOFTWARE APPLICATION. The user must write the calling application that requires quality assessment of solar data. The C function returns data quality flags to the calling program. A companion program, QCFIT, is a standalone Windows application that provides support files for the SERI QC function (data quality boundaries). The QCFIT software can also be used as an analytical tool for visualizing solar data quality independent of the SERI QC function.

  15. SERI QC Solar Data Quality Assessment Software

    Energy Science and Technology Software Center (ESTSC)

    1994-12-31

    SERI QC is a mathematical software package that assesses the quality of solar radiation data. The SERI QC software is a function written in the C programming language. IT IS NOT A STANDALONE SOFTWARE APPLICATION. The user must write the calling application that requires quality assessment of solar data. The C function returns data quality flags to the calling program. A companion program, QCFIT, is a standalone Windows application that provides support files for the SERI QC function (data quality boundaries). The QCFIT software can also be used as an analytical tool for visualizing solar data quality independent of the SERI QC function.

  16. Automatic Severity Assessment of Dysarthria using State-Specific Vectors.

    PubMed

    Sriranjani, R; Umesh, S; Reddy, M Ramasubba

    2015-01-01

    In this paper, a novel approach to assess the severity of dysarthria using the state-specific vector (SSV) of the phone-cluster adaptive training (phone-CAT) acoustic modeling technique is proposed. The dominant component of the SSV represents the actual pronunciations of a speaker. By comparing the dominant component for an unimpaired speaker and each dysarthric speaker, a phone confusion matrix is formed. The diagonal elements of the matrix capture the number of correct pronunciations for each dysarthric speaker. As the degree of impairment increases, the number of phones correctly pronounced by the speaker decreases. Thus the trace of the confusion matrix can be used as an objective cue to assess different severity levels of dysarthria based on a threshold rule. Our proposed objective measure correlates with the standard Frenchay dysarthria assessment scores at 74% on the Nemours database. The measure also correlates with intelligibility scores at 82% on the Universal Access dysarthric speech database. PMID:25996705
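
    A minimal sketch of the trace-based cue (normalizing by the total phone count is an assumption added for comparability across speakers):

      import numpy as np

      def severity_cue(phone_confusion):
          """Fraction of correctly pronounced phones from a phone confusion
          matrix; lower values suggest more severe dysarthria."""
          m = np.asarray(phone_confusion, dtype=float)
          return float(np.trace(m) / m.sum())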

  17. Assessing Mathematics Automatically Using Computer Algebra and the Internet

    ERIC Educational Resources Information Center

    Sangwin, Chris

    2004-01-01

    This paper reports some recent developments in mathematical computer-aided assessment which employs computer algebra to evaluate students' work using the Internet. Technical and educational issues raised by this use of computer algebra are addressed. Working examples from core calculus and algebra which have been used with first year university…

  18. Cell Processing Engineering for Regenerative Medicine : Noninvasive Cell Quality Estimation and Automatic Cell Processing.

    PubMed

    Takagi, Mutsumi

    2016-01-01

    Cell processing engineering for regenerative medicine, including automatic cell processing and noninvasive quality estimation of adherent mammalian cells, is reviewed. Automatic cell processing, necessary for the industrialization of regenerative medicine, is introduced. Cell quality, such as cell heterogeneity, should be estimated noninvasively before transplantation to the patient, because cultured cells are usually heterogeneous rather than homogeneous and most regenerative medicine protocols are autologous. The differentiation level can be estimated by two-dimensional cell morphology analysis using a conventional phase-contrast microscope. The phase-shifting laser microscope (PLM) can determine the laser phase shift, caused by the laser transmitted through the cell, at every pixel in a view, and may be more noninvasive and more useful than the atomic force microscope and the digital holographic microscope. Noninvasive determination of the laser phase shift of a cell using a PLM was carried out to determine three-dimensional cell morphology and to estimate the cell cycle phase of each adherent cell and the mean proliferation activity of a cell population. Noninvasive discrimination of cancer cells from normal cells by measuring the phase shift was performed based on the difference in cytoskeleton density. Chemical analysis of the culture supernatant is also useful to estimate the differentiation level of a cell population. A probe beam, an infrared beam, and Raman spectroscopy are useful for diagnosing the viability, apoptosis, and differentiation of each adherent cell. PMID:25373455

  19. Rendered virtual view image objective quality assessment

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Li, Xiangchun; Zhang, Yi; Peng, Kai

    2013-08-01

    Research on rendered virtual view image (RVVI) objective quality assessment is important for integrated imaging systems and image quality assessment (IQA). Traditional IQA algorithms cannot be applied directly at the system receiver side due to inter-view displacement and the absence of an original reference. This study proposes a block-based neighbor reference (NbR) IQA framework for RVVI. The neighbor views used for rendering are employed for quality assessment in the proposed framework. A symphonious factor handling noise and inter-view displacement is defined and applied to evaluate the contribution of the obtained quality index in each block pair. A three-stage experiment scheme is also presented to test the proposed framework and to evaluate its homogeneity performance compared with full-reference IQA. Experimental results show the proposed framework is useful for RVVI objective quality assessment at the system receiver side and for benchmarking different rendering algorithms.

  20. Automatic Assessment of Complex Task Performance in Games and Simulations. CRESST Report 775

    ERIC Educational Resources Information Center

    Iseli, Markus R.; Koenig, Alan D.; Lee, John J.; Wainess, Richard

    2010-01-01

    Assessment of complex task performance is crucial to evaluating personnel in critical job functions such as Navy damage control operations aboard ships. Games and simulations can be instrumental in this process, as they can present a broad range of complex scenarios without involving harm to people or property. However, "automatic" performance…

  1. Automatic limb identification and sleeping parameters assessment for pressure ulcer prevention.

    PubMed

    Baran Pouyan, Maziyar; Birjandtalab, Javad; Nourani, Mehrdad; Matthew Pompeo, M D

    2016-08-01

    Pressure ulcers (PUs) are common among vulnerable patients such as the elderly, the bedridden, and diabetics. PUs are very painful for patients and costly for hospitals and nursing homes. Assessment of sleeping parameters on at-risk limbs is critical for ulcer prevention. An effective assessment depends on automatic identification and tracking of at-risk limbs. An accurate limb identification can be used to analyze the pressure distribution and assess risk for each limb. In this paper, we propose a graph-based clustering approach to extract the body limbs from the pressure data collected by a commercial pressure map system. A robust signature-based technique is employed to automatically label each limb. Finally, an assessment technique is applied to evaluate the stress experienced by each limb over time. The experimental results indicate high performance, with more than 94% average accuracy for the proposed approach. PMID:27268736
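
    As an illustration of such a pipeline, a simplified Python sketch: thresholding plus connected-component labelling stands in for the paper's graph-based clustering, and per-region stress is accumulated as the mean pressure of each detected region over time; the threshold value is an assumption.

    import numpy as np
    from scipy import ndimage

    def limb_regions(frame, pressure_thresh=10.0):
        # Candidate limb regions: connected components of high pressure.
        labels, n = ndimage.label(frame > pressure_thresh)
        return labels, n

    def region_stress(frames, pressure_thresh=10.0):
        # Mean pressure per detected region, accumulated frame by frame.
        stress = []
        for frame in frames:
            labels, n = limb_regions(frame, pressure_thresh)
            if n > 0:
                stress.append(ndimage.mean(frame, labels=labels,
                                           index=range(1, n + 1)))
        return stress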

  2. Automatic Assessment and Reduction of Noise using Edge Pattern Analysis in Non-Linear Image Enhancement

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.; Hines, Glenn D.

    2004-01-01

    Noise is the primary visibility limit in the process of non-linear image enhancement, and is no longer a statistically stable additive noise in the post-enhancement image. Therefore novel approaches are needed to both assess and reduce spatially variable noise at this stage in overall image processing. Here we will examine the use of edge pattern analysis both for automatic assessment of spatially variable noise and as a foundation for new noise reduction methods.

  3. Automatic assessment of macular edema from color retinal images.

    PubMed

    Deepak, K Sai; Sivaswamy, Jayanthi

    2012-03-01

    Diabetic macular edema (DME) is an advanced symptom of diabetic retinopathy and can lead to irreversible vision loss. In this paper, a two-stage methodology for the detection and classification of DME severity from color fundus images is proposed. DME detection is carried out via a supervised learning approach using normal fundus images. A feature extraction technique is introduced to capture the global characteristics of the fundus images and discriminate the normal from DME images. Disease severity is assessed using a rotational asymmetry metric by examining the symmetry of the macular region. The performance of the proposed methodology and features is evaluated against several publicly available datasets. The detection performance has a sensitivity of 100% with specificity between 74% and 90%. Cases needing immediate referral are detected with a sensitivity of 100% and specificity of 97%. The severity classification accuracy is 81% for moderate cases and 100% for severe cases. These results establish the effectiveness of the proposed solution. PMID:22167598

  4. Towards Automatic Diabetes Case Detection and ABCS Protocol Compliance Assessment

    PubMed Central

    Mishra, Ninad K.; Son, Roderick Y.; Arnzen, James J.

    2012-01-01

    Objective According to the American Diabetes Association, the implementation of the standards of care for diabetes has been suboptimal in most clinical settings. Diabetes is a disease that had a total estimated cost of $174 billion in 2007 for an estimated diabetes-affected population of 17.5 million in the United States. With the advent of electronic medical records (EMR), tools to analyze data residing in the EMR for healthcare surveillance can help reduce the burdens experienced today. This study was primarily designed to evaluate the efficacy of employing clinical natural language processing to analyze discharge summaries for evidence indicating a presence of diabetes, as well as to assess diabetes protocol compliance and high risk factors. Methods Three sets of algorithms were developed to analyze discharge summaries for: (1) identification of diabetes, (2) protocol compliance, and (3) identification of high risk factors. The algorithms utilize a common natural language processing framework that extracts relevant discourse evidence from the medical text. Evidence utilized in one or more of the algorithms includes assertion of the disease and associated findings in medical text, as well as numerical clinical measurements and prescribed medications. Results The diabetes classifier was successful at classifying reports for the presence and absence of diabetes. Evaluated against 444 discharge summaries, the classifier's performance included macro and micro F-scores of 0.9698 and 0.9865, respectively. Furthermore, the protocol compliance and high risk factor classifiers showed promising results, with most F-measures exceeding 0.9. Conclusions The presented approach accurately identified diabetes in medical discharge summaries and showed promise with regards to assessment of protocol compliance and high risk factors. Utilizing free-text analytic techniques on medical text can complement clinical-public health decision support by identifying cases and high risk factors.

  5. Continuous assessment of perceptual image quality

    NASA Astrophysics Data System (ADS)

    Hamberg, Roelof; de Ridder, Huib

    1995-12-01

    The study addresses whether subjects are able to assess the perceived quality of an image sequence continuously. To this end, a new method for assessing time-varying perceptual image quality is presented by which subjects continuously indicate the perceived strength of image quality by moving a slider along a graphical scale. The slider's position on this scale is sampled every second. In this way, temporal variations in quality can be monitored quantitatively, and a means is provided by which differences between, for example, alternative transmission systems can be analyzed in an informative way. The usability of this method is illustrated by an experiment in which, for a period of 815 s, subjects assessed the quality of still pictures comprising time-varying degrees of sharpness. Copyright (c) 1995 Optical Society of America

  6. Statistical quality assessment of a fingerprint

    NASA Astrophysics Data System (ADS)

    Hwang, Kyungtae

    2004-08-01

    The quality of a fingerprint is essential to the performance of AFIS (Automatic Fingerprint Identification System). Such quality may be classified by the clarity and regularity of ridge-valley structures. One may calculate the thickness of ridges and valleys to measure this clarity and regularity. However, calculating a thickness is not feasible in a poor quality image, especially in severely damaged images that contain broken ridges (or valleys). In order to overcome this difficulty, the proposed approach employs statistical properties in a local block, namely the mean and spread of the thickness of both ridge and valley. The mean value is used for determining whether a fingerprint is wet or dry. For example, black pixels are dominant if a fingerprint is wet, and the average ridge thickness is larger than that of the valleys; the converse holds for a dry fingerprint. In addition, the standard deviation is used to determine the severity of damage. In this study, the quality is divided into three categories based on the two statistical properties mentioned above: wet, good, and dry. The number of low-quality blocks is used to measure the global quality of a fingerprint. In addition, the distribution of poor blocks is measured using Euclidean distances between groups of poor blocks. With this scheme, locally condensed poor blocks decrease the overall quality of an image. Experimental results on fingerprint images captured by optical devices as well as by a rolling method show that the wet and dry parts of images were successfully detected. Enhancing an image by employing morphology techniques that modify the detected poor-quality blocks is illustrated in section 3. However, more work needs to be done on designing a scheme to incorporate the number of poor blocks and their distribution into a global quality measure.
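
    A sketch of the block statistic described above, under stated assumptions (a binarized block with 1 = ridge pixel; the ratio cut-off is ours, not the paper's):

    import numpy as np

    def block_label(block, ratio=1.5):
        # block: 2-D binary array, 1 = ridge. Returns 'wet' | 'dry' | 'good'.
        ridge, valley = [], []
        for row in np.asarray(block, dtype=np.int8):
            starts = np.r_[0, np.flatnonzero(np.diff(row)) + 1]
            lengths = np.diff(np.r_[starts, row.size])   # run lengths per row
            for value, length in zip(row[starts], lengths):
                (ridge if value == 1 else valley).append(length)
        if not ridge or not valley:
            return 'good'
        r, v = np.mean(ridge), np.mean(valley)
        if r > ratio * v:
            return 'wet'   # thick ridges: black pixels dominate
        if v > ratio * r:
            return 'dry'   # thick valleys: white pixels dominate
        return 'good'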

  7. Automated FMV image quality assessment based on power spectrum statistics

    NASA Astrophysics Data System (ADS)

    Kalukin, Andrew

    2015-05-01

    Factors that degrade image quality in video and other sensor collections, such as noise, blurring, and poor resolution, also affect the spatial power spectrum of imagery. Prior research in human vision and image science from the last few decades has shown that the image power spectrum can be useful for assessing the quality of static images. The research in this article explores the possibility of using the image power spectrum to automatically evaluate full-motion video (FMV) imagery frame by frame. This procedure makes it possible to identify anomalous images and scene changes, and to keep track of gradual changes in quality as collection progresses. This article will describe a method to apply power spectral image quality metrics for images subjected to simulated blurring, blocking, and noise. As a preliminary test on videos from multiple sources, image quality measurements for image frames from 185 videos are compared to analyst ratings based on ground sampling distance. The goal of the research is to develop an automated system for tracking image quality during real-time collection, and to assign ratings to video clips for long-term storage, calibrated to standards such as the National Imagery Interpretability Rating System (NIIRS).
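
    One plausible form of such a metric, sketched here under our own assumptions: the radially averaged power spectrum is computed per frame and a high-to-low band power ratio is tracked, since blur depresses high-frequency power while noise inflates it; the band edges are illustrative, not values from the article.

    import numpy as np

    def radial_power_spectrum(frame):
        # Average the 2-D power spectrum over annuli of integer radius.
        f = np.fft.fftshift(np.fft.fft2(frame))
        power = np.abs(f) ** 2
        h, w = frame.shape
        y, x = np.indices((h, w))
        r = np.hypot(y - h // 2, x - w // 2).astype(int)
        return np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())

    def band_ratio(frame, low=0.1, high=0.4):
        # Quality index: high-frequency power relative to low-frequency power.
        spec = radial_power_spectrum(frame)[1:]   # drop the DC bin
        n = spec.size
        return spec[int(high * n):].mean() / spec[:int(low * n)].mean()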

  8. Assessing quality across healthcare subsystems in Mexico.

    PubMed

    Puig, Andrea; Pagán, José A; Wong, Rebeca

    2009-01-01

    Recent healthcare reform efforts in Mexico have focused on the need to improve the efficiency and equity of a fragmented healthcare system. In light of these reform initiatives, there is a need to assess whether healthcare subsystems are effective at providing high-quality healthcare to all Mexicans. Nationally representative household survey data from the 2006 Encuesta Nacional de Salud y Nutrición (National Health and Nutrition Survey) were used to assess perceived healthcare quality across different subsystems. Using a sample of 7234 survey respondents, we found evidence of substantial heterogeneity in healthcare quality assessments across healthcare subsystems favoring private providers over social security institutions. These differences across subsystems remained even after adjusting for socioeconomic, demographic, and health factors. Our analysis suggests that improvements in efficiency and equity can be achieved by assessing the factors that contribute to heterogeneity in quality across subsystems. PMID:19305224

  9. Quality Assessment in the Blog Space

    ERIC Educational Resources Information Center

    Schaal, Markus; Fidan, Guven; Muller, Roland M.; Dagli, Orhan

    2010-01-01

    Purpose: The purpose of this paper is the presentation of a new method for blog quality assessment. The method uses the temporal sequence of link creation events between blogs as an implicit source for the collective tacit knowledge of blog authors about blog quality. Design/methodology/approach: The blog data are processed by the novel method for…

  10. Quality indicators and quality assessment in child health

    PubMed Central

    Kavanagh, Patricia L.; Adams, William G.; Wang, C. Jason

    2009-01-01

    Quality indicators are systematically developed statements that can be used to assess the appropriateness of specific healthcare decisions, services and outcomes. In this review, we highlight the range and type of indicators that have been developed for children in the UK and US by prominent governmental agencies and private organizations. We also classify these indicators in an effort to identify areas of child health that may lack quality measurement activity. We review the current state of health information technology in both countries since these systems are vital to quality efforts. Finally, we propose several recommendations to advance the quality indicator development agenda for children. The convergence of quality measurement and indicator development, a growing scientific evidence base and integrated information systems in healthcare may lead to substantial improvements for child health in the 21st century. PMID:19307196

  11. Mobile sailing robot for automatic estimation of fish density and monitoring water quality

    PubMed Central

    2013-01-01

    Introduction The paper presents the methodology and the algorithm developed to analyze sonar images focused on fish detection in small water bodies and measurement of their parameters: volume, depth and GPS location. The final results are stored in a table and can be exported to any numerical environment for further analysis. Material and method The measurement method for estimating the number of fish using the automatic robot is based on a sequential calculation of the number of occurrences of fish on the set trajectory. The data analysis from the sonar concerned automatic recognition of fish using methods of image analysis and processing. Results An image analysis algorithm, a mobile robot together with its control in the 2.4 GHz band, and fully cryptographic communication with the data archiving station were developed as part of this study. For the three model fish ponds where verification of fish catches was carried out (548, 171 and 226 individuals), the measurement error of the described method did not exceed 8%. Summary The robot, together with the developed software, can work remotely in a variety of harsh weather and environmental conditions, is fully automated and can be remotely controlled over the Internet. The designed system enables spatial location of fish (GPS coordinates and depth). The purpose of the robot is non-invasive measurement of the number of fish in water reservoirs and measurement of the quality of drinking water consumed by humans, especially in situations where local sources of pollution could have a significant impact on the quality of water collected for treatment and where access to these places is difficult. Systematically used and equipped with the appropriate sensors, the robot can be part of an early warning system against pollution of water used by humans (drinking water, natural swimming pools) which can be dangerous to their health. PMID:23815984

  12. Soil Quality Assessment: Past, Present, and Future

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil quality assessment can help land owners and managers appreciate the multiple functions that soils perform and thus improve the resource management decisions they make. Our objective is to show how the Soil Management Assessment Framework (SMAF) can complement the Soil Conditioning Index (SCI) a...

  13. National Water-Quality Assessment Program - Source Water-Quality Assessments

    USGS Publications Warehouse

    Delzer, Gregory C.; Hamilton, Pixie A.

    2007-01-01

    In 2002, the National Water-Quality Assessment (NAWQA) Program of the U.S. Geological Survey (USGS) implemented Source Water-Quality Assessments (SWQAs) to characterize the quality of selected rivers and aquifers used as a source of supply to community water systems in the United States. These assessments are intended to complement drinking-water monitoring required by Federal, State, and local programs, which focus primarily on post-treatment compliance monitoring.

  14. ANSS Backbone Station Quality Assessment

    NASA Astrophysics Data System (ADS)

    Leeds, A.; McNamara, D.; Benz, H.; Gee, L.

    2006-12-01

    In this study we assess the ambient noise levels of the broadband seismic stations within the United States Geological Survey's (USGS) Advanced National Seismic System (ANSS) backbone network. The backbone consists of stations operated by the USGS as well as several regional network stations operated by universities. We also assess the improved detection capability of the network due to the installation of 13 additional backbone stations and the upgrade of 26 existing stations funded by the EarthScope initiative. This assessment makes use of probability density functions (PDF) of power spectral densities (PSD) (after McNamara and Buland, 2004) computed by a continuous noise monitoring system developed by the USGS-ANSS and the Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC). We compute the median and mode of the PDF distribution and rank the stations relative to the Peterson low-noise model (LNM) (Peterson, 1993) for 11 different period bands. The power of the method lies in the fact that there is no need to screen the data for system transients, earthquakes or general data artifacts, since these map into a background probability level. Previous studies have shown that most regional stations, instrumented with short-period or extended short-period instruments, have a higher noise level in all period bands, while stations in the US network have lower noise levels at short periods (0.0625-8.0 seconds), i.e., high frequencies (8.0-0.125 Hz). The overall network is evaluated with respect to accomplishing the design goals set for the USArray/ANSS backbone project, which were intended to increase broadband performance for the national monitoring network.
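
    The PDF-of-PSD computation can be sketched as follows, assuming a continuous trace sampled at fs Hz; the segment length, binning, and the final comparison against the Peterson noise models are simplified assumptions.

    import numpy as np
    from scipy.signal import welch

    def psd_segments(trace, fs, seg_seconds=3600):
        # Hourly PSD estimates in dB via Welch's method.
        n = int(seg_seconds * fs)
        for start in range(0, trace.size - n + 1, n):
            freqs, pxx = welch(trace[start:start + n], fs=fs, nperseg=n // 8)
            yield freqs[1:], 10 * np.log10(pxx[1:])   # drop the DC bin

    def pdf_mode(trace, fs, bins=np.arange(-200.0, -50.0, 1.0)):
        # Histogram of observed power per frequency; earthquakes and
        # transients simply land in low-probability bins, so no screening.
        freqs, hists = None, None
        for f, pdb in psd_segments(trace, fs):
            idx = np.clip(np.digitize(pdb, bins), 0, bins.size - 1)
            if hists is None:
                freqs, hists = f, np.zeros((f.size, bins.size))
            hists[np.arange(f.size), idx] += 1
        return freqs, bins[hists.argmax(axis=1)]   # modal power per frequency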

  15. Water quality assessment in Ecuador

    SciTech Connect

    Chudy, J.P.; Arniella, E.; Gil, E.

    1993-02-01

    The El Tor cholera pandemic arrived in Ecuador in March 1991, and through the course of the year caused 46,320 cases, of which 692 resulted in death. Most of the cases were confined to cities along Ecuador's coast. The Water and Sanitation for Health Project (WASH), which was asked to participate in the review of this request, suggested that a more comprehensive approach should be taken to cholera control and prevention. The approach was accepted, and a multidisciplinary team consisting of a sanitary engineer, a hygiene education specialist, and an institutional specialist was scheduled to carry out the assessment in late 1992 following the national elections.

  16. Automatic assessment of the motor state of the Parkinson's disease patient--a case study

    PubMed Central

    2012-01-01

    This paper presents a novel methodology in which Unified Parkinson's Disease Rating Scale (UPDRS) data processed with a rule-based decision algorithm are used to predict the state of Parkinson's Disease patients. The research was carried out to investigate whether the advancement of Parkinson's Disease can be automatically assessed. For this purpose, past and current UPDRS data from 47 subjects were examined. The results show that, among other classifiers, the rough set-based decision algorithm turned out to be the most suitable for such automatic assessment. Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1563339375633634. PMID:22340508

  17. [Radiological assessment of bone quality].

    PubMed

    Ito, Masako

    2016-01-01

    Structural properties of bone include the micro- or nano-structural properties of trabecular and cortical bone, and macroscopic geometry. Radiological techniques are useful to analyze bone structural properties; micro-CT or synchrotron CT is available to analyze micro- or nano-structural properties of bone samples ex vivo, and multi-detector row CT (MDCT) or high-resolution peripheral QCT (HR-pQCT) is available to analyze human bone in vivo. For the analysis of hip geometry, CT-based hip structure analysis (HSA) is available as well as radiography and DXA-based HSA. These structural parameters are related to biomechanical properties, and these assessment tools provide information on pathological changes or the effects of anti-osteoporotic agents on bone. PMID:26728530

  18. Combined Use of Automatic Tube Voltage Selection and Current Modulation with Iterative Reconstruction for CT Evaluation of Small Hypervascular Hepatocellular Carcinomas: Effect on Lesion Conspicuity and Image Quality

    PubMed Central

    Lv, Peijie; Liu, Jie; Zhang, Rui; Jia, Yan

    2015-01-01

    Objective To assess the lesion conspicuity and image quality in CT evaluation of small (≤ 3 cm) hepatocellular carcinomas (HCCs) using automatic tube voltage selection (ATVS) and automatic tube current modulation (ATCM) with or without iterative reconstruction. Materials and Methods One hundred and five patients with 123 HCC lesions were included. Fifty-seven patients were scanned using both ATVS and ATCM and images were reconstructed using either filtered back-projection (FBP) (group A1) or sinogram-affirmed iterative reconstruction (SAFIRE) (group A2). Forty-eight patients were imaged using only ATCM, with a fixed tube potential of 120 kVp and FBP reconstruction (group B). Quantitative parameters (image noise in Hounsfield unit and contrast-to-noise ratio of the aorta, the liver, and the hepatic tumors) and qualitative visual parameters (image noise, overall image quality, and lesion conspicuity as graded on a 5-point scale) were compared among the groups. Results Group A2 scanned with the automatically chosen 80 kVp and 100 kVp tube voltages ranked the best in lesion conspicuity and subjective and objective image quality (p values ranging from < 0.001 to 0.004) among the three groups, except for overall image quality between group A2 and group B (p = 0.022). Group A1 showed higher image noise (p = 0.005) but similar lesion conspicuity and overall image quality as compared with group B. The radiation dose in group A was 19% lower than that in group B (p = 0.022). Conclusion CT scanning with combined use of ATVS and ATCM and image reconstruction with SAFIRE algorithm provides higher lesion conspicuity and better image quality for evaluating small hepatic HCCs with radiation dose reduction. PMID:25995682
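
    The contrast-to-noise ratio reported as a quantitative parameter follows the usual definition; a minimal sketch, with the ROI arrays and names being our assumptions:

    import numpy as np

    def cnr(tumor_roi, liver_roi, noise_roi):
        # CNR = |mean(tumor) - mean(liver)| / SD of a uniform background ROI.
        return abs(np.mean(tumor_roi) - np.mean(liver_roi)) / np.std(noise_roi)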

  19. Automatic humidification system to support the assessment of food drying processes

    NASA Astrophysics Data System (ADS)

    Ortiz Hernández, B. D.; Carreño Olejua, A. R.; Castellanos Olarte, J. M.

    2016-07-01

    This work shows the main features of an automatic humidification system that provides drying air matching the environmental conditions of different climate zones. This conditioned air is then used to assess the drying process of different agro-industrial products at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University of Bucaramanga, Colombia. The automatic system allows creating and improving control strategies to supply drying air under specified conditions of temperature and humidity. The development of automatic routines to control and acquire real-time data was made possible by the use of robust control systems and suitable instrumentation. The signals are read and directed to a controller memory, where they are scaled and transferred to a memory unit. Using the IP address, it is possible to access the data to perform supervision tasks. One important characteristic of this automatic system is the Dynamic Data Exchange (DDE) server that allows direct communication between the control unit and the computer used to build experimental curves.

  20. Biosignal Analysis to Assess Mental Stress in Automatic Driving of Trucks: Palmar Perspiration and Masseter Electromyography

    PubMed Central

    Zheng, Rencheng; Yamabe, Shigeyuki; Nakano, Kimihiko; Suda, Yoshihiro

    2015-01-01

    Nowadays insight into human-machine interaction is a critical topic with the large-scale development of intelligent vehicles. Biosignal analysis can provide a deeper understanding of driver behaviors that may indicate rationally practical use of the automatic technology. Therefore, this study concentrates on biosignal analysis to quantitatively evaluate mental stress of drivers during automatic driving of trucks, with vehicles set at a closed gap distance apart to reduce air resistance to save energy consumption. By application of two wearable sensor systems, a continuous measurement was realized for palmar perspiration and masseter electromyography, and a biosignal processing method was proposed to assess mental stress levels. In a driving simulator experiment, ten participants completed automatic driving with 4, 8, and 12 m gap distances from the preceding vehicle, and manual driving with about 25 m gap distance as a reference. It was found that mental stress significantly increased when the gap distances decreased, and an abrupt increase in mental stress of drivers was also observed accompanying a sudden change of the gap distance during automatic driving, which corresponded to significantly higher ride discomfort according to subjective reports. PMID:25738768

  2. No-reference quality assessment based on visual perception

    NASA Astrophysics Data System (ADS)

    Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao

    2014-11-01

    The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, which are two main features of the HVS: a typical HVS captures scenes by sparsity coding, and uses experienced knowledge to perceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is accomplished with the model; then, the mapping correlation between sparse codes and subjective quality scores is trained with the regression technique of the least squares support vector machine (LS-SVM), yielding a regressor that can predict image quality; finally, the visual quality metric of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the specific types of distortion present in the database are: 227 images of JPEG2000, 233
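
    A hedged sketch of the training stage: scikit-learn has no LS-SVM, so kernel ridge regression with an RBF kernel, which is mathematically close to LS-SVM regression, is used as a stand-in; the sparse-coding feature extraction is assumed to be done elsewhere.

    from sklearn.kernel_ridge import KernelRidge

    def train_quality_regressor(sparse_codes, mos, gamma=0.1, alpha=1.0):
        # sparse_codes: (n_images, n_dims); mos: subjective quality scores.
        model = KernelRidge(kernel='rbf', gamma=gamma, alpha=alpha)
        model.fit(sparse_codes, mos)
        return model

    # predicted = train_quality_regressor(X_train, y_train).predict(X_test)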

  3. Assessing product image quality for online shopping

    NASA Astrophysics Data System (ADS)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

    Assessing product-image quality is important in the context of online shopping. A high-quality image that conveys more information about a product can boost the buyer's confidence and can get more attention. However, the notion of image quality for product images is not the same as that in other domains. The perception of quality of product images depends not only on various photographic quality features but also on various high-level features such as clarity of the foreground or goodness of the background, etc. In this paper, we define a notion of product-image quality based on various such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair and poor quality based on the guided perceptual notions from the judges. We also conduct experiments with regression using the average crowd-sourced human judgments as the target. We compute a pseudo-regression score with the expected average of predicted classes and also compute a score from the regression technique. We design many experiments with various sampling and voting schemes with crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (greater than or equal to 70%) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with the average votes from the crowd-sourced human judgments.

  4. Phase congruency assesses hyperspectral image quality

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Zhong, Cheng

    2012-10-01

    Blind image quality assessment (QA) is a tough task, especially for hyperspectral imagery, which is degraded by noise, distortion, defocus, and other complex factors. Subjective hyperspectral imagery QA methods basically measure the degradation of an image in terms of human perceptual visual quality. Noise and blur, the most important image quality measurement features, largely determine image quality and are employed here to predict the objective hyperspectral imagery quality of each band. We demonstrate a novel no-reference hyperspectral imagery QA model based on phase congruency (PC), which is a dimensionless quantity and provides an absolute measure of the significance of feature points. First, a log-Gabor wavelet is used to calculate the phase congruency of frequencies of each band image. The relationship between noise and PC can be derived from the above transformation under the assumption that the noise is additive. Second, a PC focus measure evaluation model is proposed to evaluate blur caused by different amounts of defocus. Ratio and mean factors of edge blur level and noise are defined to assess the quality of each band image. This image QA method obtains excellent correlation with subjective image quality scores without any reference. Finally, the PC information is utilized to improve the quality of some band images.

  5. [Internal Quality Control and External Quality Assessment on POCT].

    PubMed

    Kuwa, Katsuhiko

    2015-02-01

    The quality management (QM) of POCT comprises internal quality control (IQC) and external quality assessment (EQA). QM requirements for POCT are given in ISO 22870 (Point-of-care testing (POCT) - Requirements for quality and competence) and ISO 15189 (Medical laboratories - Requirements for quality and competence), and QM is performed under the guidance of the QM committee. The role of the POC coordinator and/or the medical technologist of the clinical laboratory is important. Regarding the measurement performance of POCT devices, it is necessary to confirm measurement performance data from the manufacturer beyond those in the package insert. In the IQC program, the checking and control of measurement performance are the targets. For measurements of QC samples provided by the manufacturer, it is essential to check the function of the devices. In addition, regarding the EQA program, between two neighboring facilities it is possible to confirm the current status of measurement and to assess commutability using whole blood along with residual blood samples from daily examinations in the clinical laboratory. PMID:26529974

  6. SU-D-BRF-03: Improvement of TomoTherapy Megavoltage Topogram Image Quality for Automatic Registration During Patient Localization

    SciTech Connect

    Scholey, J; White, B; Qi, S; Low, D

    2014-06-01

    Purpose: To improve the quality of mega-voltage orthogonal scout images (MV topograms) for a fast and low-dose alternative technique for patient localization on the TomoTherapy HiART system. Methods: Digitally reconstructed radiographs (DRR) of anthropomorphic head and pelvis phantoms were synthesized from kVCT under TomoTherapy geometry (kV-DRR). Lateral (LAT) and anterior-posterior (AP) aligned topograms were acquired with couch speeds of 1cm/s, 2cm/s, and 3cm/s. The phantoms were rigidly translated in all spatial directions with known offsets in increments of 5mm, 10mm, and 15mm to simulate daily positioning errors. The contrast of the MV topograms was automatically adjusted based on the image intensity characteristics. A low-pass fast Fourier transform filter removed high-frequency noise and a Wiener filter reduced stochastic noise caused by scattered radiation to the detector array. An intensity-based image registration algorithm was used to register the MV topograms to a corresponding kV-DRR by minimizing the mean square error between corresponding pixel intensities. The registration accuracy was assessed by comparing the normalized cross correlation coefficients (NCC) between the registered topograms and the kV-DRR. The applied phantom offsets were determined by registering the MV topograms with the kV-DRR and recovering the spatial translation of the MV topograms. Results: The automatic registration technique provided millimeter accuracy and was robust for the MV topograms at all three tested couch speeds. The lowest average NCC for all AP and LAT MV topograms was 0.96 for the head phantom and 0.93 for the pelvis phantom. The offsets were recovered to within 1.6mm and 6.5mm for the processed and the original MV topograms respectively. Conclusion: Automatic registration of the processed MV topograms to a corresponding kV-DRR recovered simulated daily positioning errors that were accurate to the order of a millimeter. These results suggest the clinical
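
    A toy version of the processing and registration steps, under our own assumptions: a Wiener filter suppresses stochastic noise, an exhaustive integer-pixel translation search minimises the mean square error against the kV-DRR, and NCC is reported as the accuracy check.

    import numpy as np
    from scipy.signal import wiener

    def ncc(a, b):
        # Normalized cross correlation of two same-sized images.
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return (a * b).mean()

    def register_translation(topogram, drr, search=20):
        # Find the integer (dy, dx) shift minimising the MSE to the kV-DRR.
        topo = wiener(topogram, mysize=5)      # suppress stochastic noise
        best, best_mse = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                shifted = np.roll(np.roll(topo, dy, axis=0), dx, axis=1)
                mse = np.mean((shifted - drr) ** 2)
                if mse < best_mse:
                    best, best_mse = (dy, dx), mse
        aligned = np.roll(np.roll(topo, best[0], axis=0), best[1], axis=1)
        return best, ncc(aligned, drr)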

  7. End-to-end image quality assessment

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    2012-05-01

    An innovative computerized benchmarking approach (US patent pending, Sep 2011), based on extensive application of photometry, geometrical optics, and digital media, uses a randomized target to let a standard observer assess the image quality of video imaging systems at different daytime and low-light luminance levels. It takes into account the target's contrast and color characteristics, as well as the observer's visual acuity and dynamic response. This includes human vision as part of the "extended video imaging system" (EVIS), and allows image quality assessment by several standard observers simultaneously.

  8. SNPflow: A Lightweight Application for the Processing, Storing and Automatic Quality Checking of Genotyping Assays

    PubMed Central

    Schönherr, Sebastian; Neuner, Mathias; Forer, Lukas; Specht, Günther; Kloss-Brandstätter, Anita; Kronenberg, Florian; Coassin, Stefan

    2013-01-01

    Single nucleotide polymorphisms (SNPs) play a prominent role in modern genetics. Current genotyping technologies such as Sequenom iPLEX, ABI TaqMan and KBioscience KASPar made the genotyping of huge SNP sets in large populations straightforward and allow the generation of hundreds of thousands of genotypes even in medium sized labs. While data generation is straightforward, the subsequent data conversion, storage and quality control steps are time-consuming, error-prone and require extensive bioinformatic support. In order to ease this tedious process, we developed SNPflow. SNPflow is a lightweight, intuitive and easily deployable application, which processes genotype data from Sequenom MassARRAY (iPLEX) and ABI 7900HT (TaqMan, KASPar) systems and is extendible to other genotyping methods as well. SNPflow automatically converts the raw output files to ready-to-use genotype lists, calculates all standard quality control values such as call rate, expected and real amount of replicates, minor allele frequency, absolute number of discordant replicates, discordance rate and the p-value of the HWE test, checks the plausibility of the observed genotype frequencies by comparing them to HapMap/1000-Genomes, provides a module for the processing of SNPs, which allow sex determination for DNA quality control purposes and, finally, stores all data in a relational database. SNPflow runs on all common operating systems and comes as both stand-alone version and multi-user version for laboratory-wide use. The software, a user manual, screenshots and a screencast illustrating the main features are available at http://genepi-snpflow.i-med.ac.at. PMID:23527209
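
    The standard quality control values mentioned (call rate, minor allele frequency, HWE test p-value) can be computed from the genotype counts of one biallelic SNP as below; this is an illustrative reconstruction, not SNPflow code.

    from scipy.stats import chi2

    def snp_qc(n_aa, n_ab, n_bb, n_missing):
        # One-degree-of-freedom chi-square test for Hardy-Weinberg equilibrium.
        n = n_aa + n_ab + n_bb
        call_rate = n / (n + n_missing)
        p = (2 * n_aa + n_ab) / (2 * n)        # frequency of allele A
        maf = min(p, 1 - p)
        expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) * (1 - p)]
        chi = sum((o - e) ** 2 / e
                  for o, e in zip([n_aa, n_ab, n_bb], expected))
        return {'call_rate': call_rate, 'maf': maf, 'hwe_p': chi2.sf(chi, df=1)}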

  9. Automatic alignment of pre- and post-interventional liver CT images for assessment of radiofrequency ablation

    NASA Astrophysics Data System (ADS)

    Rieder, Christian; Wirtz, Stefan; Strehlow, Jan; Zidowitz, Stephan; Bruners, Philipp; Isfort, Peter; Mahnken, Andreas H.; Peitgen, Heinz-Otto

    2012-02-01

    Image-guided radiofrequency ablation (RFA) is becoming a standard procedure for minimally invasive tumor treatment in clinical practice. To verify the treatment success of the therapy, reliable post-interventional assessment of the ablation zone (coagulation) is essential. Typically, pre- and post-interventional CT images have to be aligned to compare the shape, size, and position of tumor and coagulation zone. In this work, we present an automatic workflow for masking liver tissue, enabling a rigid registration algorithm to perform at least as accurately as experienced medical experts. To minimize the effect of global liver deformations, the registration is computed in a local region of interest around the pre-interventional lesion and the post-interventional coagulation necrosis. A registration mask excluding lesions and neighboring organs is calculated to prevent the registration algorithm from matching both lesion shapes instead of the surrounding liver anatomy. As an initial registration step, the centers of gravity of both lesions are aligned automatically. The subsequent rigid registration method is based on the Local Cross Correlation (LCC) similarity measure and Newton-type optimization. To assess the accuracy of our method, 41 RFA cases are registered and compared with the manually aligned cases from four medical experts. Furthermore, the registration results are compared with ground truth transformations based on averaged anatomical landmark pairs. In the evaluation, we show that our method aligns the data sets with accuracy equal to that of the medical experts, while requiring significantly less time and exhibiting less variability.

  10. An algorithm used for quality criterion automatic measurement of band-pass filters and its device implementation

    NASA Astrophysics Data System (ADS)

    Liu, Qianshun; Liu, Yan; Yu, Feihong

    2013-08-01

    As a kind of thin-film device, the band-pass filter is widely used in pattern recognition, infrared detection, optical fiber communication, etc. In this paper, an algorithm for automatic measurement of band-pass filter quality criteria is proposed, based on a proven theoretical formula for the spectral transmittance of the filter. Firstly, a wavelet transform is used to reduce noise in the spectrum data. Secondly, combining Gaussian curve fitting with the least squares method, the algorithm fits the spectrum curve and searches for the peak. Finally, the parameters for judging band-pass filter quality are computed. Based on the algorithm, a pipelined automatic measurement system for band-pass filters has been designed that can scan a filter array automatically and display the spectral transmittance of each filter. At the same time, the system compares the measurement results with user-defined standards to determine whether each filter is qualified. Qualified products are marked with green, and unqualified products are marked with red. Experiments verify that the automatic measurement system achieves comprehensive, accurate and rapid measurement of band-pass filter quality, with the expected results.
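
    A minimal sketch of this measurement chain, assuming a measured transmittance curve is available: a Gaussian is fitted to locate the peak, and centre wavelength, peak transmittance and FWHM are checked against user-defined tolerances; the wavelet denoising step is omitted and all tolerance values are assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, mu, sigma):
        return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def filter_qc(wavelength, transmittance, spec_centre,
                  centre_tol=2.0, min_peak=0.8):
        # Fit a Gaussian to the pass band and derive the quality criteria.
        p0 = [transmittance.max(), wavelength[transmittance.argmax()], 10.0]
        (amp, mu, sigma), _ = curve_fit(gaussian, wavelength,
                                        transmittance, p0=p0)
        fwhm = 2.355 * abs(sigma)              # 2*sqrt(2*ln 2) * sigma
        qualified = abs(mu - spec_centre) <= centre_tol and amp >= min_peak
        return {'centre': mu, 'peak': amp, 'fwhm': fwhm, 'qualified': qualified}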

  11. Accuracy assessment of building point clouds automatically generated from iphone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud onto a terrestrial laser scanning (TLS) point cloud of the same object to discuss accuracy, advantages and limitations of the iPhone generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. Mean (μ) and standard deviation (σ) of the roughness histograms are calculated as (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, and quick and real-time change detection purposes. However, further insights should be obtained first on the circumstances that are needed to guarantee successful point cloud generation from smartphone images.
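
    The reported statistics can be reproduced with a nearest-neighbour query, sketched below; the 3-sigma outlier rule is our assumption, since the article does not state its cut-off.

    import numpy as np
    from scipy.spatial import cKDTree

    def cloud_accuracy(iphone_pts, tls_pts):
        # Both inputs: (n, 3) arrays of already co-registered points.
        d, _ = cKDTree(tls_pts).query(iphone_pts)  # nearest TLS point distances
        cutoff = d.mean() + 3 * d.std()
        return {'outlier_pct': 100.0 * (d > cutoff).mean(),
                'mean_distance': d[d <= cutoff].mean()}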

  12. Objective assessment of MPEG-2 video quality

    NASA Astrophysics Data System (ADS)

    Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano

    2002-07-01

    The increasing use of video compression standards in broadcasting television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, as avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations of actual scoring curves for real test videos.
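
    As a hedged stand-in for the neural mapping (circular back-propagation networks are not available in common libraries), an ordinary feed-forward MLP regressor illustrates the idea of mapping objective bitstream features to perceived quality:

    from sklearn.neural_network import MLPRegressor

    def train_quality_model(stream_features, subjective_scores):
        # stream_features: per-instant objective features extracted from the
        # compressed stream; subjective_scores: corresponding panel scores.
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0)
        return net.fit(stream_features, subjective_scores)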

  13. 42 CFR 460.140 - Additional quality assessment activities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    42 CFR 460.140 - Additional quality assessment activities (Title 42, Public Health; Programs of All-Inclusive Care for the Elderly (PACE), Quality Assessment and Performance Improvement): A PACE organization must meet external quality assessment and reporting...

  14. Water quality issues and energy assessments

    SciTech Connect

    Davis, M.J.; Chiu, S.

    1980-11-01

    This report identifies and evaluates the significant water quality issues related to regional and national energy development. In addition, it recommends improvements in the Office assessment capability. Handbook-style formatting, which includes a system of cross-references and prioritization, is designed to help the reader use the material.

  15. An assessment model for quality management

    NASA Astrophysics Data System (ADS)

    Völcker, Chr.; Cass, A.; Dorling, A.; Zilioli, P.; Secchi, P.

    2002-07-01

    SYNSPACE, together with InterSPICE and Alenia Spazio, is developing an assessment method to determine the capability of an organisation in the area of quality management. The method, sponsored by the European Space Agency (ESA), is called S9kS (SPiCE-9000 for SPACE). S9kS is based on ISO 9001:2000 with additions from the quality standards issued by the European Committee for Space Standardization (ECSS) and ISO 15504 - Process Assessment. The result is a reference model that supports the expansion of the generic process assessment framework provided by ISO 15504 to non-software areas. In order to be compliant with ISO 15504, requirements from ISO 9001, ECSS-Q-20 and Q-20-09 have been turned into process definitions in terms of Purpose and Outcomes, supported by a list of detailed indicators such as Practices, Work Products and Work Product Characteristics. In coordination with this project, the capability dimension of ISO 15504 has been revised to be consistent with ISO 9001. As the contributions from ISO 9001 and the space quality assurance standards are separable, the stripped-down version S9k offers organisations in all industries an assessment model based solely on ISO 9001, and is therefore interesting to all organisations which intend to improve their quality management system based on ISO 9001.

  16. Recognition and Assessment of Teaching Quality.

    ERIC Educational Resources Information Center

    Fairbrother, Patricia

    1996-01-01

    Identifies models for consideration of teacher quality and competence in nursing education. Presents a range of evaluation criteria in these categories: preparation, delivery, innovation, communication, self-assessment, instructional management, peer recognition, professional memberships and service, publications, and grants and contracts secured.…

  17. Retinal image quality assessment using generic features

    NASA Astrophysics Data System (ADS)

    Fasih, Mahnaz; Langlois, J. M. Pierre; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Retinal image quality assessment is an important step in automated eye disease diagnosis. Diagnosis accuracy is highly dependent on the quality of retinal images, because poor image quality might prevent the observation of significant eye features and disease manifestations. A robust algorithm is therefore required in order to evaluate the quality of images in a large database. We developed an algorithm for retinal image quality assessment based on generic features that is independent from segmentation methods. It exploits the local sharpness and texture features by applying the cumulative probability of blur detection metric and run-length encoding algorithm, respectively. The quality features are combined to evaluate the image's suitability for diagnosis purposes. Based on the recommendations of medical experts and our experience, we compared a global and a local approach. A support vector machine with radial basis functions was used as a nonlinear classifier in order to classify images to gradable and ungradable groups. We applied our methodology to 65 images of size 2592×1944 pixels that had been graded by a medical expert. The expert evaluated 38 images as gradable and 27 as ungradable. The results indicate very good agreement between the proposed algorithm's predictions and the medical expert's judgment: the sensitivity and specificity for the local approach are respectively 92% and 94%. The algorithm demonstrates sufficient robustness to identify relevant images for automated diagnosis.

  19. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures.

    PubMed

    Oszust, Mariusz

    2016-01-01

    Information carried by an image can be distorted by the different image processing steps introduced by different electronic means of storage and communication. Therefore, the development of algorithms which can automatically assess the quality of an image in a way that is consistent with human evaluation is important. In this paper, an approach to image quality assessment (IQA) is proposed in which the quality of a given image is evaluated jointly by several IQA approaches. At first, in order to obtain such joint models, an optimisation problem of IQA measure aggregation is defined, where a weighted sum of their outputs, i.e., objective scores, is used as the aggregation operator. Then, the weight of each measure is considered as a decision variable in a problem of minimisation of the root mean square error between the obtained objective scores and subjective scores. Subjective scores reflect ground truth and involve evaluation of images by human observers. The optimisation problem is solved using a genetic algorithm, which also selects the suitable measures used in aggregation. The obtained multimeasures are evaluated on the four largest widely used image benchmarks and compared against state-of-the-art full-reference IQA approaches. The results of the comparison reveal that the proposed approach outperforms the other competing measures. PMID:27341493
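
    A sketch of the aggregation idea, with SciPy's differential evolution standing in for the genetic algorithm; weights driven to (near) zero amount to de-selecting a measure.

    import numpy as np
    from scipy.optimize import differential_evolution

    def fit_multimeasure(objective_scores, subjective_scores):
        # objective_scores: (n_images, n_measures); returns the weight vector
        # minimising the RMSE between the weighted sum and subjective scores.
        def rmse(w):
            pred = objective_scores @ w
            return np.sqrt(np.mean((pred - subjective_scores) ** 2))
        bounds = [(0.0, 1.0)] * objective_scores.shape[1]
        return differential_evolution(rmse, bounds, seed=0).x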

  20. [Making best use of external quality assessment].

    PubMed

    Fried, Roman

    2015-02-01

    To receive maximum benefit from external quality assessment, the laboratory has to fulfill certain requirements. There have to be standard operating procedures and checklists for correct sample processing and analysis. It is equally important that the staff have a basic understanding of how these quality tools work and that detected errors are used as a chance to improve processes within the laboratory. The benefit of surveys for external quality assessment is not limited to the analytical phase, but extends to some aspects of the pre- and post-analytical processes. Due to the many participants, the survey providers are able to collect a lot of practical knowledge. All participants can learn from this by reading the survey reports and commentaries. With this, and with special educational surveys, the survey providers are able to offer laboratories an opportunity for continuing education. PMID:25630289

  1. Quality assessment: A performance-based approach to assessments

    SciTech Connect

    Caplinger, W.H.; Greenlee, W.D.

    1993-08-01

    Revision C to US Department of Energy (DOE) Order 5700.6 (6C) "Quality Assurance" (QA) brings significant changes to the conduct of QA. The Westinghouse government-owned, contractor-operated (GOCO) sites have updated their quality assurance programs to the requirements and guidance of 6C, and are currently implementing necessary changes. In late 1992, a Westinghouse GOCO team led by the Waste Isolation Division (WID) conducted what is believed to be the first assessment of the implementation of a quality assurance program founded on 6C.

  2. Automatic and Objective Assessment of Alternating Tapping Performance in Parkinson's Disease

    PubMed Central

    Memedi, Mevludin; Khan, Taha; Grenholm, Peter; Nyholm, Dag; Westin, Jerker

    2013-01-01

    This paper presents the development and evaluation of a method for enabling quantitative and automatic scoring of the alternating tapping performance of patients with Parkinson's disease (PD). Ten healthy elderly subjects and 95 patients in different clinical stages of PD used a touch-pad handheld computer to perform alternate tapping tests in their home environments. First, a neurologist used a web-based system to visually assess impairments in four tapping dimensions (‘speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’) and a global tapping severity (GTS). Second, tapping signals were processed with time series analysis and statistical methods to derive 24 quantitative parameters. Third, principal component analysis was used to reduce the dimensionality of these parameters and to obtain scores for the four dimensions. Finally, a logistic regression classifier was trained using 10-fold stratified cross-validation to map the reduced parameters to the corresponding visually assessed GTS scores. Results showed that the computed scores correlated well with the visually assessed scores and were significantly different across Unified Parkinson's Disease Rating Scale scores of upper limb motor performance. In addition, they had good internal consistency, discriminated well between healthy elderly subjects and patients in different disease stages, were sensitive to treatment interventions, and could reflect the natural disease progression over time. In conclusion, the automatic method can be useful for objectively assessing the tapping performance of PD patients and can be included in telemedicine tools for remote monitoring of tapping. PMID:24351667
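
    A minimal sketch of the scoring pipeline described above (dimensionality reduction followed by a classifier under 10-fold stratified cross-validation), with synthetic stand-ins for the 24 tapping parameters and the GTS labels; the signal processing that derives the parameters is not reproduced.

```python
# PCA + logistic regression under 10-fold stratified cross-validation,
# mirroring the mapping from tapping parameters to GTS classes. All data
# below are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.standard_normal((105, 24))          # 24 quantitative parameters per test
y = rng.integers(0, 4, size=105)            # hypothetical GTS levels 0..3

model = make_pipeline(PCA(n_components=4), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
acc = cross_val_score(model, X, y, cv=cv).mean()
print(f"mean CV accuracy: {acc:.2f}")
```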

  3. Assessing uncertainty in stormwater quality modelling.

    PubMed

    Wijesiri, Buddhi; Egodawatta, Prasanna; McGree, James; Goonetilleke, Ashantha

    2016-10-15

    Designing effective stormwater pollution mitigation strategies is a challenge in urban stormwater management. This is primarily due to the limited reliability of catchment-scale stormwater quality modelling tools. As such, assessing the uncertainty associated with the information generated by stormwater quality models is important for informed decision making. Quantitative assessment of build-up and wash-off process uncertainty, which arises from the variability associated with these processes, is a major concern, as typical uncertainty assessment approaches do not adequately account for process uncertainty. The research study undertaken found that the variability of build-up and wash-off processes for different particle size ranges leads to process uncertainty. After the variability and resulting process uncertainties are accurately characterised, they can be incorporated into catchment stormwater quality predictions. Accounting for process uncertainty influences the uncertainty limits associated with predicted stormwater quality. The impact of build-up process uncertainty on stormwater quality predictions is greater than that of wash-off process uncertainty. Accordingly, decision making should facilitate the design of mitigation strategies that specifically address variations in the load and composition of pollutants accumulated during dry weather periods. Moreover, the study found that the influence of process uncertainty differs among stormwater quality predictions corresponding to storm events with different intensity, duration and generated runoff volume. These storm events were also found to be significantly different in terms of the Runoff-Catchment Area ratio. As such, the selection of storm events in the context of designing stormwater pollution mitigation strategies needs to take into consideration not only the storm event characteristics, but also the influence of process uncertainty on stormwater quality predictions. PMID:27423532

  4. An open source automatic quality assurance (OSAQA) tool for the ACR MRI phantom.

    PubMed

    Sun, Jidi; Barnes, Michael; Dowling, Jason; Menk, Fred; Stanwell, Peter; Greer, Peter B

    2015-03-01

    Routine quality assurance (QA) is necessary and essential to ensure MR scanner performance. This includes geometric distortion, slice positioning and thickness accuracy, high-contrast spatial resolution, intensity uniformity, ghosting artefact and low-contrast object detectability. However, this manual process can be very time consuming. This paper describes the development and validation of an open source tool to automate the MR QA process, which aims to increase physicist efficiency and improve the consistency of QA results by reducing human error. The OSAQA software was developed in Matlab and the source code is available for download from http://jidisun.wix.com/osaqa-project/. During program execution QA results are logged for immediate review and are also exported to a spreadsheet for long-term machine performance reporting. For the automatic contrast QA test, a user-specific contrast evaluation was designed to improve accuracy for individuals on different display monitors. American College of Radiology QA images were acquired over a period of 2 months to compare manual QA with the results from the proposed OSAQA software. OSAQA was found to reduce the QA time significantly, from approximately 45 to 2 min. The manual and OSAQA results were found to agree with regard to the recommended criteria, and the differences were insignificant compared to the criteria. The intensity homogeneity filter is necessary to obtain an image of acceptable quality while keeping the high-contrast spatial resolution within the recommended criterion. The OSAQA tool has been validated on scanners with different field strengths and from different manufacturers. A number of suggestions have been made to improve both the phantom design and the QA protocol in the future. PMID:25412885
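
    The abstract does not list OSAQA's formulas, so the sketch below only illustrates two checks of the kind it automates: signal-to-noise ratio and integral uniformity computed from phantom regions of interest. The ROI positions and the uniformity definition are common MR QA conventions, assumed here rather than taken from the paper.

```python
# SNR and integral uniformity from regions of interest in a (synthetic)
# phantom image. ROI placement is an illustrative assumption.
import numpy as np

image = np.random.default_rng(2).normal(1000.0, 20.0, size=(256, 256))

signal_roi = image[96:160, 96:160]             # central ROI
noise_roi = image[8:24, 8:24]                  # background ROI (assumed position)

snr = signal_roi.mean() / noise_roi.std()
# Integral uniformity, as used in many MR QA protocols:
s_max, s_min = signal_roi.max(), signal_roi.min()
uniformity = 100.0 * (1.0 - (s_max - s_min) / (s_max + s_min))
print(f"SNR: {snr:.1f}  integral uniformity: {uniformity:.1f}%")
```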

  5. Automated Assessment of the Quality of Depression Websites

    PubMed Central

    Tang, Thanh Tin; Hawking, David; Christensen, Helen

    2005-01-01

    Background Since health information on the World Wide Web is of variable quality, methods are needed to assist consumers in identifying health websites that contain evidence-based information. Manual assessment tools may assist consumers in evaluating the quality of sites. However, these tools are poorly validated and often impractical. There is a need to develop better consumer tools, and in particular to explore the potential of automated procedures for evaluating the quality of health information on the web. Objective This study (1) describes the development of an automated quality assessment procedure (AQA) designed to automatically rank depression websites according to their evidence-based quality; (2) evaluates the validity of the AQA relative to human-rated evidence-based quality scores; and (3) compares the validity of Google PageRank and the AQA as indicators of evidence-based quality. Method The AQA was developed using a quality feedback technique and a set of training websites previously rated manually according to their concordance with statements in the Oxford University Centre for Evidence-Based Mental Health's guidelines for treating depression. The validation phase involved 30 websites compiled from the DMOZ, Yahoo! and LookSmart Depression Directories by randomly selecting six sites from each of the Google PageRank bands of 0, 1-2, 3-4, 5-6 and 7-8. Evidence-based ratings from two independent raters (based on concordance with the Oxford guidelines) were then compared with scores derived from the automated AQA and Google algorithms. There was no overlap between the websites used in the training and validation phases of the study. Results The correlation between the AQA score and the evidence-based ratings was high and significant (r=0.85, P<.001). Addition of a quadratic component improved the fit, with the combined linear and quadratic model explaining 82 percent of the variance. The correlation between Google PageRank and the evidence-based score was lower than

  6. Bone age assessment in young children using automatic carpal bone feature extraction and support vector regression.

    PubMed

    Somkantha, Krit; Theera-Umpon, Nipon; Auephanwiriyakul, Sansanee

    2011-12-01

    Boundary extraction of carpal bone images is a critical operation in an automatic bone age assessment system, since the contrast between the bony structure and the soft tissue is very poor. In this paper, we present an edge following technique for boundary extraction in carpal bone images and apply it to assess bone age in young children. Our proposed technique can detect the boundaries of carpal bones in X-ray images by using information from the vector image model and the edge map. Feature analysis of the carpal bones can reveal important information for bone age assessment. Five features for bone age assessment are calculated from the boundary extraction result for each carpal bone. All features are used as input to support vector regression (SVR), which assesses the bone age. We compare the SVR with neural network regression (NNR). We use 180 carpal bone images from a digital hand atlas to assess the bone age of young children from 0 to 6 years old. Leave-one-out cross-validation is used to test the efficiency of the techniques. The opinions of the skilled radiologists provided in the atlas are used as the ground truth for bone age assessment. The SVR is able to provide more accurate bone age assessment results than the NNR. The experimental results from the SVR are very close to the bone age assessments made by skilled radiologists. PMID:21347746
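
    A minimal sketch of the regression and validation steps described above: support vector regression on five features per image, evaluated with leave-one-out cross-validation. The boundary extraction and feature computation are not reproduced; the data are synthetic.

```python
# SVR mapping five (synthetic) carpal-bone features to a bone age in the
# 0-6 year range, evaluated with leave-one-out cross-validation.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.random((180, 5))                                      # five features per image
age = 6.0 * X.mean(axis=1) + 0.3 * rng.standard_normal(180)  # hypothetical ages

pred = cross_val_predict(SVR(kernel="rbf", C=10.0), X, age, cv=LeaveOneOut())
mae = np.mean(np.abs(pred - age))
print(f"leave-one-out MAE: {mae:.2f} years")
```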

  7. Using statistical analysis and artificial intelligence tools for automatic assessment of video sequences

    NASA Astrophysics Data System (ADS)

    Ekobo Akoa, Brice; Simeu, Emmanuel; Lebowsky, Fritz

    2014-01-01

    This paper proposes two novel approaches to Video Quality Assessment (VQA). Both approaches attempt to develop video evaluation techniques capable of replacing human judgment when rating video quality in subjective experiments. The underlying study consists of selecting fundamental quality metrics based on Human Visual System (HVS) models and using artificial intelligence solutions as well as advanced statistical analysis. This new combination enables suitable video quality ratings while taking multiple quality metrics as input. The first method uses a neural-network-based machine learning process. The second method evaluates video quality using a non-linear regression model. The efficiency of the proposed methods is demonstrated by comparing their results with those of existing work on synthetic video artifacts. The results obtained by each method are compared with scores from a database built from subjective experiments.

  8. Fully automatic measurements of axial vertebral rotation for assessment of spinal deformity in idiopathic scoliosis

    NASA Astrophysics Data System (ADS)

    Forsberg, Daniel; Lundström, Claes; Andersson, Mats; Vavruch, Ludvig; Tropp, Hans; Knutsson, Hans

    2013-03-01

    Reliable measurements of spinal deformities in idiopathic scoliosis are vital, since they are used for assessing the degree of scoliosis, deciding upon treatment and monitoring the progression of the disease. However, commonly used two-dimensional methods (e.g. the Cobb angle) do not fully capture the three-dimensional deformity at hand in scoliosis, of which axial vertebral rotation (AVR) is considered to be of great importance. There are manual methods for measuring the AVR, but they are often time-consuming and associated with high intra- and inter-observer variability. In this paper, we present a fully automatic method for estimating the AVR in images from computed tomography. The proposed method is evaluated on four scoliotic patients with 17 vertebrae each and compared with manual measurements performed by three observers using the standard method of Aaro-Dahlborn. The comparison shows that the difference in measured AVR between automatic and manual measurements is on the same level as the inter-observer difference. This is further supported by a high intraclass correlation coefficient (0.971-0.979), obtained when comparing the automatic measurements with the manual measurements of each observer. Hence, the provided results and the computational performance, requiring only approximately 10 to 15 s for processing an entire volume, demonstrate the potential clinical value of the proposed method.

  9. Image quality assessment using multi-method fusion.

    PubMed

    Liu, Tsung-Jung; Lin, Weisi; Kuo, C-C Jay

    2013-05-01

    A new methodology for objective image quality assessment (IQA) with multi-method fusion (MMF) is presented in this paper. The research is motivated by the observation that no single method gives the best performance in all situations. To achieve MMF, we adopt a regression approach. The new MMF score is set to be the nonlinear combination of scores from multiple methods, with suitable weights obtained by a training process. In order to further improve the regression results, we divide distorted images into three to five groups based on distortion type and perform regression within each group, which is called "context-dependent MMF" (CD-MMF). One task in CD-MMF is to determine the context automatically, which is achieved by a machine learning approach. To further reduce the complexity of MMF, we apply algorithms to select a small subset from the candidate method set. The result is very good even if only three quality assessment methods are included in the fusion process. The proposed MMF method using support vector regression is shown to outperform a large number of existing IQA methods by a significant margin when tested on six representative databases. PMID:23288335

  10. Water Quality Assessment using Satellite Remote Sensing

    NASA Astrophysics Data System (ADS)

    Haque, Saad Ul

    2016-07-01

    The two main global issues related to water are its declining quality and quantity. Population growth, industrialization, expansion of agricultural land and urbanization are the main causes of the increasing demand placed on inland water bodies. The quality of surface water has also been degraded in many countries over the past few decades due to inputs of nutrients and sediments, especially in lakes and reservoirs. Since water is essential not only for meeting human needs but also for maintaining natural ecosystem health and integrity, there are efforts worldwide to assess and restore the quality of surface waters. Remote sensing techniques provide a tool for continuous water quality information in order to identify and minimize sources of pollutants that are harmful to human and aquatic life. The proposed methodology is focused on assessing the quality of water at selected lakes in Pakistan (Sindh); namely, HUBDAM, KEENJHAR LAKE, HALEEJI and HADEERO. These lakes are drinking water sources for several major cities of Pakistan, including Karachi. Satellite imagery from Landsat 7 (ETM+) is used to identify the variation in water quality of these lakes in terms of their optical properties. All bands of the Landsat 7 (ETM+) image are analyzed to select only those that may be correlated with some water quality parameters (e.g. suspended solids, chlorophyll a). The Optimum Index Factor (OIF) developed by Chavez et al. (1982) is used to select the optimum combination of bands. The OIF is calculated by dividing the sum of the standard deviations of any three bands by the sum of their respective correlation coefficients (absolute values). It is assumed that a band with a higher standard deviation contains more 'information' than the other bands. Therefore, OIF values are ranked and the three bands with the highest OIF are selected for visual interpretation. A color composite image is created using these three bands. The water quality
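
    The OIF formula quoted above is concrete enough for a worked example. The sketch below scores every three-band combination of a synthetic six-band image and keeps the combination with the highest OIF; the band data are random stand-ins for ETM+ channels.

```python
# Optimum Index Factor: OIF = (sum of the three bands' standard deviations)
# / (sum of the absolute pairwise correlation coefficients). The triplet
# with the highest OIF is kept for visual interpretation.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(4)
bands = rng.random((6, 10_000))                # 6 bands, pixels flattened

def oif(i, j, k):
    std_sum = bands[[i, j, k]].std(axis=1).sum()
    corr = np.corrcoef(bands[[i, j, k]])
    corr_sum = abs(corr[0, 1]) + abs(corr[0, 2]) + abs(corr[1, 2])
    return std_sum / corr_sum

best = max(combinations(range(6), 3), key=lambda c: oif(*c))
print("best band triplet (0-indexed):", best, "OIF:", round(oif(*best), 3))
```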

  11. Quantitative assessment of computed radiography quality control parameters.

    PubMed

    Rampado, O; Isoardi, P; Ropolo, R

    2006-03-21

    Quality controls for testing the performance of computed radiography (CR) systems have been recommended by manufacturers and medical physicists' organizations. The purpose of this work was to develop a set of image processing tools for the quantitative assessment of computed radiography quality control parameters. Automatic image analysis consisted of detecting phantom details, defining regions of interest and acquiring measurements. The tested performance characteristics included dark noise, uniformity, exposure calibration, linearity, low-contrast and spatial resolution, spatial accuracy, laser beam function and erasure thoroughness. CR devices from two major manufacturers were evaluated. We investigated several approaches to quantify the detector response uniformity. We developed methods to characterize the spatial accuracy and resolution properties across the entire image area, based on Fourier analysis of the image of a fine wire mesh. The implemented methods were sensitive to local blurring and allowed us to detect a local distortion of 4% or greater in any part of an imaging plate. The results showed that the developed image processing tools allow a quality control program for CR to be implemented with short processing time and without subjectivity in the evaluation of the parameters. PMID:16510964

  12. Automated data quality assessment of marine sensors.

    PubMed

    Timms, Greg P; de Souza, Paulo A; Reznik, Leon; Smith, Daniel V

    2011-01-01

    The automated collection of data (e.g., through sensor networks) has led to a massive increase in the quantity of environmental and other data available. The sheer quantity of data and the growing need for real-time ingestion of sensor data (e.g., alerts and forecasts from physical models) mean that automated Quality Assurance/Quality Control (QA/QC) is necessary to ensure that the data collected are fit for purpose. Current automated QA/QC approaches provide assessments based upon hard classifications of the gathered data, often as a binary decision of good or bad data, which fails to quantify our confidence in the data for use in different applications. We propose a novel framework for automated data quality assessment that uses Fuzzy Logic to provide a continuous scale of data quality. This continuous quality scale is then used to compute error bars on the data, which quantify the data uncertainty and provide a more meaningful measure of the data's fitness for purpose in a particular application than hard quality classifications. The design principles of the framework are presented and enable both data statistics and expert knowledge to be incorporated into the uncertainty assessment. We have implemented and tested the framework on a real-time platform of temperature and conductivity sensors deployed to monitor the Derwent Estuary in Hobart, Australia. Results indicate that the error bars generated by the Fuzzy QA/QC implementation are in good agreement with the error bars manually encoded by a domain expert. PMID:22163714
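
    A minimal sketch of the central idea, assuming an illustrative trapezoidal membership function and thresholds (the published framework's actual membership design is not reproduced): a deviation statistic is mapped to a continuous quality score in [0, 1], which then widens or narrows an error bar.

```python
# Continuous (fuzzy) quality score instead of a binary good/bad flag,
# used to scale an error bar on each reading. All thresholds are
# illustrative assumptions.
import numpy as np

def fuzzy_quality(deviation, good_limit=1.0, bad_limit=3.0):
    """1.0 inside good_limit, 0.0 beyond bad_limit, linear in between."""
    d = np.abs(deviation)
    return np.clip((bad_limit - d) / (bad_limit - good_limit), 0.0, 1.0)

readings = np.array([18.2, 18.9, 21.5, 25.0])      # e.g. water temperature, deg C
expected, sigma = 19.0, 1.0
quality = fuzzy_quality((readings - expected) / sigma)
error_bar = 0.2 + (1.0 - quality) * 2.0            # widen uncertainty as quality drops
for r, q, e in zip(readings, quality, error_bar):
    print(f"{r:5.1f} degC  quality={q:.2f}  +/-{e:.2f}")
```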

  13. Automated Data Quality Assessment of Marine Sensors

    PubMed Central

    Timms, Greg P.; de Souza, Paulo A.; Reznik, Leon; Smith, Daniel V.

    2011-01-01

    The automated collection of data (e.g., through sensor networks) has led to a massive increase in the quantity of environmental and other data available. The sheer quantity of data and the growing need for real-time ingestion of sensor data (e.g., alerts and forecasts from physical models) mean that automated Quality Assurance/Quality Control (QA/QC) is necessary to ensure that the data collected are fit for purpose. Current automated QA/QC approaches provide assessments based upon hard classifications of the gathered data, often as a binary decision of good or bad data, which fails to quantify our confidence in the data for use in different applications. We propose a novel framework for automated data quality assessment that uses Fuzzy Logic to provide a continuous scale of data quality. This continuous quality scale is then used to compute error bars on the data, which quantify the data uncertainty and provide a more meaningful measure of the data's fitness for purpose in a particular application than hard quality classifications. The design principles of the framework are presented and enable both data statistics and expert knowledge to be incorporated into the uncertainty assessment. We have implemented and tested the framework on a real-time platform of temperature and conductivity sensors deployed to monitor the Derwent Estuary in Hobart, Australia. Results indicate that the error bars generated by the Fuzzy QA/QC implementation are in good agreement with the error bars manually encoded by a domain expert. PMID:22163714

  14. Teachers' Opinions on Quality Criteria for Competency Assessment Programs

    ERIC Educational Resources Information Center

    Baartman, Liesbeth K. J.; Bastiaens, Theo J.; Kirschner, Paul A.; Van der Vleuten, Cees P. M.

    2007-01-01

    Quality control policies towards Dutch vocational schools have changed dramatically because the government questioned examination quality. Schools must now demonstrate assessment quality to a new Examination Quality Center. Since teachers often design assessments, they must be involved in quality issues. This study therefore explores teachers'…

  15. Estimating the quality of pasturage in the municipality of Paragominas (PA) by means of automatic analysis of LANDSAT data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Dossantos, A. P.; Novo, E. M. L. D.; Duarte, V.

    1981-01-01

    The use of LANDSAT data to evaluate pasture quality in the Amazon region is demonstrated. Pasture degradation in deforested areas of a traditional tropical forest cattle-raising region was estimated. Automatic analysis using interactive multispectral analysis (IMAGE-100) shows that 24% of the deforested areas were occupied by natural vegetation regrowth, 24% by exposed soil, 15% by degraded pastures, and 46% was suitable grazing land.

  16. Milk quality and automatic milking: fat globule size, natural creaming, and lipolysis.

    PubMed

    Abeni, F; Degano, L; Calza, F; Giangiacomo, R; Pirlo, G

    2005-10-01

    Thirty-eight Italian Friesian first-lactation cows were allocated to 2 groups to evaluate the effect of 1) an automatic milking system (AMS) vs. milking in a milking parlor (MP) on milk fat characteristics; and 2) milking interval (≤480, 481 to 600, 601 to 720, and >720 min) on the same variables. Milk fat was analyzed for content (% vol/vol), natural creaming (% of fat), and free fatty acids (FFA, mEq/100 g of fat). The distribution of milk fat globule size was evaluated to calculate the average fat globule diameter (d1), volume-surface average diameter (d32), specific globule surface area, and mean interglobular distance. Milk yield was recorded to calculate hourly milk and milk fat yield. Milking system had no effect on milk yield, milk fat content, and hourly milk fat yield. Milk from the AMS had less natural creaming and more FFA content than milk from the MP. Fat globule size, globular surface area, and interglobular distance were not affected by the milking system per se. Afternoon MP milkings had higher fat content and hourly milk fat yield than AMS milkings when the milking interval was >480 min. Milk FFA content was greater in AMS milkings when the milking interval was ≤480 min than in milkings from the MP and from the AMS when the milking interval was >600 min. Milking interval did not affect fat globule size, expressed as d32. Results from this experiment indicate a limited effect of the AMS per se on milk fat quality; a more important factor seems to be the increase in milking frequency generally associated with the AMS. PMID:16162526

  17. Automatic Detection of Masses in Mammograms Using Quality Threshold Clustering, Correlogram Function, and SVM.

    PubMed

    de Nazaré Silva, Joberth; de Carvalho Filho, Antonio Oseas; Corrêa Silva, Aristófanes; Cardoso de Paiva, Anselmo; Gattass, Marcelo

    2015-06-01

    Breast cancer is the second most common type of cancer in the world. Several computer-aided detection and diagnosis systems have been used to assist health experts and to indicate suspect areas that would be difficult to perceive with the human eye; this approach has aided the detection and diagnosis of cancer. The present work proposes a method for the automatic detection of masses in digital mammograms using quality threshold (QT) clustering, a correlogram function, and the support vector machine (SVM). The methodology comprises the following steps. The first step is preprocessing with a low-pass filter, which increases the contrast scale, followed by an enhancement of the wavelet transform with a linear function. Preprocessing is followed by segmentation using QT; post-processing then selects the best mass candidates by analyzing shape descriptors with the SVM. For the texture feature extraction stage, we used Haralick descriptors and a correlogram function. In the classification stage, the SVM was again used for training, validation, and the final test. The results were as follows: sensitivity 92.31%, specificity 82.2%, accuracy 83.53%, mean rate of false positives per image 1.12, and area under the receiver operating characteristic (ROC) curve 0.8033. Breast cancer is notable for having the highest mortality rate in addition to one of the lowest survival rates after diagnosis. An early diagnosis means a considerable increase in the patients' chance of survival. The methodology proposed herein contributes to early diagnosis and survival, and thus proves to be a useful tool for specialists attempting to anticipate the detection of masses. PMID:25277539

  18. Automatic Vertebral Fracture Assessment System (AVFAS) for Spinal Pathologies Diagnosis Based on Radiograph X-Ray Images

    NASA Astrophysics Data System (ADS)

    Mustapha, Aouache; Hussain, Aini; Samad, Salina Abd; Bin Abdul Hamid, Hamzaini; Ariffin, Ahmad Kamal

    Nowadays, medical imaging has become a major tool in many clinical trials. This is because the technology enables rapid diagnosis with visualization and quantitative assessment that assist health practitioners and professionals. Since the medical and healthcare sector is a vast industry closely related to every citizen's quality of life, image-based medical diagnosis has become one of the important service areas in this sector. As such, a medical diagnostic imaging (MDI) software tool for assessing vertebral fractures is being developed, which we have named AVFAS, short for Automatic Vertebral Fracture Assessment System. The developed software system is capable of indexing, detecting and classifying vertebral fractures by measuring the shape and appearance of vertebrae in radiographic X-ray images of the spine. This paper describes the MDI software tool, which consists of three main sub-systems known as the Medical Image Training & Verification System (MITVS), the Medical Image Measurement & Decision System (MIMDS) and the Medical Image Registration System (MIRS), in terms of its functionality, performance, ongoing research and outstanding technical issues.

  19. Assessing the quality of cost management

    SciTech Connect

    Fayne, V.; McAllister, A.; Weiner, S.B.

    1995-12-31

    Managing environmental programs can be effective only when good cost and cost-related management practices are developed and implemented. The Department of Energy's Office of Environmental Management (EM), recognizing this key role of cost management, initiated several cost and cost-related management activities, including the Cost Quality Management (CQM) Program. The CQM Program includes an assessment activity, Cost Quality Management Assessments (CQMAs), and a technical assistance effort to improve program/project cost effectiveness. CQMAs provide a tool for establishing a baseline of cost-management practices and for measuring improvement in those practices. The result of the CQMA program is an organization that has an increasing cost-consciousness, improved cost-management skills and abilities, and a commitment to respond to the public's concerns for both a safe environment and prudent budget outlays. The CQMA program is part of the foundation of quality management practices in DOE. The CQMA process has contributed to better cost and cost-related management practices by providing measurements and feedback; defining the components of a quality cost-management system; and helping sites develop/improve specific cost-management techniques and methods.

  20. Scene reduction for subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Lewandowska (Tomaszewska), Anna

    2016-01-01

    Evaluation of image quality is important for many image processing systems, such as those used for acquisition, compression, restoration, enhancement, or reproduction. Its measurement is often accompanied by user studies, in which a group of observers rank or rate the results of several algorithms. Such user studies, known as subjective image quality assessment experiments, can be very time consuming and do not guarantee conclusive results. This paper is intended to help design an efficient and rigorous quality assessment experiment. We propose a method of limiting the number of scenes that need to be tested, which can significantly reduce the experimental effort and still capture relevant scene-dependent effects. To achieve this, we employ a clustering technique and evaluate it on the basis of compactness and separation criteria. The correlation between the results obtained from a set of images in an initial database and the results received from the reduced experiment is analyzed. Finally, we propose a procedure for reducing the number of initial scenes. Four different assessment techniques were tested: single stimulus, double stimulus, forced choice, and similarity judgments. We conclude that in most cases, 9 to 12 judgments per evaluated algorithm for a large scene collection are sufficient to reduce the initial set of images.
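
    A minimal sketch of the scene-reduction idea: scenes are clustered on content features and one representative per cluster is retained. The features, the k-means clusterer and the silhouette score used as a compactness/separation proxy are illustrative assumptions, not the paper's exact procedure.

```python
# Cluster scenes on simple content features and keep one representative
# per cluster, shrinking the stimulus set for a subjective experiment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(5)
features = rng.random((40, 3))   # e.g. per-scene luminance, contrast, colourfulness

km = KMeans(n_clusters=8, n_init=10, random_state=5).fit(features)
print("compactness/separation proxy (silhouette):",
      round(silhouette_score(features, km.labels_), 3))

# Keep the scene closest to each cluster centre as the representative.
reps = [int(np.argmin(np.linalg.norm(features - c, axis=1))) for c in km.cluster_centers_]
print("representative scene indices:", sorted(reps))
```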

  1. Automatic brain tumour detection and neovasculature assessment with multiseries MRI analysis.

    PubMed

    Szwarc, Pawel; Kawa, Jacek; Rudzki, Marcin; Pietka, Ewa

    2015-12-01

    In this paper a novel multi-stage automatic method for brain tumour detection and neovasculature assessment is presented. First, brain symmetry is exploited to register the magnetic resonance (MR) series analysed. Then, the intracranial structures are found and the region of interest (ROI) is constrained within them to the tumour and peritumoural areas using the Fluid-Attenuated Inversion Recovery (FLAIR) series. Next, the contrast-enhanced lesions are detected on the basis of T1-weighted (T1W) differential images acquired before and after contrast medium administration. Finally, their vascularisation is assessed based on the Regional Cerebral Blood Volume (RCBV) perfusion maps. The relative RCBV (rRCBV) map is calculated in relation to healthy white matter, also found automatically, and visualised on the analysed series. Three main types of brain tumours, i.e. high-grade (HG) gliomas, metastases and meningiomas, have been subjected to the analysis. The results of contrast-enhanced lesion detection were compared with manual delineations performed independently by two experts, yielding 64.84% sensitivity, 99.89% specificity and a 71.83% Dice Similarity Coefficient (DSC) for twenty analysed studies of subjects with diagnosed brain tumours. PMID:26183648
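
    The Dice Similarity Coefficient reported above has a standard definition, DSC = 2|A ∩ B| / (|A| + |B|). A short sketch with synthetic masks (the segmentation method itself is not reproduced):

```python
# Dice Similarity Coefficient between an automatic lesion mask and an
# expert delineation, on synthetic binary masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((64, 64), bool); auto[20:40, 20:40] = True
manual = np.zeros((64, 64), bool); manual[24:44, 22:42] = True
print(f"DSC: {dice(auto, manual):.3f}")
```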

  2. Quality Assessment of Domesticated Animal Genome Assemblies

    PubMed Central

    Seemann, Stefan E.; Anthon, Christian; Palasca, Oana; Gorodkin, Jan

    2015-01-01

    The era of high-throughput sequencing has made it relatively simple to sequence the genomes and transcriptomes of individuals from many species. In order to analyze the resulting sequencing data, high-quality reference genome assemblies are required. However, this is still a major challenge, and many domesticated animal genomes still need to be sequenced more deeply in order to produce high-quality assemblies. Meanwhile, ironically, the extent to which RNAseq and other next-generation data are produced frequently far exceeds that of the genomic sequence. Furthermore, basic comparative analysis is often affected by the lack of genomic sequence. Herein, we quantify the quality of the genome assemblies of 20 domesticated animals and related species by assessing a range of measurable parameters, and we show that there is a positive correlation between the fraction of mappable reads from RNAseq data and genome assembly quality. We rank the genomes by their assembly quality and discuss the implications for genotype analyses. PMID:27279738

  3. Engineering studies related to Skylab program. [assessment of automatic gain control data

    NASA Technical Reports Server (NTRS)

    Hayne, G. S.

    1973-01-01

    The relationship between the S-193 Automatic Gain Control data and the magnitude of the received signal power was studied in order to characterize performance parameters for Skylab equipment. The r-factor, used for the assessment, is defined to be less than unity and is a function of off-nadir angle, ocean surface roughness, and receiver signal-to-noise ratio. A digital computer simulation was also used to assess the effect of additive receiver (white) noise. The system model for the digital simulation is described, along with the intermediate frequency and video impulse response functions used, details of the input waveforms, and results to date. A specific discussion of the digital computer programs used is also provided.

  4. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    NASA Astrophysics Data System (ADS)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide the improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to quantitatively assess and compare the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65, and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted − achieved) were only −0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, −1.0 ± 1.6% for V65, and −0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly

  5. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans.

    PubMed

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide the improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to quantitatively assess and compare the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65, and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted − achieved) were only −0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, −1.0 ± 1.6% for V65, and −0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate

  6. Assessing the Quality of Bioforensic Signatures

    SciTech Connect

    Sego, Landon H.; Holmes, Aimee E.; Gosink, Luke J.; Webb-Robertson, Bobbie-Jo M.; Kreuzer, Helen W.; Anderson, Richard M.; Brothers, Alan J.; Corley, Courtney D.; Tardiff, Mark F.

    2013-06-04

    We present a mathematical framework for assessing the quality of signature systems in terms of fidelity, cost, risk, and utility—a method we refer to as Signature Quality Metrics (SQM). We demonstrate the SQM approach by assessing the quality of a signature system designed to predict the culture medium used to grow a microorganism. The system consists of four chemical assays designed to identify various ingredients that could be used to produce the culture medium. The analytical measurements resulting from any combination of these four assays can be used in a Bayesian network to predict the probabilities that the microorganism was grown using one of eleven culture media. We evaluated fifteen combinations of the signature system by removing one or more of the assays from the Bayesian network. We demonstrated that SQM can be used to distinguish between the various combinations in terms of attributes of interest. The approach assisted in clearly identifying the assays that were least informative, largely because they could discriminate between only very few culture media, and in particular culture media that are rarely used. There are limitations associated with the data that were used to train and test the signature system. Consequently, our intent is not to draw formal conclusions regarding this particular bioforensic system, but rather to illustrate an analytical approach that could be useful in comparing one signature system to another.

  7. Assessing Assessment Quality: Criteria for Quality Assurance in Design of (Peer) Assessment for Learning--A Review of Research Studies

    ERIC Educational Resources Information Center

    Tillema, Harm; Leenknecht, Martijn; Segers, Mien

    2011-01-01

    The interest in "assessment for learning" (AfL) has resulted in a search for new modes of assessment that are better aligned to students' learning how to learn. However, with the introduction of new assessment tools, questions also arose with respect to the quality of their measurement. On the one hand, the appropriateness of traditional,…

  8. No-reference stereoscopic image quality assessment

    NASA Astrophysics Data System (ADS)

    Akhter, Roushain; Parvez Sazzad, Z. M.; Horita, Y.; Baltes, J.

    2010-02-01

    Display of stereo images is widely used to enhance the viewing experience of three-dimensional imaging and communication systems. In this paper, we propose a method for estimating the quality of stereoscopic images using segmented image features and disparity. The method is inspired by the human visual system. We believe the perceived distortion and disparity of any stereoscopic display are strongly dependent on local features, such as edge (non-plane) and non-edge (plane) areas. Therefore, a no-reference perceptual quality assessment method is developed for JPEG-coded stereoscopic images based on segmented local features of artifacts and disparity. Local feature information, such as edge and non-edge area based relative disparity estimation, as well as the blockiness and blur within the blocks of images, is evaluated in this method. Two subjective stereo image databases are used to evaluate the performance of our method. The subjective experiment results indicate that our model has sufficient prediction performance.

  9. Image quality assessment and human visual system

    NASA Astrophysics Data System (ADS)

    Gao, Xinbo; Lu, Wen; Tao, Dacheng; Li, Xuelong

    2010-07-01

    This paper summarizes the state of the art of image quality assessment (IQA) and the human visual system (HVS). IQA provides an objective index or real value to measure the quality of a specified image. Since human beings are the ultimate receivers of visual information in practical applications, the most reliable IQA is to build a computational model that mimics the HVS. According to the properties and cognitive mechanisms of the HVS, the available HVS-based IQA methods can be divided into two categories, i.e., bionics methods and engineering methods. This paper briefly introduces the basic theories and development histories of these two kinds of HVS-based IQA methods. Finally, some promising research issues are pointed out at the end of the paper.

  10. Quality assessment of strawberries (Fragaria species).

    PubMed

    Azodanlou, Ramin; Darbellay, Charly; Luisier, Jean-Luc; Villettaz, Jean-Claude; Amadò, Renato

    2003-01-29

    Several cultivars of strawberries (Fragaria sp.), grown under different conditions, were analyzed by both sensory and instrumental methods. The overall appreciation, as expressed by consumers, was mainly reflected by attributes such as sweetness and aroma. No strong correlation was obtained with odor, acidity, juiciness, or firmness. The sensory quality of strawberries can be assessed with a good level of confidence by measuring the total sugar level (°Brix) and the total amount of volatile compounds. Sorting samples using the score obtained with a hedonic test (called the "hedonic classification method") allowed the correlation between consumers' appreciation and instrumental data to be considerably strengthened. On the basis of the results obtained, a quality model was proposed. Quantitative GC-FID analyses were performed to determine the major aroma components of strawberries. Methyl butanoate, ethyl butanoate, methyl hexanoate, cis-3-hexenyl acetate, and linalool were identified as the most important compounds for the taste and aroma of strawberries. PMID:12537447

  11. Bacteriological Assessment of Spoon River Water Quality

    PubMed Central

    Lin, Shundar; Evans, Ralph L.; Beuscher, Davis B.

    1974-01-01

    Data from a study of five stations on the Spoon River, Ill., during June 1971 through May 1973 were analyzed for compliance with the Illinois Pollution Control Board's water quality standard of a geometric mean limitation of 200 fecal coliforms per 100 ml. This bacterial limit was achieved about 20% of the time during June 1971 through May 1972, and was never achieved during June 1972 through May 1973. Ratios of fecal coliform to total coliform are presented. Using fecal coliform-to-fecal streptococcus ratios to sort out the origins of fecal pollution, it was evident that concern must be expressed not only for municipal wastewater effluents discharged to the receiving stream, but also for nonpoint sources of pollution when assessing the bacterial quality of a stream. PMID:4604145
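
    The compliance criterion above lends itself to a worked arithmetic example: the geometric mean of coliform counts is compared with the 200 per 100 ml limit. The counts below are illustrative, not the Spoon River data.

```python
# Geometric-mean compliance check against the 200/100 ml fecal coliform standard.
import numpy as np

counts = np.array([120, 340, 90, 800, 150])        # fecal coliforms per 100 ml
geo_mean = np.exp(np.mean(np.log(counts)))
print(f"geometric mean: {geo_mean:.0f} per 100 ml ->",
      "complies" if geo_mean <= 200 else "exceeds the 200/100 ml standard")
```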

  12. NATIONAL CROP LOSS ASSESSMENT NETWORK: QUALITY ASSURANCE PROGRAM (JOURNAL VERSION)

    EPA Science Inventory

    A quality assurance program was incorporated into the National Crop Loss Assessment Network (NCLAN) program, designed to assess the economic impacts of gaseous air pollutants on major agricultural crops in the United States. The quality assurance program developed standardized re...

  13. Peer Review and Quality Assessment in Complete Denture Education.

    ERIC Educational Resources Information Center

    Novetsky, Marvin; Razzoog, Michael E.

    1981-01-01

    A program in peer review and quality assessment at the University of Michigan denture department is described. The program exposes students to peer review in order to assess the quality of their treatment. (Author/MLW)

  14. A device for automatic measurement of writhing and its application to the assessment of analgesic agents.

    PubMed

    Adachi, K

    1994-10-01

    A device was developed for automatically measuring writhing in mice so that it could be applied to the assessment of analgesic agents. The device was composed of a specially designed container equipped with a detector, namely a mechano-electrical transducer for writhing. The detector was made up of units, each consisting of a string, two plates, and two strain gauges; each end of the string was connected to one of the plates, to which one of the strain gauges was attached. The change in tension of the string due to writhing was converted into mechanical strain of the plates and then into a resistance change of the strain gauges. The resistance change was amplified by a Wheatstone bridge circuit connected to a differential amplifier, a high-pass filter, comparator(s), and a monostable multivibrator to obtain the electrical signal for writhing. Using this device, writhing was continuously measured, and various types of analgesic agents were evaluated. The results suggest that this device has sufficient accuracy both for the detection of writhing and for the evaluation of analgesics. It has the advantage of automatic measurement of writhing, in contrast to the conventional visual observation method. PMID:7865865

  15. A Pressure Plate-Based Method for the Automatic Assessment of Foot Strike Patterns During Running.

    PubMed

    Santuz, Alessandro; Ekizos, Antonis; Arampatzis, Adamantios

    2016-05-01

    The foot strike pattern (FSP, a description of how the foot touches the ground at impact) is recognized to be a predictor of both performance and injury risk. The objective of the current investigation was to validate an original foot strike pattern assessment technique based on numerical analysis of the foot pressure distribution. We analyzed the strike patterns of 145 healthy men and women (85 male, 60 female) during running. The participants ran on a treadmill with an integrated pressure plate at three different speeds: preferred (shod and barefoot, 2.8 ± 0.4 m/s), faster (shod, 3.5 ± 0.6 m/s) and slower (shod, 2.3 ± 0.3 m/s). A custom-designed algorithm allowed automatic footprint recognition and FSP evaluation. Incomplete footprints were simultaneously identified and corrected by the software itself. The widely used technique of analyzing high-speed video recordings was checked for its reliability and was used to validate the numerical technique. The automatic numerical approach showed good conformity with the reference video-based technique (ICC = 0.93, p < 0.01). The great improvement in data throughput and the increased completeness of the results allow the use of this software as a powerful feedback tool in a simple experimental setup. PMID:26471786

  16. Quantitative assessment of automatic reconstructions of branching systems obtained from laser scanning

    PubMed Central

    Boudon, Frédéric; Preuksakarn, Chakkrit; Ferraro, Pascal; Diener, Julien; Nacry, Philippe; Nikinmaa, Eero; Godin, Christophe

    2014-01-01

    Background and Aims Automatic acquisition of plant architecture is a major challenge for the construction of quantitative models of plant development. Recently, 3-D laser scanners have made it possible to acquire 3-D images representing a sampling of an object's surface. A number of specific methods have been proposed to reconstruct plausible branching structures from this new type of data, but critical questions remain regarding their suitability and accuracy before they can be fully exploited for use in biological applications. Methods In this paper, an evaluation framework to assess the accuracy of tree reconstructions is presented. The use of this framework is illustrated on a selection of laser scans of trees. Scanned data were manipulated by experienced researchers to produce reference tree reconstructions against which comparisons could be made. The evaluation framework is given two tree structures and compares both their elements and their topological organization. Similar elements are identified based on geometric criteria using an optimization algorithm. The organization of these elements is then compared and their similarity quantified. From these analyses, two indices of geometrical and structural similarities are defined, and the automatic reconstructions can thus be compared with the reference structures in order to assess their accuracy. Key Results The evaluation framework that was developed was successful at capturing the variation in similarities between two structures as different levels of noise were introduced. The framework was used to compare three different reconstruction methods taken from the literature, and allowed sensitive parameters of each one to be determined. The framework was also generalized for the evaluation of root reconstruction from 2-D images and demonstrated its sensitivity to higher architectural complexity of structure which was not detected with a global evaluation criterion. Conclusions The evaluation framework

  17. Automatic assessment of average diaphragm motion trajectory from 4DCT images through machine learning

    PubMed Central

    Li, Guang; Wei, Jie; Huang, Hailiang; Gaebler, Carl Philipp; Yuan, Amy; Deasy, Joseph O

    2016-01-01

    To automatically estimate the average diaphragm motion trajectory (ADMT) based on four-dimensional computed tomography (4DCT), facilitating clinical assessment of respiratory motion and motion variation and retrospective motion studies, we have developed an effective motion extraction approach and a machine-learning-based algorithm to estimate the ADMT. Eleven patients with 22 sets of 4DCT images (4DCT1 at simulation and 4DCT2 at treatment) were studied. After automatically segmenting the lungs, the differential volume-per-slice (dVPS) curves of the left and right lungs were calculated as a function of slice number for each phase with respect to full exhalation. After a 5-slice moving average was performed, the discrete cosine transform (DCT) was applied to analyze the dVPS curves in the frequency domain. The dimensionality of the spectrum data was reduced by using the several lowest frequency coefficients (fv) that account for most of the spectrum energy (Σfv²). The multiple linear regression (MLR) method was then applied to determine the weights of these frequencies by fitting the ground truth—the measured ADMT, which is represented by three pivot points of the diaphragm on each side. The ‘leave-one-out’ cross-validation method was employed to analyze the statistical performance of the prediction results in three image sets: 4DCT1, 4DCT2, and 4DCT1 + 4DCT2. The seven lowest frequencies in the DCT domain were found to be sufficient to approximate the patient dVPS curves (R = 91%−96% in the MLR fitting). The mean error in the predicted ADMT using the leave-one-out method was 0.3 ± 1.9 mm for the left-side diaphragm and 0.0 ± 1.4 mm for the right-side diaphragm. The prediction error is lower in 4DCT2 than in 4DCT1, and is lowest in 4DCT1 and 4DCT2 combined. This frequency-analysis-based machine learning technique was employed to predict the ADMT automatically with an acceptable error (0.2 ± 1.6 mm). This volumetric approach is not affected by the presence of the lung tumors
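
    A minimal sketch of the estimation chain described above (5-slice moving average, truncated DCT spectrum, multiple linear regression), with synthetic dVPS curves and target values; the lung segmentation and pivot-point measurement are not reproduced.

```python
# Smooth each dVPS curve with a 5-point moving average, keep the 7 lowest
# DCT coefficients as features, and fit a multiple linear regression to a
# diaphragm-position target. All data are synthetic stand-ins.
import numpy as np
from scipy.fft import dct
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n_cases, n_slices, n_freqs = 22, 120, 7

curves = np.cumsum(rng.standard_normal((n_cases, n_slices)), axis=1)  # dVPS stand-ins
kernel = np.ones(5) / 5.0
smoothed = np.array([np.convolve(c, kernel, mode="same") for c in curves])
features = dct(smoothed, norm="ortho", axis=1)[:, :n_freqs]           # 7 lowest coeffs
target = features @ rng.standard_normal(n_freqs) + 0.1 * rng.standard_normal(n_cases)

model = LinearRegression().fit(features, target)
print("MLR fit R^2:", round(model.score(features, target), 3))
```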

  18. No training blind image quality assessment

    NASA Astrophysics Data System (ADS)

    Chu, Ying; Mou, Xuanqin; Ji, Zhen

    2014-03-01

    State-of-the-art blind image quality assessment (IQA) methods generally extract perceptual features from training images and feed them into a support vector machine (SVM) to learn a regression model, which can then be used to predict the quality scores of testing images. However, these methods require complicated training and learning, and the evaluation results are sensitive to image content and learning strategies. In this paper, two novel blind IQA metrics that require neither training nor learning are proposed. The new methods extract perceptual features, i.e., the shape consistency of conditional histograms, from the joint histograms of neighboring divisive normalization transform coefficients of distorted images, and then compare the length attribute of the extracted features with that of the reference and degraded images in the LIVE database. In the first method, a cluster center is found in the feature attribute space of the natural reference images, and the distance between the feature attribute of the distorted image and the cluster center is adopted as the quality label. The second method uses the feature attributes and subjective scores of all the images in the LIVE database to construct a dictionary, and the final quality score is calculated by interpolating the subjective scores of nearby words in the dictionary. Unlike traditional SVM-based blind IQA methods, the proposed metrics have explicit expressions, which reflect the relationships between the perceptual features and image quality well. Experimental results on publicly available databases such as LIVE, CSIQ and TID2008 have shown the effectiveness of the proposed methods, and the performance is fairly acceptable.

  19. Quality Assessment Dimensions of Distance Teaching/Learning Curriculum Designing

    ERIC Educational Resources Information Center

    Volungeviciene, Airina; Tereseviciene, Margarita

    2008-01-01

    The paper presents scientific literature analysis in the area of distance teaching/learning curriculum designing and quality assessment. The aim of the paper is to identify quality assessment dimensions of distance teaching/learning curriculum designing. The authors of the paper agree that quality assessment should be considered during the…

  20. Performance assessment of an RFID system for automatic surgical sponge detection in a surgery room.

    PubMed

    Dinis, H; Zamith, M; Mendes, P M

    2015-01-01

    A retained surgical instrument is a frequent incident in surgery rooms around the world, despite being considered an avoidable mistake. Hence, an automatic solution for detecting retained surgical instruments is desirable. In this paper, the use of millimeter waves in the 60 GHz band for surgical-material RFID purposes is evaluated. An experimental procedure to assess the suitability of this frequency range for short-distance communications with multiple obstacles was performed. Furthermore, an antenna suitable for incorporation in surgical materials, such as sponges, is presented. The antenna's operating characteristics are evaluated to determine whether it is adequate for the studied application over the given frequency range and under different operating conditions, such as varying sponge water content. PMID:26736960

  1. Content-aware objective video quality assessment

    NASA Astrophysics Data System (ADS)

    Ortiz-Jaramillo, Benhur; Niño-Castañeda, Jorge; Platiša, Ljiljana; Philips, Wilfried

    2016-01-01

    Since the end user of video-based systems is often a human observer, prediction of user-perceived video quality (PVQ) is an important task for increasing user satisfaction. Despite the large variety of objective video quality measures (VQMs), their lack of generalizability remains a problem. This is mainly due to the strong dependency between PVQ and video content. Although this problem is well known, few existing VQMs directly account for the influence of video content on PVQ. Recently, we proposed a method to predict PVQ by introducing relevant video content features into the computation of video distortion measures. The method is based on analyzing the level of spatiotemporal activity in the video and using these activity levels as parameters of anthropomorphic video distortion models. We focus on the experimental evaluation of the proposed methodology based on a total of five public databases, four different objective VQMs, and 105 content-related indexes. Additionally, relying on the proposed method, we introduce an approach for selecting the levels of video distortion for subjective quality assessment studies. Our results suggest that when adequately combined with content-related indexes, even very simple distortion measures (e.g., peak signal-to-noise ratio) are able to achieve high performance, i.e., high correlation between the VQM and the PVQ. In particular, we have found that by incorporating video content features, it is possible to increase the performance of the VQM by up to 20% relative to its non-content-aware baseline.
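
    As a loose illustration of pairing a simple distortion measure with a content index, the sketch below computes PSNR together with an ITU-T P.910-style spatial-activity index; the way the two are combined here is a stand-in assumption, not the authors' anthropomorphic distortion model.

    ```python
    # Illustrative only: PSNR modulated by a Sobel-based spatial-activity index.
    import numpy as np
    from scipy import ndimage

    def psnr(ref, dist, peak=255.0):
        mse = np.mean((ref.astype(float) - dist.astype(float)) ** 2)
        return 10 * np.log10(peak**2 / mse)

    def spatial_activity(frame):
        """Std of the Sobel gradient magnitude (P.910-style SI)."""
        gx = ndimage.sobel(frame.astype(float), axis=0)
        gy = ndimage.sobel(frame.astype(float), axis=1)
        return np.std(np.hypot(gx, gy))

    rng = np.random.default_rng(2)
    ref = rng.integers(0, 256, (480, 640))
    dist = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)

    si = spatial_activity(ref)
    # Hypothetical mapping: busier content masks distortion somewhat.
    predicted_quality = psnr(ref, dist) * (1 + 0.01 * np.log1p(si))
    print(predicted_quality)
    ```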

  2. Image quality assessment in the low quality regime

    NASA Astrophysics Data System (ADS)

    Pinto, Guilherme O.; Hemami, Sheila S.

    2012-03-01

    Traditionally, image quality estimators have been designed and optimized to operate over the entire quality range of images in a database, from very low quality to visually lossless. However, if quality estimation is limited to a smaller quality range, their performance drops dramatically, and many image applications only operate over such a smaller range. This paper is concerned with one such range, the low-quality regime (LQR), defined as the interval of perceived quality scores, at the low-quality end of image databases, over which perceived quality scores are linearly related to perceived utility scores. Using this definition, this paper describes a subjective experiment to determine the low-quality regime for databases of distorted images that include perceived quality scores but not perceived utility scores, such as CSIQ and LIVE. The performance of several image utility and quality estimators is evaluated in the low-quality regime, indicating that utility estimators can be successfully applied to estimate perceived quality in this regime. Omission of the lowest-frequency image content is shown to be crucial to the performance of both kinds of estimators. Additionally, this paper establishes an upper bound on the performance of quality estimators in the LQR, using a family of quality estimators based on VIF. The resulting optimal quality estimator indicates that estimating quality in the low-quality regime is robust to the exact frequency pooling weights, and that near-optimal performance can be achieved by a variety of estimators provided that they substantially emphasize the appropriate frequency content.

  3. 2003 SNL ASCI applications software quality engineering assessment report.

    SciTech Connect

    Schofield, Joseph Richard, Jr.; Ellis, Molly A.; Williamson, Charles Michael; Bonano, Lora A.

    2004-02-01

    This document describes the 2003 SNL ASCI Software Quality Engineering (SQE) assessment of twenty ASCI application code teams and the results of that assessment. The purpose of this assessment was to determine code team compliance with the Sandia National Laboratories ASCI Applications Software Quality Engineering Practices, Version 2.0 as part of an overall program assessment.

  4. X-ray absorptiometry of the breast using mammographic exposure factors: application to units featuring automatic beam quality selection.

    PubMed

    Kotre, C J

    2010-06-01

    A number of studies have identified the relationship between the visual appearance of high breast density at mammography and an increased risk of breast cancer. Approaches to quantify the amount of glandular tissue within the breast from mammography have so far concentrated on image-based methods. Here, it is proposed that the X-ray parameters automatically selected by the mammography unit can be used to estimate the thickness of glandular tissue overlying the automatic exposure sensor area, provided that the unit can be appropriately calibrated. This is a non-trivial task for modern mammography units that feature automatic beam quality selection, as the number of tube potential and X-ray target/filter combinations used to cover the range of breast sizes and compositions can be large, leading to a potentially unworkable number of curve fits and interpolations. Using appropriate models for the attenuation of the glandular breast in conjunction with a constrained set of physical phantom measurements, it is demonstrated that calibration for X-ray absorptiometry can be achieved despite the large number of possible exposure factor combinations employed by modern mammography units. The main source of error in the estimated glandular tissue thickness using this method is shown to be uncertainty in the measured compressed breast thickness. An additional correction for this source of error is investigated and applied. Initial surveys of glandular thickness for a cohort of women undergoing breast screening are presented. PMID:20505033

  5. Automatic Assessment of Acquisition and Transmission Losses in Indian Remote Sensing Satellite Data

    NASA Astrophysics Data System (ADS)

    Roy, D.; Purna Kumari, B.; Manju Sarma, M.; Aparna, N.; Gopal Krishna, B.

    2016-06-01

    The quality of remote sensing data is an important parameter that defines the extent of its usability in various applications. The data from remote sensing satellites are received as raw data frames at the ground station. These data may be corrupted by losses due to interference during data transmission, data acquisition and sensor anomalies. It is therefore important to assess the quality of the raw data before product generation, for early anomaly detection, faster corrective actions and minimization of product rejection. Manual screening of raw images is a time-consuming process and not very accurate. In this paper, an automated process for identification and quantification of losses in raw data, such as pixel dropout, line loss and data loss due to sensor anomalies, is discussed. Quality assessment of raw scenes based on these losses is also explained. This process is introduced in the data pre-processing stage and gives crucial data quality information to users at the time of browsing data for product ordering. It has also improved the product generation workflow by enabling faster and more accurate quality estimation.
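
    A minimal sketch of how such screening for line loss and pixel dropout might look, assuming lost samples arrive as a known fill value and that a simple loss-percentage threshold decides usability; both assumptions, along with the function name, are illustrative.

    ```python
    # Hypothetical raw-frame screening: fully blank scan lines count as line
    # loss, remaining fill-valued samples as pixel dropout.
    import numpy as np

    def assess_raw_frame(frame, fill_value=0, reject_threshold=5.0):
        line_lost = np.all(frame == fill_value, axis=1)        # blank lines
        pixel_dropout = (frame == fill_value) & ~line_lost[:, None]
        loss_pct = 100.0 * (line_lost.sum() * frame.shape[1]
                            + pixel_dropout.sum()) / frame.size
        return {"lost_lines": int(line_lost.sum()),
                "dropout_pixels": int(pixel_dropout.sum()),
                "loss_percent": loss_pct,
                "usable": loss_pct < reject_threshold}

    frame = np.random.default_rng(3).integers(1, 255, (1024, 1024))
    frame[100:103, :] = 0              # simulate three lost scan lines
    print(assess_raw_frame(frame))
    ```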

  6. Quality Assessment of Landsat Surface Reflectance Products Using MODIS Data

    NASA Technical Reports Server (NTRS)

    Feng, Min; Huang, Chengquan; Channan, Saurabh; Vermote, Eric; Masek, Jeffrey G.; Townshend, John R.

    2012-01-01

    Surface reflectance adjusted for atmospheric effects is a primary input for land cover change detection and for developing many higher level surface geophysical parameters. With the development of automated atmospheric correction algorithms, it is now feasible to produce large quantities of surface reflectance products using Landsat images. Validation of these products requires in situ measurements, which either do not exist or are difficult to obtain for most Landsat images. The surface reflectance products derived using data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), however, have been validated more comprehensively. Because the MODIS on the Terra platform and the Landsat 7 are only half an hour apart following the same orbit, and each of the 6 Landsat spectral bands overlaps with a MODIS band, good agreements between MODIS and Landsat surface reflectance values can be considered indicators of the reliability of the Landsat products, while disagreements may suggest potential quality problems that need to be further investigated. Here we develop a system called Landsat-MODIS Consistency Checking System (LMCCS). This system automatically matches Landsat data with MODIS observations acquired on the same date over the same locations and uses them to calculate a set of agreement metrics. To maximize its portability, Java and open-source libraries were used in developing this system, and object-oriented programming (OOP) principles were followed to make it more flexible for future expansion. As a highly automated system designed to run as a stand-alone package or as a component of other Landsat data processing systems, this system can be used to assess the quality of essentially every Landsat surface reflectance image where spatially and temporally matching MODIS data are available. The effectiveness of this system was demonstrated using it to assess preliminary surface reflectance products derived using the Global Land Survey (GLS) Landsat
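
    A sketch of the kind of agreement metrics such a consistency-checking system could compute for spatially and temporally matched Landsat-MODIS reflectance pairs is shown below; band pairing, aggregation to MODIS resolution, and cloud masking are omitted, and the data are synthetic.

    ```python
    # Illustrative agreement metrics for matched reflectance pairs.
    import numpy as np

    def agreement_metrics(landsat, modis):
        diff = landsat - modis
        return {"mean_bias": float(np.mean(diff)),
                "rmsd": float(np.sqrt(np.mean(diff**2))),
                "correlation": float(np.corrcoef(landsat, modis)[0, 1])}

    rng = np.random.default_rng(4)
    modis = rng.uniform(0.0, 0.4, 5000)                  # MODIS reflectance
    landsat = modis + rng.normal(0, 0.01, 5000) + 0.005  # Landsat with small bias
    print(agreement_metrics(landsat, modis))
    ```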

  7. Quality assessment of Landsat surface reflectance products using MODIS data

    NASA Astrophysics Data System (ADS)

    Feng, Min; Huang, Chengquan; Channan, Saurabh; Vermote, Eric F.; Masek, Jeffrey G.; Townshend, John R.

    2012-01-01

    Surface reflectance adjusted for atmospheric effects is a primary input for land cover change detection and for developing many higher level surface geophysical parameters. With the development of automated atmospheric correction algorithms, it is now feasible to produce large quantities of surface reflectance products using Landsat images. Validation of these products requires in situ measurements, which either do not exist or are difficult to obtain for most Landsat images. The surface reflectance products derived using data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), however, have been validated more comprehensively. Because the MODIS on the Terra platform and the Landsat 7 are only half an hour apart following the same orbit, and each of the 6 Landsat spectral bands overlaps with a MODIS band, good agreements between MODIS and Landsat surface reflectance values can be considered indicators of the reliability of the Landsat products, while disagreements may suggest potential quality problems that need to be further investigated. Here we develop a system called Landsat-MODIS Consistency Checking System (LMCCS). This system automatically matches Landsat data with MODIS observations acquired on the same date over the same locations and uses them to calculate a set of agreement metrics. To maximize its portability, Java and open-source libraries were used in developing this system, and object-oriented programming (OOP) principles were followed to make it more flexible for future expansion. As a highly automated system designed to run as a stand-alone package or as a component of other Landsat data processing systems, this system can be used to assess the quality of essentially every Landsat surface reflectance image where spatially and temporally matching MODIS data are available. The effectiveness of this system was demonstrated using it to assess preliminary surface reflectance products derived using the Global Land Survey (GLS) Landsat

  8. Validation of the automatic image analyser to assess retinal vessel calibre (ALTAIR): a prospective study protocol

    PubMed Central

    Garcia-Ortiz, Luis; Gómez-Marcos, Manuel A; Recio-Rodríguez, Jose I; Maderuelo-Fernández, Jose A; Chamoso-Santos, Pablo; Rodríguez-González, Sara; de Paz-Santana, Juan F; Merchan-Cifuentes, Miguel A; Corchado-Rodríguez, Juan M

    2014-01-01

    Introduction The fundus examination is a non-invasive evaluation of the microcirculation of the retina. The aim of the present study is to develop and validate (reliability and validity) the ALTAIR software platform (Automatic image analyser to assess retinal vessel calibre) in order to analyse its utility in different clinical environments. Methods and analysis A cross-sectional study in the first phase and a prospective observational study in the second with 4 years of follow-up. The study will be performed in a primary care centre and will include 386 participants. The main measurements will include carotid intima-media thickness, pulse wave velocity by Sphygmocor, cardio-ankle vascular index through the VASERA VS-1500, cardiac evaluation by a digital ECG and renal injury by microalbuminuria and glomerular filtration. The retinal vascular evaluation will be performed using a TOPCON TRCNW200 non-mydriatic retinal camera to obtain digital images of the retina, and the developed software (ALTAIR) will be used to automatically calculate the calibre of the retinal vessels, the vascularised area and the branching pattern. For software validation, the intraobserver and interobserver reliability, the concurrent validity of the vascular structure and function, as well as the association between the estimated retinal parameters and the evolution or onset of new lesions in the target organs or cardiovascular diseases will be examined. Ethics and dissemination The study has been approved by the clinical research ethics committee of the healthcare area of Salamanca. All study participants will sign an informed consent to agree to participate in the study in compliance with the Declaration of Helsinki and the WHO standards for observational studies. Validation of this tool will provide greater reliability to the analysis of retinal vessels by decreasing the intervention of the observer and will result in increased validity through the use of additional information, especially

  9. Fully automatic measuring system for assessing masticatory performance using β-carotene-containing gummy jelly.

    PubMed

    Nokubi, T; Yasui, S; Yoshimuta, Y; Kida, M; Kusunoki, C; Ono, T; Maeda, Y; Nokubi, F; Yokota, K; Yamamoto, T

    2013-02-01

    Despite the importance of masticatory performance in health promotion, assessment of masticatory performance has not been widely conducted to date because the methods are labour intensive. The purpose of this study is to investigate the accuracy of a novel system for automatically measuring masticatory performance that uses β-carotene-containing gummy jelly. To investigate the influence of rinsing time on comminuted jelly pieces expectorated from the oral cavity, divided jelly pieces were treated with two types of dye solution and then rinsed for various durations. Changes in photodiode (light receiver) voltages from light emitted through a solution of dissolved β-carotene from jelly pieces under each condition were compared with those of unstained jelly. To investigate the influence of dissolving time, changes in light receiver voltage resulting from an increase in division number were compared between three dissolving times. For all forms of divided test jelly and rinsing times, no significant differences in light receiver voltage were observed between any of the stain groups and the control group. Voltages decreased in a similar manner for all forms of divided jelly as dissolving time increased. The highest coefficient of determination (R² = 0.979) between the obtained voltage and the increased surface area of each divided jelly was seen at the 10 s dissolving time. These results suggested that our fully automatic system can estimate the increased surface area of comminuted gummy jelly as a parameter of masticatory performance with high accuracy after rinsing and dissolving operations of 10 s each. PMID:22882741

  10. On the dependence of information display quality requirements upon human characteristics and pilot/automatics relations

    NASA Technical Reports Server (NTRS)

    Wilckens, V.

    1972-01-01

    Present information display concepts for pilot landing guidance are outlined considering manual control as well as substitution of man by fully competent automatics. Display improvements are achieved by compressing the distributed indicators into an accumulative display and thus reducing information scanning. Complete integration of quantitative indications, outer loop information, and real world display in a pictorial information channel geometry constitutes an interface with human ability to differentiate and integrate for optimal manual control of the aircraft.

  11. Groundwater quality data from the National Water-Quality Assessment Project, May 2012 through December 2013

    USGS Publications Warehouse

    Arnold, Terri L.; Desimone, Leslie A.; Bexfield, Laura M.; Lindsey, Bruce D.; Barlow, Jeannie R.; Kulongoski, Justin T.; Musgrove, Marylynn; Kingsbury, James A.; Belitz, Kenneth

    2016-01-01

    Groundwater-quality data were collected from 748 wells as part of the National Water-Quality Assessment Project of the U.S. Geological Survey National Water-Quality Program from May 2012 through December 2013. The data were collected from four types of well networks: principal aquifer study networks, which assess the quality of groundwater used for public water supply; land-use study networks, which assess land-use effects on shallow groundwater quality; major aquifer study networks, which assess the quality of groundwater used for domestic supply; and enhanced trends networks, which evaluate the time scales during which groundwater quality changes. Groundwater samples were analyzed for a large number of water-quality indicators and constituents, including major ions, nutrients, trace elements, volatile organic compounds, pesticides, and radionuclides. These groundwater quality data are tabulated in this report. Quality-control samples also were collected; data from blank and replicate quality-control samples are included in this report.

  12. Quality assessment of clinical computed tomography

    NASA Astrophysics Data System (ADS)

    Berndt, Dorothea; Luckow, Marlen; Lambrecht, J. Thomas; Beckmann, Felix; Müller, Bert

    2008-08-01

    Three-dimensional images are vital for diagnosis in dentistry and cranio-maxillofacial surgery. Artifacts caused by highly absorbing components such as metallic implants, however, limit the value of the tomograms. The dominant artifacts observed are blowout and streaks. By investigating the artifacts generated by metallic implants in a pig jaw, the data acquisition for dental patients can be optimized in a quantitative manner. A freshly explanted pig jaw including the related soft tissues served as a model system. Images were recorded varying the accelerating voltage and the beam current. Comparison with multi-slice and micro computed tomography (CT) helps to validate the approach with the dental CT system (3D-Accuitomo, Morita, Japan). The data are rigidly registered to comparatively quantify their quality. The micro CT data provide a reasonable standard for quantitative data assessment of clinical CT.

  13. Set up of an automatic water quality sampling system in irrigation agriculture.

    PubMed

    Heinz, Emanuel; Kraft, Philipp; Buchen, Caroline; Frede, Hans-Georg; Aquino, Eugenio; Breuer, Lutz

    2013-01-01

    We have developed a high-resolution automatic sampling system for continuous in situ measurements of stable water isotopic composition and nitrogen solutes along with hydrological information. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring Down Spectrometry System (WS-CRDS) for stable water isotope analysis (δ2H and δ18O), a reagentless hyperspectral UV photometer (ProPS) for monitoring nitrate content and various water level sensors for hydrometric information. The automatic sampling system consists of different sampling stations equipped with pumps, a switch cabinet for valve and pump control and a computer operating the system. The complete system is operated via internet-based control software, allowing supervision from nearly anywhere. The system is currently set up at the International Rice Research Institute (Los Baños, The Philippines) in a diversified rice growing system to continuously monitor water and nutrient fluxes. Here we present the system's technical set-up and provide initial proof-of-concept with results for the isotopic composition of different water sources and nitrate values from the 2012 dry season. PMID:24366178

  14. Set Up of an Automatic Water Quality Sampling System in Irrigation Agriculture

    PubMed Central

    Heinz, Emanuel; Kraft, Philipp; Buchen, Caroline; Frede, Hans-Georg; Aquino, Eugenio; Breuer, Lutz

    2014-01-01

    We have developed a high-resolution automatic sampling system for continuous in situ measurements of stable water isotopic composition and nitrogen solutes along with hydrological information. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring Down Spectrometry System (WS-CRDS) for stable water isotope analysis (δ2H and δ18O), a reagentless hyperspectral UV photometer (ProPS) for monitoring nitrate content and various water level sensors for hydrometric information. The automatic sampling system consists of different sampling stations equipped with pumps, a switch cabinet for valve and pump control and a computer operating the system. The complete system is operated via internet-based control software, allowing supervision from nearly anywhere. The system is currently set up at the International Rice Research Institute (Los Baños, The Philippines) in a diversified rice growing system to continuously monitor water and nutrient fluxes. Here we present the system's technical set-up and provide initial proof-of-concept with results for the isotopic composition of different water sources and nitrate values from the 2012 dry season. PMID:24366178

  15. Examining the Importance of Assessing Rapid Automatized Naming (RAN) for the Identification of Children with Reading Difficulties

    ERIC Educational Resources Information Center

    Georgiou, George K.; Parrila, Rauno; Manolitsis, George; Kirby, John R.

    2011-01-01

    The purpose of this study was to assess the diagnostic value of rapid automatized naming (RAN) in the identification of poor readers in two alphabetic orthographies: English and Greek. Ninety-seven English-speaking Canadian (mean age = 66.70 months) and 70 Greek children (mean age = 67.60 months) were followed from Kindergarten until Grade 3. In…

  16. Assessing the Effects of Automatically Delivered Stimulation on the Use of Simple Exercise Tools by Students with Multiple Disabilities.

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Oliva, Doretta; Campodonico, Francesca; Groeneweg, Jop

    2003-01-01

    This study assessed the effects of automatically delivered stimulation on the activity level and mood of three students with multiple disabilities during their use of a stepper and a stationary bicycle. Stimuli from a pool of favorite stimulus events were delivered electronically while students were actively exercising. Findings indicated the…

  17. Using Automatic Item Generation to Meet the Increasing Item Demands of High-Stakes Educational and Occupational Assessment

    ERIC Educational Resources Information Center

    Arendasy, Martin E.; Sommer, Markus

    2012-01-01

    The use of new test administration technologies such as computerized adaptive testing in high-stakes educational and occupational assessments demands large item pools. Classic item construction processes and previous approaches to automatic item generation faced the problems of a considerable loss of items after the item calibration phase. In this…

  18. Comprehensive automatic assessment of retinal vascular abnormalities for computer-assisted retinopathy grading.

    PubMed

    Joshi, Vinayak; Agurto, Carla; VanNess, Richard; Nemeth, Sheila; Soliz, Peter; Barriga, Simon

    2014-01-01

    One of the most important signs of systemic disease that presents on the retina is vascular abnormality, such as in hypertensive retinopathy. Manual analysis of fundus images by human readers is qualitative and lacks accuracy, consistency and repeatability. Present semi-automatic methods for vascular evaluation are reported to increase accuracy and reduce reader variability, but require extensive reader interaction, which limits the efficiency gained from the software. Automation thus holds a twofold promise: first, to decrease variability while increasing accuracy, and second, to increase efficiency. In this paper we propose fully automated software as a second-reader system for comprehensive assessment of retinal vasculature, which aids readers in the quantitative characterization of vessel abnormalities in fundus images. This system provides the reader with objective measures of vascular morphology, such as tortuosity and branching angles, as well as highlighting areas with abnormalities such as artery-venous nicking, copper and silver wiring, and retinal emboli, in order for the reader to make a final screening decision. To test the efficacy of our system, we evaluated the change in performance of a newly certified retinal reader when grading a set of 40 color fundus images with and without the assistance of the software. The results demonstrated an improvement in the reader's performance with the software assistance, in terms of accuracy of detection of vessel abnormalities, determination of retinopathy, and reading time. This system enables the reader to make a computer-assisted vasculature assessment with high accuracy and consistency, at a reduced reading time. PMID:25571442
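
    Of the morphology measures named above, tortuosity has a common generic definition (arc length over chord length of the vessel centerline) that is easy to sketch; it is offered only as one plausible formulation, not necessarily the measure implemented in this system.

    ```python
    # Generic arc-to-chord tortuosity for an ordered vessel centerline.
    import numpy as np

    def tortuosity(points):
        points = np.asarray(points, dtype=float)
        arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
        chord = np.linalg.norm(points[-1] - points[0])
        return arc / chord   # 1.0 for a straight vessel, >1 when tortuous

    t = np.linspace(0, np.pi, 50)
    wiggly = np.column_stack([t, 0.2 * np.sin(5 * t)])  # synthetic centerline
    print(tortuosity(wiggly))
    ```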

  19. Automatic coronary lumen segmentation with partial volume modeling improves lesions' hemodynamic significance assessment

    NASA Astrophysics Data System (ADS)

    Freiman, M.; Lamash, Y.; Gilboa, G.; Nickisch, H.; Prevrhal, S.; Schmitt, H.; Vembar, M.; Goshen, L.

    2016-03-01

    The determination of hemodynamic significance of coronary artery lesions from cardiac computed tomography angiography (CCTA) based on blood flow simulations has the potential to improve CCTA's specificity, thus resulting in improved clinical decision making. Accurate coronary lumen segmentation required for flow simulation is challenging due to several factors. Specifically, the partial-volume effect (PVE) in small-diameter lumina may result in overestimation of the lumen diameter that can lead to an erroneous hemodynamic significance assessment. In this work, we present a coronary artery segmentation algorithm tailored specifically for flow simulations by accounting for the PVE. Our algorithm detects lumen regions that may be subject to the PVE by analyzing the intensity values along the coronary centerline and integrates this information into a machine-learning based graph min-cut segmentation framework to obtain accurate coronary lumen segmentations. We demonstrate the improvement in hemodynamic significance assessment achieved by accounting for the PVE in the automatic segmentation of 91 coronary artery lesions from 85 patients. We compare hemodynamic significance assessments by means of fractional flow reserve (FFR) resulting from simulations on 3D models generated by our segmentation algorithm with and without accounting for the PVE. By accounting for the PVE we improved the area under the ROC curve for detecting hemodynamically significant CAD by 29% (N=91, 0.85 vs. 0.66, p<0.05, DeLong's test) with invasive FFR threshold of 0.8 as the reference standard. Our algorithm has the potential to facilitate non-invasive hemodynamic significance assessment of coronary lesions.
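
    The evaluation step can be sketched as follows: lesions are labeled hemodynamically significant when invasive FFR is at or below 0.8, and ROC AUCs of the simulated FFR values from the two segmentations are compared. The data below are synthetic, and the DeLong significance test from the paper is not reimplemented.

    ```python
    # Sketch of AUC comparison against an invasive-FFR reference standard.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)
    invasive_ffr = rng.uniform(0.5, 1.0, 91)
    significant = (invasive_ffr <= 0.8).astype(int)   # reference standard

    # Simulated FFR: with PVE modeling (less noise) vs. without (more noise).
    ffr_with_pve = np.clip(invasive_ffr + rng.normal(0, 0.05, 91), 0, 1)
    ffr_without_pve = np.clip(invasive_ffr + rng.normal(0, 0.15, 91), 0, 1)

    # Lower simulated FFR means more significant, so score = 1 - FFR.
    print("AUC with PVE:   ", roc_auc_score(significant, 1 - ffr_with_pve))
    print("AUC without PVE:", roc_auc_score(significant, 1 - ffr_without_pve))
    ```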

  20. 42 CFR 493.1289 - Standard: Analytic systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Analytic systems quality assessment. 493... HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing Analytic Systems § 493.1289 Standard: Analytic systems quality assessment. (a)...

  1. 42 CFR 493.1249 - Standard: Preanalytic systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Preanalytic systems quality assessment... AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing Preanalytic Systems § 493.1249 Standard: Preanalytic systems quality assessment. (a)...

  2. 42 CFR 493.1299 - Standard: Postanalytic systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Postanalytic systems quality assessment... AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing Postanalytic Systems § 493.1299 Standard: Postanalytic systems quality assessment. (a)...

  3. Video quality assessment for web content mirroring

    NASA Astrophysics Data System (ADS)

    He, Ye; Fei, Kevin; Fernandez, Gustavo A.; Delp, Edward J.

    2014-03-01

    Due to increasing user expectations of the watching experience, moving high-quality web video streaming content from the small screens of mobile devices to the larger TV screen has become popular. It is crucial to develop video quality metrics to measure the quality change under various devices or network conditions. In this paper, we propose an automated scoring system to quantify user satisfaction. We compare the quality of local videos with that of the videos transmitted to a TV. Four video quality metrics, namely Image Quality, Rendering Quality, Freeze Time Ratio and Rate of Freeze Events, are used to measure video quality change during web content mirroring. To measure image quality and rendering quality, we compare the matched frames between the source video and the destination video using barcode tools. Freeze time ratio and rate of freeze events are measured after extracting video timestamps. Several user studies are conducted to evaluate the impact of each objective video quality metric on the subjective user watching experience.
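
    The two temporal metrics are straightforward to compute from presentation timestamps; in the sketch below a frame gap well beyond the nominal frame interval counts as a freeze, with the 2x-interval threshold being an assumption rather than the paper's definition.

    ```python
    # Freeze Time Ratio and Rate of Freeze Events from frame timestamps.
    import numpy as np

    def freeze_metrics(timestamps, nominal_fps=30.0, gap_factor=2.0):
        ts = np.asarray(timestamps, dtype=float)
        gaps = np.diff(ts)
        threshold = gap_factor / nominal_fps    # gaps beyond this are freezes
        frozen = gaps[gaps > threshold]
        duration = ts[-1] - ts[0]
        return {"freeze_time_ratio": float(frozen.sum() / duration),
                "freeze_events_per_sec": float(len(frozen) / duration)}

    # 10 s of 30 fps playback with a 0.5 s freeze inserted at t = 5 s.
    ts = np.arange(0, 10, 1 / 30.0)
    ts[ts > 5.0] += 0.5
    print(freeze_metrics(ts))
    ```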

  4. Beef quality assessed at European research centres.

    PubMed

    Dransfield, E; Nute, G R; Roberts, T A; Boccard, R; Touraille, C; Buchter, L; Casteels, M; Cosentino, E; Hood, D E; Joseph, R L; Schon, I; Paardekooper, E J

    1984-01-01

    Loin steaks and cubes of M. semimembranosus from eight (12 month old) Galloway steers and eight (16-18 month old) Charolais cross steers raised in England, and from which the meat was conditioned for 2 or 10 days, were assessed in research centres in Belgium, Denmark, England, France, the Federal Republic of Germany, Ireland, Italy and the Netherlands. Laboratory panels assessed meat by grilling the steaks and cooking the cubes in casseroles according to local custom, using scales developed locally and scales used frequently at other research centres. The meat was mostly of good quality, but with sufficient variation to obtain meaningful comparisons. Tenderness and juiciness were assessed most consistently, and flavour least consistently. Over the 32 meats, acceptability of steaks and casseroles was in general compounded from tenderness, juiciness and flavour. However, when the meat was tough, toughness dominated the overall judgement; but when tender, flavour played an important role. Irish and English panels tended to weight more on flavour and Italian panels on tenderness and juiciness. Juiciness and tenderness were well correlated among all panels except in Italy and Germany. With flavour, however, Belgian, Irish, German and Dutch panels ranked the meats similarly and formed a group distinct from the others, which did not. The panels showed a similar grouping for judgements of acceptability. French and Belgian panels judged the steaks from the older Charolais cross steers to have more flavour and be more juicy than average, and tended to prefer them. Casseroles from younger steers were invariably preferred, although the French and Belgian panels judged aged meat from older animals equally acceptable. These regional biases were thought to derive mainly from differences in cooking, but variations in the experience and perception of assessors also contributed. PMID:22055992

  5. Assessing Negative Automatic Thoughts: Psychometric Properties of the Turkish Version of the Cognition Checklist

    PubMed Central

    Batmaz, Sedat; Ahmet Yuncu, Ozgur; Kocbiyik, Sibel

    2015-01-01

    Background: Beck’s theory of emotional disorder suggests that negative automatic thoughts (NATs) and the underlying schemata affect one’s way of interpreting situations and result in maladaptive coping strategies. Depending on their content and meaning, NATs are associated with specific emotions, and since they are usually quite brief, patients are often more aware of the emotion they feel. This relationship between cognition and emotion, therefore, is thought to form the background of the cognitive content specificity hypothesis. Researchers focusing on this hypothesis have suggested that instruments like the cognition checklist (CCL) might be an alternative for making a diagnostic distinction between depression and anxiety. Objectives: The aim of the present study was to assess the psychometric properties of the Turkish version of the CCL in a psychiatric outpatient sample. Patients and Methods: A total of 425 psychiatric outpatients 18 years of age and older were recruited. After a structured diagnostic interview, the participants completed the hospital anxiety depression scale (HADS), the automatic thoughts questionnaire (ATQ), and the CCL. An exploratory factor analysis was performed, followed by an oblique rotation. The internal consistency, test-retest reliability, and concurrent and discriminant validity analyses were undertaken. Results: The internal consistency of the CCL was excellent (Cronbach’s α = 0.95). The test-retest correlation coefficients were satisfactory (r = 0.80, P < 0.001 for CCL-D, and r = 0.79, P < 0.001 for CCL-A). The exploratory factor analysis revealed that a two-factor solution best fit the data. This bidimensional factor structure explained 51.27% of the variance of the scale. The first factor consisted of items related to anxious cognitions, and the second factor of depressive cognitions. The CCL subscales correlated significantly with the ATQ (r = 0.44 for the CCL-D and r = 0.32 for the CCL-A) as well as the other measures of

  6. QUALITY: A program to assess basis set quality

    NASA Astrophysics Data System (ADS)

    Sordo, J. A.

    1998-09-01

    A program to analyze in detail the quality of basis sets is presented. The information provided by the application of a wide variety of (atomic and/or molecular) quality criteria is processed by using a methodology that allows one to determine the most appropriate quality test to select a basis set to compute a given (atomic or molecular) property. Fuzzy set theory is used to choose the most adequate basis set to compute simultaneously a set of properties.

  7. The Impact of Quality Assessment in Universities: Portuguese Students' Perceptions

    ERIC Educational Resources Information Center

    Cardoso, Sonia; Santiago, Rui; Sarrico, Claudia S.

    2012-01-01

    Despite being one of the major reasons for the development of quality assessment, students seem relatively unaware of its potential impact. Since one of the main purposes of assessment is to provide students with information on the quality of universities, this lack of awareness brings in to question the effectiveness of assessment as a device for…

  8. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis

    PubMed Central

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text “The North Wind and the Sun” were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis. PMID:26136813
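
    The regression step maps acoustic features to listener ratings; the sketch below mimics it with Support Vector Regression over six prosodic features plus CFx and reports a cross-validated Pearson correlation. Feature values and ratings are synthetic placeholders.

    ```python
    # SVR human-machine model sketch for perceptual roughness ratings.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.model_selection import cross_val_predict
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(6)
    X = rng.random((58, 7))           # 6 prosodic features + CFx per sample
    roughness = X @ rng.random(7) + rng.normal(0, 0.3, 58)  # listener ratings

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
    predicted = cross_val_predict(model, X, roughness, cv=5)
    r, _ = pearsonr(roughness, predicted)
    print(f"human-machine correlation r = {r:.2f}")
    ```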

  9. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis.

    PubMed

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text "The North Wind and the Sun" were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis. PMID:26136813

  10. Automatic Roof Plane Detection and Analysis in Airborne Lidar Point Clouds for Solar Potential Assessment

    PubMed Central

    Jochem, Andreas; Höfle, Bernhard; Rutzinger, Martin; Pfeifer, Norbert

    2009-01-01

    A relative height threshold is defined to separate potential roof points from the point cloud, followed by a segmentation of these points into homogeneous areas fulfilling the defined constraints of roof planes. The normal vector of each laser point is an excellent feature to decompose the point cloud into segments describing planar patches. An object-based error assessment is performed to determine the accuracy of the presented classification. It results in 94.4% completeness and 88.4% correctness. Once all roof planes are detected in the 3D point cloud, solar potential analysis is performed for each point. Shadowing effects of nearby objects are taken into account by calculating the horizon of each point within the point cloud. Effects of cloud cover are also considered by using data from a nearby meteorological station. As a result the annual sum of the direct and diffuse radiation for each roof plane is derived. The presented method uses the full 3D information for both feature extraction and solar potential analysis, which offers a number of new applications in fields where natural processes are influenced by the incoming solar radiation (e.g., evapotranspiration, distribution of permafrost). The presented method detected fully automatically a subset of 809 out of 1,071 roof planes where the arithmetic mean of the annual incoming solar radiation is more than 700 kWh/m2. PMID:22346695
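
    The per-point normal vectors that drive the plane segmentation are commonly estimated by fitting a plane to each point's nearest neighbours; the sketch below does this with a local SVD, with the neighbourhood size being an illustrative choice rather than the paper's parameter.

    ```python
    # Generic per-point normal estimation for a 3D point cloud.
    import numpy as np
    from scipy.spatial import cKDTree

    def point_normals(points, k=10):
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        normals = np.empty_like(points, dtype=float)
        for i, nbrs in enumerate(idx):
            nbr_pts = points[nbrs] - points[nbrs].mean(axis=0)
            # Direction of least variance = plane normal of the neighbourhood.
            _, _, vt = np.linalg.svd(nbr_pts, full_matrices=False)
            normals[i] = vt[-1]
        return normals

    pts = np.random.default_rng(7).random((500, 3))
    pts[:, 2] *= 0.01                  # a nearly planar synthetic "roof"
    print(point_normals(pts)[:3])      # normals should point near (0, 0, ±1)
    ```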

  11. Improving Automatic English Writing Assessment Using Regression Trees and Error-Weighting

    NASA Astrophysics Data System (ADS)

    Lee, Kong-Joo; Kim, Jee-Eun

    The proposed automated scoring system for English writing tests provides an assessment result, including a score and diagnostic feedback, to test-takers without human effort. The system analyzes an input sentence and detects errors related to spelling, syntax and content similarity. The scoring model adopts a statistical approach, a regression tree. A scoring model in general calculates a score based on the count and types of automatically detected errors. Accordingly, a system with higher accuracy in detecting errors achieves higher accuracy in scoring a test. The accuracy of the system, however, cannot be fully guaranteed for several reasons, such as parsing failures, incompleteness of knowledge bases, and the ambiguous nature of natural language. In this paper, we introduce an error-weighting technique, which is similar to the term-weighting widely used in information retrieval. The error-weighting technique is applied to judge the reliability of the errors detected by the system. The score calculated with this technique is shown to be more accurate than the score without it.
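
    The stated analogy with term-weighting suggests an inverse-document-frequency-style weight for each error type; the formula below is a speculative reading of that analogy, not the paper's exact scheme.

    ```python
    # Speculative idf-style weight: common error types are weighted down.
    import math

    def error_weight(error_type, detections_per_essay, num_essays):
        """Inverse-document-frequency-style weight for one error type."""
        essays_with_error = sum(1 for d in detections_per_essay
                                if error_type in d)
        return math.log(num_essays / (1 + essays_with_error))

    detections = [{"spelling"}, {"spelling", "syntax"}, {"syntax"},
                  {"content"}, set()]
    for etype in ("spelling", "syntax", "content"):
        print(etype, round(error_weight(etype, detections, len(detections)), 3))
    ```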

  12. Food quality assessment by NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Whitworth, Martin B.; Millar, Samuel J.; Chau, Astor

    2010-04-01

    Near infrared reflectance (NIR) spectroscopy is well established in the food industry for rapid compositional analysis of bulk samples. NIR hyperspectral imaging provides new opportunities to measure the spatial distribution of components such as moisture and fat, and to identify and measure specific regions of composite samples. An NIR hyperspectral imaging system has been constructed for food research applications, incorporating a SWIR camera with a cooled 14 bit HgCdTe detector and N25E spectrograph (Specim Ltd, Finland). Samples are scanned in a pushbroom mode using a motorised stage. The system has a spectral resolution of 256 pixels covering a range of 970-2500 nm and a spatial resolution of 320 pixels covering a swathe adjustable from 8 to 300 mm. Images are acquired at a rate of up to 100 lines s⁻¹, enabling samples to be scanned within a few seconds. Data are captured using SpectralCube software (Specim) and analysed using ENVI and IDL (ITT Visual Information Solutions). Several food applications are presented. The strength of individual absorbance bands enables the distribution of particular components to be assessed. Examples are shown for detection of added gluten in wheat flour and to study the effect of processing conditions on fat distribution in chips/French fries. More detailed quantitative calibrations have been developed to study evolution of the moisture distribution in baguettes during storage at different humidities, to assess freshness of fish using measurements of whole cod and fillets, and for prediction of beef quality by identification and separate measurement of lean and fat regions.

  13. Federal Workforce Quality: Measurement and Improvement. Report of the Advisory Committee on Federal Workforce Quality Assessment.

    ERIC Educational Resources Information Center

    Office of Personnel Management, Washington, DC.

    The Advisory Committee on Federal Workforce Quality Assessment was chartered to examine various work force quality assessment efforts in the federal government and provide advice on their adequacy and suggestions on their improvement or expansion. Objective data in recent research suggested that a universal decline in work force quality might not…

  14. Quality assessment for spectral domain optical coherence tomography (OCT) images

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Paranjape, Amit S.; Elmaanaoui, Badr; Dewelle, Jordan; Rylander, H. Grady, III; Markey, Mia K.; Milner, Thomas E.

    2009-02-01

    Retinal nerve fiber layer (RNFL) thickness, a measure of glaucoma progression, can be measured in images acquired by spectral domain optical coherence tomography (OCT). The accuracy of RNFL thickness estimation, however, is affected by the quality of the OCT images. In this paper, a new parameter, signal deviation (SD), which is based on the standard deviation of the intensities in OCT images, is introduced for objective assessment of OCT image quality. Two other objective assessment parameters, signal to noise ratio (SNR) and signal strength (SS), are also calculated for each OCT image. The results of the objective assessment are compared with subjective assessment. In the subjective assessment, one OCT expert graded the image quality according to a three-level scale (good, fair, and poor). The OCT B-scan images of the retina from six subjects are evaluated by both objective and subjective assessment. From the comparison, we demonstrate that the objective assessment successfully differentiates between the acceptable quality images (good and fair images) and poor quality OCT images as graded by OCT experts. We evaluate the performance of the objective assessment under different quality assessment parameters and demonstrate that SD is the best at distinguishing between fair and good quality images. The accuracy of RNFL thickness estimation is improved significantly after poor quality OCT images are rejected by automated objective assessment using the SD, SNR, and SS.
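
    The intensity-based parameters can be sketched directly: signal deviation as the standard deviation of B-scan intensities, plus a simple SNR estimate from an assumed signal-free background region. The background definition and the synthetic B-scan below are illustrative assumptions.

    ```python
    # Sketch of SD and SNR parameters for an OCT B-scan.
    import numpy as np

    def oct_quality_params(bscan, background_rows=20):
        img = bscan.astype(float)
        background = img[:background_rows]      # assumed signal-free rows
        sd = img.std()                          # signal deviation (SD)
        snr = 20 * np.log10(img.max() / background.std())
        return {"SD": float(sd), "SNR_dB": float(snr)}

    rng = np.random.default_rng(8)
    bscan = rng.normal(10, 2, (512, 1000))            # noise floor
    bscan[200:260] += rng.normal(80, 15, (60, 1000))  # synthetic retina layers
    print(oct_quality_params(bscan))
    ```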

  15. Assessing soil quality in organic agriculture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil quality is directly linked to food production, food security, and environmental quality (i.e. water quality, global warming, and energy use in food production). Unfortunately, moderate to severe degeneration of soils (i.e., loss of soil biodiversity, poor soil tilth, and unbalanced elemental c...

  16. Informatics: essential infrastructure for quality assessment and improvement in nursing.

    PubMed Central

    Henry, S B

    1995-01-01

    In recent decades there have been major advances in the creation and implementation of information technologies and in the development of measures of health care quality. The premise of this article is that informatics provides essential infrastructure for quality assessment and improvement in nursing. In this context, the term quality assessment and improvement comprises both short-term processes such as continuous quality improvement (CQI) and long-term outcomes management. This premise is supported by 1) presentation of a historical perspective on quality assessment and improvement; 2) delineation of the types of data required for quality assessment and improvement; and 3) description of the current and potential uses of information technology in the acquisition, storage, transformation, and presentation of quality data, information, and knowledge. PMID:7614118

  17. Set Up of an Automatic Water Quality Sampling System in Irrigation Agriculture

    NASA Astrophysics Data System (ADS)

    Heinz, Emanuel; Kraft, Philipp; Buchen, Caroline; Frede, Hans-Georg; Aquino, Eugenio; Breuer, Lutz

    2014-05-01

    Climate change already has a large impact on the availability of water resources. Many regions in South-East Asia are expected to receive less water in the future, dramatically impacting the production of the most important staple food: rice (Oryza sativa L.). Rice is the primary food source for nearly half of the world's population, and is the only cereal that can grow under wetland conditions. Anaerobic (flooded) rice fields in particular require large amounts of water, but also have higher yields than aerobically produced rice. In the past, different methods have been developed to reduce water use in rice paddies, such as alternate wetting and drying or the use of mixed cropping systems with aerobic (non-flooded) rice and alternative crops such as maize. A more detailed understanding of water and nutrient cycling in rice-based cropping systems is needed to reduce water use, and requires the investigation of hydrological and biochemical processes as well as transport dynamics at the field scale. New developments in analytical devices permit monitoring parameters at high temporal resolution and at acceptable cost, without much maintenance, over longer periods. Here we present a new type of automatic sampling set-up that facilitates in situ analysis of hydrometric information, stable water isotopes and nitrate concentrations in spatially differentiated agricultural fields. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring Down Spectrometry System (WS-CRDS) for stable water isotope analysis (δ2H and δ18O), a reagentless hyperspectral UV photometer for monitoring nitrate content and various water level sensors for hydrometric information. The whole system is maintained with specially developed software for remote control of the system via the internet. We

  18. Stereoscopic image quality assessment using disparity-compensated view filtering

    NASA Astrophysics Data System (ADS)

    Song, Yang; Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju

    2016-03-01

    Stereoscopic image quality assessment (IQA) plays a vital role in stereoscopic image/video processing systems. We propose a new quality assessment method for stereoscopic images that uses disparity-compensated view filtering (DCVF). First, because a stereoscopic image is composed of different frequency components, DCVF is designed to decompose it into high-pass and low-pass components. Then, the qualities of the different frequency components are acquired according to their phase congruency and coefficient distribution characteristics. Finally, support vector regression is utilized to establish a mapping model between the component qualities and subjective qualities, and stereoscopic image quality is calculated using this mapping model. Experiments on the LIVE and NBU 3-D IQA databases demonstrate that the proposed method can evaluate stereoscopic image quality accurately. Compared with several state-of-the-art quality assessment methods, the proposed method is more consistent with human perception.

  19. In Search of Quality Criteria in Peer Assessment Practices

    ERIC Educational Resources Information Center

    Ploegh, Karin; Tillema, Harm H.; Segers, Mien S. R.

    2009-01-01

    With the increasing popularity of peer assessment as an assessment tool, questions may arise about its measurement quality. Among such questions, the extent peer assessment practices adhere to standards of measurement. It has been claimed that new forms of assessment, require new criteria to judge their validity and reliability, since they aim for…

  20. Recent advances in soil quality assessment in the United States

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil quality is a concept that is useful as an educational and assessment tool. A number of assessment tools have been developed including: the Soil Conditioning Index (SCI), the Soil Management Assessment Framework (SMAF), the AgroEcosystem Performance Assessment Tool (AEPAT), and the new Cornell “...

  1. Different Academics' Characteristics, Different Perceptions on Quality Assessment?

    ERIC Educational Resources Information Center

    Cardoso, Sonia; Rosa, Maria Joao; Santos, Cristina S.

    2013-01-01

    Purpose: The purpose of this paper is to explore Portuguese academics' perceptions on higher education quality assessment objectives and purposes, in general, and on the recently implemented system for higher education quality assessment and accreditation, in particular. It aims to discuss the differences of those perceptions dependent on some…

  2. Academics' Perceptions on the Purposes of Quality Assessment

    ERIC Educational Resources Information Center

    Rosa, Maria J.; Sarrico, Claudia S.; Amaral, Alberto

    2012-01-01

    The accountability versus improvement debate is an old one. Although being traditionally considered dichotomous purposes of higher education quality assessment, some authors defend the need of balancing both in quality assessment systems. This article goes a step further and contends that not only they should be balanced but also that other…

  3. Research Quality Assessment in Education: Impossible Science, Possible Art?

    ERIC Educational Resources Information Center

    Bridges, David

    2009-01-01

    For better or for worse, the assessment of research quality is one of the primary drivers of the behaviour of the academic community with all sorts of potential for distorting that behaviour. So, if you are going to assess research quality, how do you do it? This article explores some of the problems and possibilities, with particular reference to…

  4. Educational Quality Assessment: Manual for Interpreting School Reports.

    ERIC Educational Resources Information Center

    Pennsylvania State Dept. of Education, Harrisburg. Bureau of Educational Quality Assessment.

    The results of the Pennsylvania Educational Quality Assessment program, Phase II, are interpreted. The first section of the manual presents a statement of each of the Ten Goals of Quality Education which served as the basis of the assessment. Also included are the key items on the questionnaires administered to 5th and 11th grade students. The…

  5. Quality Assessment of Internationalised Studies: Theory and Practice

    ERIC Educational Resources Information Center

    Juknyte-Petreikiene, Inga

    2013-01-01

    The article reviews forms of higher education internationalisation at an institutional level. The relevance of theoretical background of internationalised study quality assessment is highlighted and definitions of internationalised studies quality are presented. Existing methods of assessment of higher education internationalisation are criticised…

  6. Development and Validation of Assessing Quality Teaching Rubrics

    ERIC Educational Resources Information Center

    Chen, Weiyun; Mason, Steve; Hammond-Bennett, Austin; Zlamout, Sandy

    2014-01-01

    Purpose: This study aimed at examining the psychometric properties of the Assessing Quality Teaching Rubric (AQTR) that was designed to assess in-service teachers' quality levels of teaching practices in daily lessons. Methods: 45 physical education lessons taught by nine physical education teachers to students in grades K-5 were videotaped. They…

  7. Higher Education Quality Assessment in China: An Impact Study

    ERIC Educational Resources Information Center

    Liu, Shuiyun

    2015-01-01

    This research analyses an external higher education quality assessment scheme in China, namely, the Quality Assessment of Undergraduate Education (QAUE) scheme. Case studies were conducted in three Chinese universities with different statuses. Analysis shows that the evaluated institutions responded to the external requirements of the QAUE…

  8. Service Quality and Customer Satisfaction: An Assessment and Future Directions.

    ERIC Educational Resources Information Center

    Hernon, Peter; Nitecki, Danuta A.; Altman, Ellen

    1999-01-01

    Reviews the literature of library and information science to examine issues related to service quality and customer satisfaction in academic libraries. Discusses assessment, the application of a business model to higher education, a multiple constituency approach, decision areas regarding service quality, resistance to service quality, and future…

  9. Quality Assurance of Assessment and Moderation Discourses Involving Sessional Staff

    ERIC Educational Resources Information Center

    Grainger, Peter; Adie, Lenore; Weir, Katie

    2016-01-01

    Quality assurance is a major agenda in tertiary education. The casualisation of academic work, especially in teaching, is also a quality assurance issue. Casual or sessional staff members teach and assess more than 50% of all university courses in Australia, and yet the research in relation to the role sessional staff play in quality assurance of…

  10. On Improving Higher Vocational College Education Quality Assessment

    NASA Astrophysics Data System (ADS)

    Wu, Xiang; Chen, Yan; Zhang, Jie; Wang, Yi

    Teaching quality assessment is a judgment process that applies the theory and methods of educational evaluation to test whether the process and results of teaching have reached a certain quality level. Many vocational schools have established teaching quality assessment systems with their own characteristics as the basic means of self-examination and adjustment of teaching practice. Drawing on the characteristics and requirements of vocational education, and by analyzing the problems that exist in contemporary vocational schools, this paper proposes optimizing the assessment system from the perspectives of its content, assessment criteria, and feedback system; completing the teaching quality information network; and offering suggestions for feedback channels, so as to institutionalize and standardize assessment in vocational schools and contribute to the overall improvement of their quality.

  11. A Multi-Sensor Micro UAV Based Automatic Rapid Mapping System for Damage Assessment in Disaster Areas

    NASA Astrophysics Data System (ADS)

    Jeon, E.; Choi, K.; Lee, I.; Kim, H.

    2013-08-01

    Damage assessment is an important step toward the restoration of areas severely affected by natural disasters or accidents. For more accurate and rapid assessment, one should utilize geospatial data such as ortho-images acquired over the damaged areas. Change detection based on geospatial data acquired before and after the damage enables fast, automatic assessment with reasonable accuracy. Accordingly, there is significant demand for a rapid mapping system that can provide ortho-images of the damaged areas to specialists and decision makers in disaster management agencies. In this study, we are developing a UAV-based rapid mapping system that acquires multi-sensory data in the air and generates ortho-images from the data on the ground rapidly and automatically. The proposed system consists of two main segments, aerial and ground. The aerial segment acquires sensory data through autonomous flight over the specified target area; it consists of a micro UAV platform, a mirror-less camera, a GPS receiver, a MEMS IMU, and a sensor integration and synchronization module. The ground segment receives and processes the multi-sensory data to produce ortho-images rapidly and automatically; it consists of a computer with software for flight planning, data reception, georeferencing, and ortho-image generation. As the project is ongoing, we introduce an overview of the project, describe the main components of each segment, and provide intermediate results from preliminary test flights.

  12. Germination tests for assessing biochar quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Understanding the impact of biochar quality on soil productivity is crucial to the agronomic acceptance of biochar amendments. Our objective in this study was to develop quick and reliable screening procedures to characterize the quality of biochar amendments. Biochars were evaluated by both seed ...

  13. SOIL QUALITY ASSESSMENT USING FUZZY MODELING

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Maintaining soil productivity is essential if agriculture production systems are to be sustainable, thus soil quality is an essential issue. However, there is a paucity of tools for measurement for the purpose of understanding changes in soil quality. Here the possibility of using fuzzy modeling t...

  14. Comparison of water-quality samples collected by siphon samplers and automatic samplers in Wisconsin

    USGS Publications Warehouse

    Graczyk, David J.; Robertson, Dale M.; Rose, William J.; Steur, Jeffrey J.

    2000-01-01

    In small streams, flow and water-quality concentrations often change quickly in response to meteorological events. Hydrologists, field technicians, or locally hired stream observers involved in water-data collection are often unable to reach streams quickly enough to observe or measure these rapid changes. Therefore, in hydrologic studies designed to describe changes in water quality, a combination of manual and automated sampling methods has commonly been used: manual methods when flow is relatively stable and automated methods when flow is rapidly changing. Automated sampling, which makes use of equipment programmed to collect samples in response to changes in stage and flow of a stream, has been shown to be an effective method of sampling to describe rapid changes in water quality (Graczyk and others, 1993). Because of the high cost of automated sampling, however, especially for studies examining a large number of sites, alternative methods have been considered for collecting samples during rapidly changing stream conditions. One such method employs the siphon sampler (fig. 1), also referred to as the "single-stage sampler." Siphon samplers are inexpensive to build (about $25-$50 per sampler), operate, and maintain, so they are cost-effective to use at a large number of sites. Their ability to collect samples representing the average quality of water passing through the entire cross section of a stream, however, has not been fully demonstrated for many types of stream sites.

  15. Comparison of High and Low Density Airborne LIDAR Data for Forest Road Quality Assessment

    NASA Astrophysics Data System (ADS)

    Kiss, K.; Malinen, J.; Tokola, T.

    2016-06-01

    Good quality forest roads are important for forest management. Airborne laser scanning data can help automate road quality detection, thus avoiding field visits. Two datasets of different pulse densities were used to assess road quality: high-density airborne laser scanning data from Kiihtelysvaara and low-density data from Tuusniemi, Finland. The field inventory focused mainly on surface wear condition, structural condition, flatness, roadside vegetation, and drying of the road. Observations were divided into poor, satisfactory, and good categories based on the current Finnish quality standards for forest roads. Digital Elevation Models were derived from the laser point cloud, and indices were calculated to determine road quality. The calculated indices capture topographic differences on the road surface and road sides. The topographic position index works well in flat terrain only, while the standardized elevation index described the road surface better where the differences are larger. Both indices require at least a 1 metre resolution. High-density data is necessary for analysis of the road surface, and the indices relate mostly to surface wear and flatness. Classification was more precise on the high-density data (31-92%) than on the low-density data (25-40%). However, ditch detection and classification can be carried out using the sparse dataset as well (with a success rate of 69%). Airborne laser scanning data can thus provide quality information on forest roads.
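
    The topographic position index mentioned in the abstract has a simple definition: each cell's elevation minus the mean elevation of its neighbourhood. Below is a minimal sketch of that computation on a gridded DEM; the window size and flatness threshold are illustrative assumptions, not values from the paper.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def topographic_position_index(dem, window=5):
          """TPI = cell elevation minus mean elevation of its neighbourhood
          (the window mean here includes the cell itself, a common simplification)."""
          neighbourhood_mean = uniform_filter(dem, size=window, mode="nearest")
          return dem - neighbourhood_mean

      dem = np.random.rand(100, 100)  # stand-in for a 1 m LIDAR-derived DEM
      tpi = topographic_position_index(dem)
      # Cells with TPI near zero are locally flat; strong positive or negative
      # values indicate bumps and depressions such as ruts or ditches.
      print("share of locally flat cells:", float(np.mean(np.abs(tpi) < 0.05)))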

  16. Assessing the Quality of MT Systems for Hindi to English Translation

    NASA Astrophysics Data System (ADS)

    Kalyani, Aditi; Kumud, Hemant; Pal Singh, Shashi; Kumar, Ajai

    2014-03-01

    Evaluation plays a vital role in checking the quality of MT output. It is done either manually or automatically. Manual evaluation is time-consuming and subjective, so automatic metrics are used most of the time. This paper evaluates the translation quality of different MT engines for Hindi-English translation (Hindi data is provided as input and English is obtained as output) using various automatic metrics such as BLEU and METEOR. A comparison of the automatic evaluation results with human rankings is also given.
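
    As a concrete illustration of the automatic metrics mentioned above, the following is a minimal sketch of sentence-level BLEU scoring with NLTK; the sentences are invented examples, and a real evaluation would aggregate over a full test set (e.g. with corpus_bleu).

      from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

      reference = [["the", "cat", "sat", "on", "the", "mat"]]  # human translation(s)
      hypothesis = ["the", "cat", "is", "on", "the", "mat"]    # MT engine output

      smoother = SmoothingFunction().method1  # avoids zero scores on short sentences
      score = sentence_bleu(reference, hypothesis, smoothing_function=smoother)
      print(f"BLEU: {score:.3f}")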

  17. Research iris serial images quality assessment method based on HVS

    NASA Astrophysics Data System (ADS)

    Li, Zhi-hui; Zhang, Chang-hai; Ming, Xing; Zhao, Yong-hua

    2006-01-01

    Iris recognition is widely applicable in security and customs, and it provides better security than recognition of other human features such as fingerprints or faces. Iris image quality is crucial to recognition performance, so reliable image quality assessment is necessary for evaluating iris images. However, there is no uniform criterion for image quality assessment. Image quality can be assessed with objective or subjective evaluation methods; in practice, subjective evaluation is laborious and not effective for iris recognition, so objective evaluation should be used. Drawing on the multi-scale and selectivity characteristics of the human visual system (HVS) model, this paper presents a new iris image quality assessment method: the region of interest (ROI) is located, wavelet transform zero-crossings are used to find multi-scale edges, and a multi-scale fusion measure is used to assess iris image quality. In experiments, both objective and subjective evaluation methods were used to assess iris images. The results show that the method is effective for iris image quality assessment.
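
    A rough sketch of the multi-scale idea, assuming PyWavelets: decompose the iris ROI over several wavelet scales, measure edge (detail) energy per scale, and fuse the energies into one score. This substitutes a simple detail-energy measure for the paper's zero-crossing edge detector, and the equal fusion weights are placeholders.

      import numpy as np
      import pywt

      def multiscale_edge_quality(image, wavelet="db2", levels=3):
          coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
          energies = []
          for ch, cv, cd in coeffs[1:]:  # detail sub-bands, coarsest to finest
              energies.append(np.sqrt(ch**2 + cv**2 + cd**2).mean())
          weights = np.full(len(energies), 1.0 / len(energies))  # placeholder fusion rule
          return float(np.dot(weights, energies))

      roi = np.random.rand(128, 128)  # stand-in for a localized iris region
      print("quality score:", multiscale_edge_quality(roi))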

  18. Visual air quality assessment: Denver case study

    NASA Astrophysics Data System (ADS)

    Mumpower, Jeryl; Middleton, Paulette; Dennis, Robin L.; Stewart, Thomas R.; Veirs, Val

    Studies of visual air quality in the Denver metropolitan region during summer 1979 and winter 1979-1980 are described and results reported. The major objective of the studies was to investigate relationships among four types of variables important to urban visual air quality: (1) individuals' judgments of overall visual air quality; (2) perceptual cues used in making judgments of visual air quality; (3) measurable physical characteristics of the visual environment; and (4) concentrations of visibility-reducing pollutants and their precursors. During August 1979 and mid-December 1979 to January 1980, simultaneous measurements of observational and environmental data were made daily at various locations throughout the metropolitan area. Observational data included ratings of overall air quality and related perceptual cues (e.g., distance, clarity, color, border) by multiple observers. Environmental data included routine hourly pollutant and meteorological measurements from several fixed locations within the city, as well as aerosol light scattering and absorption measures from one location. Statistical analyses indicated that (1) multiple perceptual cues are required to explain variation in judgments of overall visual air quality and (2) routine measurements of the physical environment appear to be inadequate predictors of either judgments of overall visual air quality or related perceptual cues.

  19. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    PubMed

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

    The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. PMID:27313114
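
    The paper's exact auto-QC algorithm is not detailed in the abstract, so the following is only a hedged sketch of one widely used interplate correction, median polish (the basis of B-scores), standing in for the generic "systematic error correction" step.

      import numpy as np

      def median_polish(plate, n_iter=10, tol=1e-6):
          """Iteratively remove row and column effects from a plate of raw signals."""
          residual = plate.astype(float).copy()
          for _ in range(n_iter):
              row_med = np.median(residual, axis=1, keepdims=True)
              residual -= row_med
              col_med = np.median(residual, axis=0, keepdims=True)
              residual -= col_med
              if np.abs(row_med).max() < tol and np.abs(col_med).max() < tol:
                  break
          return residual

      plate = np.random.normal(100, 10, size=(16, 24))  # one 384-well plate
      plate[:, 0] += 30                                 # simulated column artifact
      corrected = median_polish(plate)
      # B-score style robust scaling by the median absolute deviation (MAD).
      mad = 1.4826 * np.median(np.abs(corrected - np.median(corrected)))
      b_scores = corrected / mad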

  20. SU-D-BRD-07: Automatic Patient Data Audit and Plan Quality Check to Support ARIA and Eclipse

    SciTech Connect

    Li, X; Li, H; Wu, Y; Mutic, S; Yang, D

    2014-06-01

    Purpose: To ensure patient safety and treatment quality in RT departments that use Varian ARIA and Eclipse, we developed a computer software system and interface functions that allow previously developed electronic chart checking (EcCk) methodologies to support these Varian systems. Methods: ARIA and Eclipse store most patient information in an MSSQL database. We studied the contents of the hundreds of database tables and identified the data elements used for patient treatment management and treatment planning. Interface functions were developed in both C# and MATLAB to support data access from ARIA and Eclipse servers using SQL queries. These functions and additional data processing functions allowed the existing rules and logic from EcCk to support ARIA and Eclipse. Dose and structure information are important for plan quality checks; however, they are stored not in the MSSQL database but as files in Varian private formats, and cannot be processed by external programs. We have therefore implemented a service program, which uses the DB Daemon and File Daemon services on the ARIA server to automatically and seamlessly retrieve dose and structure data as DICOM files. This service was designed to 1) consistently monitor the data access requests from EcCk programs, 2) translate the requests for ARIA daemon services to obtain dose and structure DICOM files, and 3) monitor the process and return the obtained DICOM files back to EcCk programs for plan quality check purposes. Results: EcCk, which was previously designed to support only MOSAIQ TMS and Pinnacle TPS, can now support Varian ARIA and Eclipse. The new EcCk software has been tested and worked well in physics new-start plan checks and IMRT plan integrity and quality checks. Conclusion: Methods and computer programs have been implemented to allow EcCk to support Varian ARIA and Eclipse systems. This project was supported by a research grant from Varian Medical System.
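
    A hedged sketch of the kind of read-only SQL access the abstract describes, here in Python with pyodbc rather than the authors' C#/MATLAB. The server, database, table, and column names are hypothetical placeholders; the real ARIA schema is proprietary and is not reproduced here.

      import pyodbc

      # Hypothetical connection string and credentials.
      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};"
          "SERVER=aria-db.example.org;DATABASE=VARIAN;UID=reader;PWD=secret"
      )
      cursor = conn.cursor()
      # Hypothetical query: list treatment fields for one patient for a plan check.
      cursor.execute(
          "SELECT FieldId, MachineId, Energy FROM RadiationField WHERE PatientSer = ?",
          (12345,),
      )
      for row in cursor.fetchall():
          print(row.FieldId, row.MachineId, row.Energy)
      conn.close()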

  1. WATER QUALITY ASSESSMENT OF AMERICAN FALLS RESERVOIR

    EPA Science Inventory

    A water quality model was developed to support a TMDL for phosphorus related to phytoplankton growth in the reservoir. This report documents the conceptual model, available data, model evaluation, and simulation results.

  2. Dosimetry and image quality assessment in a direct radiography system

    PubMed Central

    Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Paixão, Lucas; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2014-01-01

    Objective To evaluate the mean glandular dose with a solid state detector and the image quality in a direct radiography system, utilizing phantoms. Materials and Methods Irradiations were performed with automatic exposure control and polymethyl methacrylate slabs with different thicknesses to calculate glandular dose values. The image quality was evaluated by means of the structures visualized on the images of the phantoms. Results Considering the uncertainty of the measurements, the mean glandular dose results are in agreement with the values provided by the equipment and with internationally adopted reference levels. Results obtained from images of the phantoms were in agreement with the reference values. Conclusion The present study contributes to verify the equipment conformity as regards dose values and image quality. PMID:25741119
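
    For context on the quantity being measured, a common formulation (due to Dance and colleagues) estimates the mean glandular dose as the product of the incident air kerma and tabulated conversion factors, MGD = K x g x c x s. The sketch below assumes that formulation; the numeric factor values are illustrative, not tabulated values for any particular breast thickness or spectrum.

      def mean_glandular_dose(incident_kerma_mgy, g, c, s):
          """MGD = K * g * c * s: K is incident air kerma at the breast surface (mGy);
          g, c, s are dimensionless conversion factors for glandularity and spectrum."""
          return incident_kerma_mgy * g * c * s

      # Example with made-up conversion factors for a 5 mGy incident kerma.
      print(mean_glandular_dose(5.0, g=0.18, c=1.0, s=1.05), "mGy")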

  3. Teacher Quality and Quality Teaching: Examining the Relationship of a Teacher Assessment to Practice

    ERIC Educational Resources Information Center

    Hill, Heather C.; Umland, Kristin; Litke, Erica; Kapitula, Laura R.

    2012-01-01

    Multiple-choice assessments are frequently used for gauging teacher quality. However, research seldom examines whether results from such assessments generalize to practice. To illuminate this issue, we compare teacher performance on a mathematics assessment, during mathematics instruction, and by student performance on a state assessment. Poor…

  4. Automatic assessment of volume asymmetries applied to hip abductor muscles in patients with hip arthroplasty

    NASA Astrophysics Data System (ADS)

    Klemt, Christian; Modat, Marc; Pichat, Jonas; Cardoso, M. J.; Henckel, Joahnn; Hart, Alister; Ourselin, Sebastien

    2015-03-01

    Metal-on-metal (MoM) hip arthroplasties have been utilised over the last 15 years to restore hip function for 1.5 million patients worldwide. Although widely used, this hip arthroplasty releases metal wear debris which leads to muscle atrophy. The degree of muscle wastage differs across patients, ranging from mild to severe. Long-term outcomes for patients with MoM hip arthroplasty worsen with increasing degrees of muscle atrophy, highlighting the need to automatically segment pathological muscles. The automated segmentation of pathological soft tissues is challenging, as these lack distinct boundaries and differ morphologically across subjects. As a result, no method reported in the literature has been successfully applied to automatically segment pathological muscles. We propose the first automated framework to delineate severely atrophied muscles by applying a novel automated segmentation propagation framework to patients with MoM hip arthroplasty. The proposed algorithm was used to automatically quantify muscle wastage in these patients.
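
    Segmentation propagation in its simplest form registers an atlas image to the patient scan and warps the atlas labels with the recovered transform. The sketch below shows that baseline with SimpleITK; the file names are placeholders, and the paper's actual pipeline (including its handling of pathological anatomy) is more involved.

      import SimpleITK as sitk

      patient = sitk.ReadImage("patient_mri.nii.gz", sitk.sitkFloat32)
      atlas = sitk.ReadImage("atlas_mri.nii.gz", sitk.sitkFloat32)
      atlas_labels = sitk.ReadImage("atlas_muscle_labels.nii.gz")

      # Rigid registration of atlas to patient (a deformable stage would
      # follow in practice).
      reg = sitk.ImageRegistrationMethod()
      reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
      reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
      reg.SetInitialTransform(sitk.CenteredTransformInitializer(
          patient, atlas, sitk.Euler3DTransform(),
          sitk.CenteredTransformInitializerFilter.GEOMETRY))
      reg.SetInterpolator(sitk.sitkLinear)
      transform = reg.Execute(patient, atlas)

      # Nearest-neighbour interpolation keeps the warped label values intact.
      propagated = sitk.Resample(atlas_labels, patient, transform,
                                 sitk.sitkNearestNeighbor, 0)
      sitk.WriteImage(propagated, "patient_muscle_labels.nii.gz")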

  5. Mapping coal quality parameters for economic assessments

    SciTech Connect

    Hohn, M.E.; Smith, C.J.; Ashton, K.C.; McColloch, G.H. Jr.

    1988-08-01

    This study recommends mapping procedures for a data base of coal quality parameters. The West Virginia Geological and Economic Survey has developed a data base that includes about 10,000 analyses of coal samples representing most seams in West Virginia. Coverage is irregular and widely spaced; minimal sample spacing is generally greater than 1 mi. Geologists use this data base to answer public and industry requests for maps that show areas meeting coal quality specifications.

  6. Space shuttle flying qualities and criteria assessment

    NASA Technical Reports Server (NTRS)

    Myers, T. T.; Johnston, D. E.; Mcruer, Duane T.

    1987-01-01

    Work accomplished under a series of study tasks for the Flying Qualities and Flight Control Systems Design Criteria Experiment (OFQ) of the Shuttle Orbiter Experiments Program (OEX) is summarized. The tasks involved: review of the applicability of existing flying quality and flight control system specifications and criteria for the Shuttle; identification of potentially crucial flying quality deficiencies; dynamic modeling of the Shuttle Orbiter pilot/vehicle system in the terminal flight phases; devising a nonintrusive experimental program for extraction and identification of vehicle dynamics, pilot control strategy, and approach and landing performance metrics; and preparation of an OEX approach to produce a data archive and optimize use of the data to develop flying qualities for future space shuttle craft in general. The work included analytic modeling of the Orbiter's unconventional closed-loop dynamics in landing, modeling of pilot control strategies, verification of vehicle dynamics and pilot control strategy from flight data, and review of various existing or proposed aircraft flying quality parameters and criteria in comparison with the unique dynamic characteristics and control aspects of the Shuttle in landing; it closes with a summary of conclusions and recommendations for developing flying quality criteria and design guides for future Shuttle craft.

  7. Key Elements for Judging the Quality of a Risk Assessment

    PubMed Central

    Fenner-Crisp, Penelope A.; Dellarco, Vicki L.

    2016-01-01

    Background: Many reports have been published that contain recommendations for improving the quality, transparency, and usefulness of decision making for risk assessments prepared by agencies of the U.S. federal government. A substantial measure of consensus has emerged regarding the characteristics that high-quality assessments should possess. Objective: The goal was to summarize the key characteristics of a high-quality assessment as identified in the consensus-building process and to integrate them into a guide for use by decision makers, risk assessors, peer reviewers and other interested stakeholders to determine if an assessment meets the criteria for high quality. Discussion: Most of the features cited in the guide are applicable to any type of assessment, whether it encompasses one, two, or all four phases of the risk-assessment paradigm; whether it is qualitative or quantitative; and whether it is screening level or highly sophisticated and complex. Other features are tailored to specific elements of an assessment. Just as agencies at all levels of government are responsible for determining the effectiveness of their programs, so too should they determine the effectiveness of their assessments used in support of their regulatory decisions. Furthermore, if a nongovernmental entity wishes to have its assessments considered in the governmental regulatory decision-making process, then these assessments should be judged in the same rigorous manner and be held to similar standards. Conclusions: The key characteristics of a high-quality assessment can be summarized and integrated into a guide for judging whether an assessment possesses the desired features of high quality, transparency, and usefulness. Citation: Fenner-Crisp PA, Dellarco VL. 2016. Key elements for judging the quality of a risk assessment. Environ Health Perspect 124:1127–1135; http://dx.doi.org/10.1289/ehp.1510483 PMID:26862984

  8. Physical and Chemical Water-Quality Data from Automatic Profiling Systems, Boulder Basin, Lake Mead, Arizona and Nevada, Water Years 2001-04

    USGS Publications Warehouse

    Rowland, Ryan C.; Westenburg, Craig L.; Veley, Ronald J.; Nylund, Walter E.

    2006-01-01

    Water-quality profile data were collected in Las Vegas Bay and near Sentinel Island in Lake Mead, Arizona and Nevada, from October 2000 to September 2004. The majority of the profiles were completed with automatic variable-buoyancy systems equipped with multiparameter water-quality sondes. Profile data near Sentinel Island were collected in August 2004 with an automatic variable-depth-winch system also equipped with a multiparameter water-quality sonde. Physical and chemical water properties collected and recorded by the profiling systems, including depth, water temperature, specific conductance, pH, dissolved-oxygen concentration, and turbidity, are listed in tables, and selected water-quality profile data are shown in graphs.

  9. Application of case classification in healthcare quality assessment in China.

    PubMed

    Xu, Ping; Li, Meina; Zhang, Lulu; Sun, Qingwen; Lv, Shinwei; Lian, Bin; Wei, Min; Kan, Zhang

    2012-01-01

    The purpose of this study was to build a healthcare quality assessment system with disease category as the basic unit of assessment, based on the principles of case classification, and to assess the quality of care in a large hospital in Shanghai. Using the Delphi method, four quality indicators were selected. The data of 124,125 patients discharged from a large general hospital in Shanghai from October 1, 2004 to September 30, 2007 were used to establish quality indicator estimates for each disease. The data of 51,760 patients discharged from October 1, 2007 to September 30, 2008 were used as the testing sample, and the standard scores of each quality indicator for each clinical department were calculated. The total score of each clinical department in the hospital was then calculated based on the differences between the practical scores and the standard. Based on the quality assessment scores, we found that the quality of healthcare in the departments of thyroid and mammary gland surgery, obstetrics and gynaecology, stomatology, dermatology, and paediatrics was better than in other departments. Implementation of case classification for healthcare quality assessment permitted the comparison of quality among different healthcare departments. PMID:22700559
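
    A minimal sketch of the scoring step, assuming the "standard scores" are z-scores of each indicator against reference values estimated from the 2004-2007 sample; the indicator values and weights below are invented for illustration.

      import numpy as np

      reference_mean = np.array([4.2, 0.92, 7.1, 0.85])  # per-indicator standards (assumed)
      reference_std = np.array([0.6, 0.05, 1.3, 0.07])
      weights = np.array([0.3, 0.3, 0.2, 0.2])           # Delphi-derived weights (assumed)

      def department_score(observed):
          z = (observed - reference_mean) / reference_std  # standardised indicators
          return float(np.dot(weights, z))                 # weighted total score

      print(department_score(np.array([3.9, 0.95, 6.4, 0.88])))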

  10. A Photogrammetric Approach for Automatic Traffic Assessment Using Conventional CCTV Camera

    NASA Astrophysics Data System (ADS)

    Zarrinpanjeh, N.; Dadrassjavan, F.; Fattahi, H.

    2015-12-01

    One of the most practical tools for urban traffic monitoring is CCTV imaging, which is widely used for traffic map generation and updating through human surveillance. However, due to the expansion of urban road networks and the use of huge numbers of CCTV cameras, visual inspection and updating of traffic is often ineffective and time-consuming, and therefore does not provide robust real-time updates. In this paper, a method for vehicle detection, counting, and speed estimation is proposed to give a more automated solution for traffic assessment. By removing violating objects, detecting vehicles via morphological filtering, and classifying moving objects in the scene, vehicles are counted and traffic speed is estimated. The proposed method was developed and tested using two datasets, and evaluation values were computed. The results show that the success rate of the algorithm decreases by about 12% as the illumination quality of the imagery decreases.
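
    A hedged sketch of this kind of pipeline with OpenCV: background subtraction on CCTV frames, morphological filtering to clean the foreground mask, then counting the remaining blobs as vehicles. Thresholds, kernel sizes, and the video path are illustrative; speed estimation would additionally require frame-to-frame tracking and a ground-scale mapping.

      import cv2

      cap = cv2.VideoCapture("cctv_feed.mp4")  # placeholder video path
      subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=32)
      kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          mask = subtractor.apply(frame)
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckle noise
          mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small gaps
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          vehicles = [c for c in contours if cv2.contourArea(c) > 500]  # area gate
          print("vehicles in frame:", len(vehicles))
      cap.release()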