Science.gov

Sample records for automatic quality assessment

  1. Automatic quality assessment of planetary images

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, P.; Muller, J.-P.

    2015-10-01

    A significant fraction of planetary images are corrupted beyond the point that much scientific meaning can be extracted. For example, transmission errors result in missing data that are unrecoverable. The available planetary image datasets include many such "bad data", which both occupy valuable storage resources and create false impressions about planetary image availability for specific planetary objects or target areas. In this work, we demonstrate a pipeline that we have developed to automatically assess the quality of planetary images. Additionally, the method discriminates between different types of image degradation, such as low quality originating from camera flaws versus low quality triggered by atmospheric conditions. Examples of quality assessment results for Viking Orbiter imagery will also be presented.

  2. Automatic quality assessment protocol for MRI equipment.

    PubMed

    Bourel, P; Gibon, D; Coste, E; Daanen, V; Rousseau, J

    1999-12-01

    The authors have developed a protocol and software for the quality assessment of MRI equipment with a commercial test object. Automatic image analysis consists of detecting surfaces and objects, defining regions of interest, acquiring reference point coordinates and establishing gray level profiles. Signal-to-noise ratio, image uniformity, geometrical distortion, slice thickness, slice profile, and spatial resolution are checked. The results are periodically analyzed to evaluate possible drifts with time. The measurements are performed weekly on three MRI scanners made by the Siemens Company (VISION 1.5T, EXPERT 1.0T, and OPEN 0.2T). The results obtained for the three scanners over approximately 3.5 years are presented, analyzed, and compared. PMID:10619255

  3. Automatic no-reference image quality assessment.

    PubMed

    Li, Hongjun; Hu, Wei; Xu, Zi-Neng

    2016-01-01

    No-reference image quality assessment aims to predict the visual quality of distorted images without examining the original image as a reference. Most no-reference image quality metrics proposed so far are designed for one or a few predefined distortion types and are unlikely to generalize to images degraded by other types of distortion. There is therefore a strong need for no-reference image quality assessment methods that are applicable to various distortions. In this paper, the authors propose a no-reference image quality assessment method based on a natural image statistics model in the wavelet transform domain. A generalized Gaussian density model is employed to summarize the marginal distribution of the wavelet coefficients of the test images, so that its fitted parameters can be used to evaluate image quality. The proposed algorithm is tested on three large-scale benchmark databases. Experimental results demonstrate that the proposed algorithm is easy to implement and computationally efficient. Furthermore, the method can be applied to many well-known types of image distortion and achieves good prediction performance. PMID:27468398

  4. Algorithm for automatic forced spirometry quality assessment: technological developments.

    PubMed

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of forced spirometry (FS) quality may significantly enhance the potential for extensive deployment of an FS program in the community. Recent studies have demonstrated that the application of the quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society), as implemented in commercially available equipment with automatic quality assessment, can be markedly improved. To this end, an algorithm for automatically assessing FS quality was previously reported. The current research describes the mathematical development of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the 4 traditionally recommended by the ATS/ERS, was carried out. The algorithm was created through a two-step iterative process comprising: (1) an initial version using the standard FS curves recommended by the ATS; and (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterizing the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community. PMID:25551213

  5. Algorithm for Automatic Forced Spirometry Quality Assessment: Technological Developments

    PubMed Central

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of forced spirometry (FS) quality may significantly enhance the potential for extensive deployment of an FS program in the community. Recent studies have demonstrated that the application of the quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society), as implemented in commercially available equipment with automatic quality assessment, can be markedly improved. To this end, an algorithm for automatically assessing FS quality was previously reported. The current research describes the mathematical development of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the 4 traditionally recommended by the ATS/ERS, was carried out. The algorithm was created through a two-step iterative process comprising: (1) an initial version using the standard FS curves recommended by the ATS; and (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterizing the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community. PMID:25551213

  6. Automatic MeSH term assignment and quality assessment.

    PubMed Central

    Kim, W.; Aronson, A. R.; Wilbur, W. J.

    2001-01-01

    For computational purposes documents or other objects are most often represented by a collection of individual attributes that may be strings or numbers. Such attributes are often called features and success in solving a given problem can depend critically on the nature of the features selected to represent documents. Feature selection has received considerable attention in the machine learning literature. In the area of document retrieval we refer to feature selection as indexing. Indexing has not traditionally been evaluated by the same methods used in machine learning feature selection. Here we show how indexing quality may be evaluated in a machine learning setting and apply this methodology to results of the Indexing Initiative at the National Library of Medicine. PMID:11825203

  7. Automatic quality assessment and peak identification of auditory brainstem responses with fitted parametric peaks.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Alvarez, Isaac; Segura, Jose Carlos; Thornton, A Roger D; Sainz, Manuel; Vargas, Jose Luis

    2014-05-01

    The recording of the auditory brainstem response (ABR) is used worldwide for hearing screening purposes. In this process, a precise estimation of the most relevant components is essential for an accurate interpretation of these signals. This evaluation is usually carried out subjectively by an audiologist. However, the use of automatic methods for this purpose is being encouraged nowadays in order to reduce human evaluation biases and ensure uniformity among test conditions, patients, and screening personnel. This article describes a new method that performs automatic quality assessment and identification of the peaks, the fitted parametric peaks (FPP). This method is based on the use of synthesized peaks that are adjusted to the ABR response. The FPP is validated, on one hand, by an analysis of amplitudes and latencies measured manually by an audiologist and automatically by the FPP method in ABR signals recorded at different stimulation rates; and, on the other hand, by contrasting the performance of the FPP method with automatic evaluation techniques based on the correlation coefficient, FSP, and cross correlation with a predefined template waveform, comparing the automatic quality evaluations of these methods with subjective evaluations provided by five experienced evaluators on a set of ABR signals of different quality. The results of this study suggest (a) that the FPP method can be used to provide an accurate parameterization of the peaks in terms of amplitude, latency, and width, and (b) that the FPP remains the method that best approaches the averaged subjective quality evaluation, as well as providing the best results in terms of sensitivity and specificity in ABR signal validation. The significance of these findings and the clinical value of the FPP method are highlighted in this paper. PMID:24661606

  8. Automatic Assessment of Pathological Voice Quality Using Higher-Order Statistics in the LPC Residual Domain

    NASA Astrophysics Data System (ADS)

    Lee, Ji Yeoun; Hahn, Minsoo

    2010-12-01

    A preprocessing scheme based on the linear prediction coefficient (LPC) residual is applied to higher-order statistics (HOSs) for automatic assessment of overall pathological voice quality. The normalized skewness and kurtosis are estimated from the LPC residual and show statistically meaningful distributions that characterize pathological voice quality. Eighty-three voice samples of sustained vowel /a/ phonation are used in this study and are independently assessed by a speech and language therapist (SALT) according to the grade of severity of dysphonia on the GRBAS scale. These are used to train and test a classification and regression tree (CART). The best result is obtained using an optimal decision tree implemented with a combination of the normalized skewness and kurtosis, with an accuracy of 92.9%. It is concluded that the method can be used as an assessment tool, providing a valuable aid to the SALT during clinical evaluation of overall pathological voice quality.
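
    The two higher-order statistics named above are straightforward to compute; a minimal sketch follows (the residual itself would come from an LPC analysis stage that is not shown here):

```python
def normalized_skewness_kurtosis(x):
    """Normalized third and fourth central moments of a signal, i.e. the
    two higher-order statistics the study extracts from the LPC residual."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    skewness = m3 / m2 ** 1.5  # 0 for any symmetric residual
    kurtosis = m4 / m2 ** 2    # 3 for a Gaussian residual
    return skewness, kurtosis
```

    Deviations of these values from their Gaussian baselines are the kind of per-sample features a CART classifier can then threshold.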

  9. Particle quality assessment and sorting for automatic and semiautomatic particle-picking techniques.

    PubMed

    Vargas, J; Abrishami, V; Marabini, R; de la Rosa-Trevín, J M; Zaldivar, A; Carazo, J M; Sorzano, C O S

    2013-09-01

    Three-dimensional reconstruction of biological specimens using electron microscopy by single particle methodologies requires the identification and extraction of the imaged particles from the acquired micrographs. Automatic and semiautomatic particle selection approaches can localize these particles, minimizing user interaction, but at the cost of selecting a non-negligible number of incorrect particles, which can corrupt the final three-dimensional reconstruction. In this work, we present a novel particle quality assessment and sorting method that can separate most erroneously picked particles from correct ones. The proposed method is based on multivariate statistical analysis of a particle set that has been picked previously using any automatic or manual approach. The new method uses different sets of particle descriptors, which are morphology-based, histogram-based, and signal-to-noise-based. We have tested the proposed algorithm on experimental data, obtaining very satisfactory results. The algorithm is freely available as part of the Xmipp 3.0 package [http://xmipp.cnb.csic.es]. PMID:23933392

  10. Polarization transformation as an algorithm for automatic generalization and quality assessment

    NASA Astrophysics Data System (ADS)

    Qian, Haizhong; Meng, Liqiu

    2007-06-01

    For decades it has been a dream of cartographers to computationally mimic the generalization processes in human brains for the derivation of various small-scale target maps or databases from a large-scale source map or database. This paper addresses in a systematic way the polarization transformation (PT), a new algorithm that serves both the purpose of automatic generalization of discrete features and that of quality assurance. By means of PT, two-dimensional point clusters or line networks in the Cartesian system can be transformed into a polar coordinate system and then unfolded as a single spectrum line r = f(α), where r and α stand for the polar radius and the polar angle, respectively. After the transformation, the original features correspond to nodes on the spectrum line, delimited between 0° and 360° along the horizontal axis and between the minimum and maximum polar radius along the vertical axis. Since PT is a lossless transformation, it allows a straightforward analysis and comparison of the original and generalized distributions, so that automatic generalization and quality assurance can be done in this way. Examples illustrate that the PT algorithm meets the requirements of generalizing discrete spatial features on a sound methodological basis.
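
    The transformation itself is compact; below is a minimal sketch of mapping point features to the spectrum line r = f(α). The choice of origin is an assumption, since the abstract does not state how it is fixed:

```python
import math

def polarization_transform(points, origin=(0.0, 0.0)):
    """Map 2-D Cartesian points to (angle_deg, radius) pairs sorted by
    angle, i.e. the nodes of the 'spectrum line' r = f(alpha)."""
    ox, oy = origin
    spectrum = []
    for x, y in points:
        r = math.hypot(x - ox, y - oy)
        alpha = math.degrees(math.atan2(y - oy, x - ox)) % 360.0
        spectrum.append((alpha, r))
    return sorted(spectrum)  # unfold along the 0-360 degree axis
```

    Because the mapping is lossless (every node keeps its exact α and r), a generalized point set can be compared against the original spectrum node by node.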

  11. Groupwise conditional random forests for automatic shape classification and contour quality assessment in radiotherapy planning.

    PubMed

    McIntosh, Chris; Svistoun, Igor; Purdie, Thomas G

    2013-06-01

    Radiation therapy is used to treat cancer patients around the world. High quality treatment plans maximally radiate the targets while minimally radiating healthy organs at risk. In order to judge plan quality and safety, segmentations of the targets and organs at risk are created, and the amount of radiation that will be delivered to each structure is estimated prior to treatment. If the targets or organs at risk are mislabelled, or the segmentations are of poor quality, the safety of the radiation doses will be erroneously reviewed and an unsafe plan could proceed. We propose a technique to automatically label groups of segmentations of different structures from a radiation therapy plan for the joint purposes of providing quality assurance and data mining. Given one or more segmentations and an associated image, we seek to assign medically meaningful labels to each segmentation and report the confidence of that label. Our method uses random forests (RF) to learn joint distributions over the training features, and then exploits a set of learned potential group configurations to build a conditional random field (CRF) that ensures the assignment of labels is consistent across the group of segmentations. The CRF is then solved via a constrained assignment problem. We validate our method on 1574 plans, consisting of 17,579 segmentations, demonstrating an overall classification accuracy of 91.58%. Our results also demonstrate the stability of RF with respect to tree depth and the number of splitting variables in large data sets. PMID:23475352

  12. Assessing the impact of graphical quality on automatic text recognition in digital maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang

    2016-08-01

    Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With the advancement in computational power and algorithm design, map processing systems have been considerably improved over the last decade. However, fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in the graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results will not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system accordingly. These issues could slow down the further advancement of map processing techniques, as such unsuccessful attempts create a discouraged user community and make less sophisticated tools seem more viable. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in their suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of the baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (which can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.

  13. A conceptual study of automatic and semi-automatic quality assurance techniques for ground image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.

  14. Back-and-Forth Methodology for Objective Voice Quality Assessment: From/to Expert Knowledge to/from Automatic Classification of Dysphonia

    NASA Astrophysics Data System (ADS)

    Fredouille, Corinne; Pouchoulin, Gilles; Ghio, Alain; Revis, Joana; Bonastre, Jean-François; Giovanni, Antoine

    2009-12-01

    This paper addresses voice disorder assessment. It proposes an original back-and-forth methodology involving an automatic classification system as well as the knowledge of human experts (machine learning experts, phoneticians, and pathologists). The goal of this methodology is to bring a better understanding of the acoustic phenomena related to dysphonia. The automatic system was validated on a dysphonic corpus (80 female voices), rated according to the GRBAS perceptual scale by an expert jury. Firstly, focused on the frequency domain, the classification system demonstrated the relevance of the 0-3000 Hz frequency band for the classification task based on the GRBAS scale. Subsequently, an automatic phonemic analysis underlined the significance of consonants, and more surprisingly of unvoiced consonants, for the same classification task. Submitted to the human experts, these observations led to a manual analysis of unvoiced plosives, which highlighted a lengthening of voice onset time (VOT) with dysphonia severity, validated by a preliminary statistical analysis.

  15. The SIETTE Automatic Assessment Environment

    ERIC Educational Resources Information Center

    Conejo, Ricardo; Guzmán, Eduardo; Trella, Monica

    2016-01-01

    This article describes the evolution and current state of the domain-independent Siette assessment environment. Siette supports different assessment methods--including classical test theory, item response theory, and computer adaptive testing--and integrates them with multidimensional student models used by intelligent educational systems.…

  16. Automatization of Student Assessment Using Multimedia Technology.

    ERIC Educational Resources Information Center

    Taniar, David; Rahayu, Wenny

    Most use of multimedia technology in teaching and learning to date has emphasized the teaching aspect only. An application of multimedia in examinations has been neglected. This paper addresses how multimedia technology can be applied to the automatization of assessment, by proposing a prototype of a multimedia question bank, which is able to…

  17. Automatic Assessment of 3D Modeling Exams

    ERIC Educational Resources Information Center

    Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.

    2012-01-01

    Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…

  18. Self-assessing target with automatic feedback

    SciTech Connect

    Larkin, Stephen W.; Kramer, Robert L.

    2004-03-02

    A self-assessing target with four quadrants and a method for its use. Each quadrant contains possible causes for why shots are landing in that particular quadrant rather than in the center mass of the target. Each possible cause is followed by a solution intended to help the marksman correct the problem causing the marksman to shoot in that particular area. In addition, the self-assessing target lists possible causes of general shooting errors and solutions to them. The automatic feedback, with instant suggestions and corrections, enables shooters to improve their marksmanship.

  19. Toward automatic recognition of high quality clinical evidence.

    PubMed

    Kilicoglu, Halil; Demner-Fushman, Dina; Rindflesch, Thomas C; Wilczynski, Nancy L; Haynes, R Brian

    2008-01-01

    Automatic methods for recognizing topically relevant documents supported by high quality research can assist clinicians in practicing evidence-based medicine. We approach the challenge of identifying articles with high quality clinical evidence as a binary classification problem. Combining predictions from supervised machine learning methods and using deep semantic features, we achieve 73.5% precision and 67% recall. PMID:18998881

  20. Automatic Test-Based Assessment of Programming: A Review

    ERIC Educational Resources Information Center

    Douce, Christopher; Livingstone, David; Orwell, James

    2005-01-01

    Systems that automatically assess student programming assignments have been designed and used for over forty years. Systems that objectively test and mark student programming work were developed simultaneously with programming assessment in the computer science curriculum. This article reviews a number of influential automatic assessment systems,…

  1. Automatic assessment of ultrasound image usability

    NASA Astrophysics Data System (ADS)

    Valente, Luca; Funka-Lea, Gareth; Stoll, Jeffrey

    2011-03-01

    We present a novel and efficient approach for evaluating the quality of ultrasound images. Image acquisition is sensitive to skin contact and transducer orientation and requires both time and technical skill to be done properly. Images commonly suffer degradation due to acoustic shadows and signal attenuation, which present as regions of low signal intensity masking anatomical details and making the images partly or totally unusable. As ultrasound image acquisition and analysis becomes increasingly automated, it is beneficial to also automate the estimation of image quality. Towards this end, we present an algorithm that classifies regions of an image as usable or unusable. Example applications of this algorithm include improved compounding of free-hand 3D ultrasound volumes by eliminating unusable data and improved automatic feature detection by limiting detection to usable areas only. The algorithm operates in two steps. First, it classifies the image into bright areas, likely to have image content, and dark areas, likely to have no content. Second, it classifies the dark areas into unusable (i.e., due to shadowing and/or signal loss) and usable (i.e., anatomically accurate dark regions, such as within a blood vessel) sub-areas. The classification considers several factors, including statistical information, gradient intensity, and geometric properties such as shape and relative position. Relative weighting of the factors was obtained through the training of a Support Vector Machine. Classification results for both human and phantom images are presented and compared to manual classifications. The method achieves 91% sensitivity and 91% specificity for usable regions of human scans.
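
    A minimal sketch of the two-step idea for a single image region is given below. The thresholds and the variance-as-texture proxy are invented for illustration; the paper instead weights several statistical, gradient, and geometric factors with a trained Support Vector Machine:

```python
def region_label(pixels, bright_thresh=0.3, texture_thresh=1e-3):
    """pixels: intensities in [0, 1] for one region of an ultrasound image.
    Step 1 separates bright (likely content) from dark regions; step 2
    keeps dark regions that retain speckle texture (e.g. a vessel lumen)
    and flags texture-free dark regions as shadow or signal loss."""
    n = len(pixels)
    m = sum(pixels) / n
    if m >= bright_thresh:
        return "content"
    variance = sum((p - m) ** 2 for p in pixels) / n
    return "usable-dark" if variance > texture_thresh else "unusable-dark"
```

    Downstream tasks such as volume compounding would then simply discard the regions labelled unusable.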

  2. [Quality assessment in surgery].

    PubMed

    Espinoza G, Ricardo; Espinoza G, Juan Pablo

    2016-06-01

    This paper deals with quality from the perspective of structure, processes, and indicators in surgery. In this specialty, there is a close relationship between effectiveness and quality. We review the definition and classification of surgical complications as an objective means of assessing quality. The great diversity of definitions and risk assessments of surgical complications hampers comparisons between surgical centers or the evaluation of a single center over time. We discuss the different factors associated with surgical risk and some of the predictive systems for complications and mortality. At present, standardized definitions and comparisons are carried out correcting for risk factors. Thus, indicators of mortality, complications, hospitalization length, postoperative quality of life, and costs become comparable between different groups. The volume of procedures performed by a given center or surgeon is emphasized as a quality indicator. PMID:27598495

  3. Quality Assessment in Oncology

    SciTech Connect

    Albert, Jeffrey M.; Das, Prajnan

    2012-07-01

    The movement to improve healthcare quality has led to a need for carefully designed quality indicators that accurately reflect the quality of care. Many different measures have been proposed and continue to be developed by governmental agencies and accrediting bodies. However, given the inherent differences in the delivery of care among medical specialties, the same indicators will not be valid across all of them. Specifically, oncology is a field in which it can be difficult to develop quality indicators, because the effectiveness of an oncologic intervention is often not immediately apparent, and the multidisciplinary nature of the field necessarily involves many different specialties. Existing and emerging comparative effectiveness data are helping to guide evidence-based practice, and the increasing availability of these data provides the opportunity to identify key structure and process measures that predict for quality outcomes. The increasing emphasis on quality and efficiency will continue to compel the medical profession to identify appropriate quality measures to facilitate quality improvement efforts and to guide accreditation, credentialing, and reimbursement. Given the wide-reaching implications of quality metrics, it is essential that they be developed and implemented with scientific rigor. The aims of the present report were to review the current state of quality assessment in oncology, identify existing indicators with the best evidence to support their implementation, and propose a framework for identifying and refining measures most indicative of true quality in oncologic care.

  4. Automatic phonetogram recording supplemented with acoustical voice-quality parameters.

    PubMed

    Pabon, J P; Plomp, R

    1988-12-01

    A new method for automatic voice-quality registration is presented. The method is based on a technique called phonetography, which is the registration of the dynamic range of a voice as a function of fundamental frequency. In the new phonetogram-recording method, fundamental frequency (Fo) and sound-pressure level (SPL) are automatically measured and represented in an XY-diagram. Three additional acoustical voice-quality parameters are measured simultaneously with Fo and SPL: (a) jitter in Fo as a measure of roughness, (b) the SPL difference between the 0-1.5 kHz and 1.5-5 kHz bands as a measure of sharpness, and (c) the vocal-noise level above 5 kHz as a measure of breathiness. With this method, the voice-quality parameter values, which may change substantially as a function of Fo and SPL, are pinned to a reference position in the patient's total vocal range. Seen as a reference tool, the phonetogram opens the possibility of a more meaningful comparison of voice-quality data. Some examples demonstrating the dependence of the chosen quality parameters on Fo and SPL are given. PMID:3230899
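
    Of the three quality parameters, jitter is the simplest to illustrate. One common definition (the abstract does not give the exact formula used) is the mean absolute cycle-to-cycle Fo difference relative to the mean Fo:

```python
def jitter_percent(f0_track):
    """Relative average perturbation of consecutive Fo estimates (%),
    one common jitter definition; higher values suggest a rougher voice."""
    diffs = [abs(a - b) for a, b in zip(f0_track, f0_track[1:])]
    mean_f0 = sum(f0_track) / len(f0_track)
    return 100.0 * (sum(diffs) / len(diffs)) / mean_f0
```

    In a phonetogram, a value like this would be computed per (Fo, SPL) cell, pinning the measurement to a position in the vocal range rather than to a single sustained tone.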

  5. The Educational Quality Assessment

    ERIC Educational Resources Information Center

    Dinsmore, Peter; And Others

    1976-01-01

    Pennsylvania's Educational Quality Assessment (EQA) program is discussed in terms of its historical background, ACLU objections to its alleged infringements of individual rights, and reactions of students who have taken it. Journal is available from Ritter Hall, Fourth Floor, Temple University, Philadelphia, Pa. 19122. (AV)

  6. Automatic red eye correction and its quality metric

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho

    2008-01-01

    Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, making photos more pleasant for the observer, is an important task. A novel, efficient technique for automatic correction of red eyes, aimed at photo printers, is proposed. The algorithm is independent of face orientation and capable of detecting paired red eyes as well as single red eyes. The approach is based on the application of 3D tables with typicalness levels for red eyes and human skin tones, and of directional edge detection filters for processing the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening, and blending with the initial image. Several implementations of the approach are possible, trading off detection and correction quality, processing time, and memory footprint. A numeric quality criterion for automatic red eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.
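
    The "redness image" mentioned above can be illustrated with a simple per-pixel measure. The formula and threshold below are illustrative assumptions, not the paper's 3D-table approach:

```python
def redness_map(rgb_pixels, thresh=0.2):
    """rgb_pixels: list of (r, g, b) tuples with 0-255 channels.
    Returns per-pixel redness in [0, 1] (red excess over green and blue)
    and a binary mask of candidate red-eye pixels."""
    values = [max(0, r - max(g, b)) / 255.0 for r, g, b in rgb_pixels]
    return values, [v >= thresh for v in values]
```

    A detector of the kind described would then run shape and skin-tone tests, and finally a classifier cascade, over the candidate regions in such a mask.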

  7. An automatic method for CASP9 free modeling structure prediction assessment

    PubMed Central

    Cong, Qian; Kinch, Lisa N.; Pei, Jimin; Shi, Shuoyong; Grishin, Vyacheslav N.; Li, Wenlin; Grishin, Nick V.

    2011-01-01

    Motivation: Manual inspection has been applied to and is well accepted for assessing critical assessment of protein structure prediction (CASP) free modeling (FM) category predictions over the years. Such manual assessment requires expertise and significant time investment, yet has the problems of being subjective and unable to differentiate models of similar quality. It is beneficial to incorporate the ideas behind manual inspection to an automatic score system, which could provide objective and reproducible assessment of structure models. Results: Inspired by our experience in CASP9 FM category assessment, we developed an automatic superimposition independent method named Quality Control Score (QCS) for structure prediction assessment. QCS captures both global and local structural features, with emphasis on global topology. We applied this method to all FM targets from CASP9, and overall the results showed the best agreement with Manual Inspection Scores among automatic prediction assessment methods previously applied in CASPs, such as Global Distance Test Total Score (GDT_TS) and Contact Score (CS). As one of the important components to guide our assessment of CASP9 FM category predictions, this method correlates well with other scoring methods and yet is able to reveal good-quality models that are missed by GDT_TS. Availability: The script for QCS calculation is available at http://prodata.swmed.edu/QCS/. Contact: grishin@chop.swmed.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21994223

  8. On Automatic Assessment and Conceptual Understanding

    ERIC Educational Resources Information Center

    Rasila, Antti; Malinen, Jarmo; Tiitu, Hannu

    2015-01-01

    We consider two complementary aspects of mathematical skills, i.e. "procedural fluency" and "conceptual understanding," from a point of view that is related to modern e-learning environments and computer-based assessment. Pedagogical background of teaching mathematics is discussed, and it is proposed that the traditional book…

  9. Automatic Summary Assessment for Intelligent Tutoring Systems

    ERIC Educational Resources Information Center

    He, Yulan; Hui, Siu Cheung; Quan, Tho Thanh

    2009-01-01

    Summary writing is an important part of many English Language Examinations. As grading students' summary writings is a very time-consuming task, computer-assisted assessment will help teachers carry out the grading more effectively. Several techniques such as latent semantic analysis (LSA), n-gram co-occurrence and BLEU have been proposed to…

  10. Automatically Assessing Graph-Based Diagrams

    ERIC Educational Resources Information Center

    Thomas, Pete; Smith, Neil; Waugh, Kevin

    2008-01-01

    To date there has been very little work on the machine understanding of imprecise diagrams, such as diagrams drawn by students in response to assessment questions. Imprecise diagrams exhibit faults such as missing, extraneous and incorrectly formed elements. The semantics of imprecise diagrams are difficult to determine. While there have been…

  11. Joint Statement: Quality Assessment and Quality Audit.

    ERIC Educational Resources Information Center

    Scottish Higher Education Funding Council, Edinburgh.

    This document sets out the respective responsibilities of the Scottish Higher Education Funding Council (SHEFC) and the Higher Education Quality Council (HEQC) as they currently stand in the field of higher education quality assurance. The SHEFC and the HEQC are both agencies that fulfill legislatively mandated quality assessment and control…

  12. On the Use of Resubmissions in Automatic Assessment Systems

    ERIC Educational Resources Information Center

    Karavirta, Ville; Korhonen, Ari; Malmi, Lauri

    2006-01-01

    Automatic assessment systems generally support immediate grading and response on learners' submissions. They also allow learners to consider the feedback, revise, and resubmit their solutions. Several strategies exist to implement the resubmission policy. The ultimate goal, however, is to improve the learning outcomes, and thus the strategies…

  13. Investigating Prompt Difficulty in an Automatically Scored Speaking Performance Assessment

    ERIC Educational Resources Information Center

    Cox, Troy L.

    2013-01-01

    Speaking assessments for second language learners have traditionally been expensive to administer because of the cost of rating the speech samples. To reduce the cost, many researchers are investigating the potential of using automatic speech recognition (ASR) as a means to score examinee responses to open-ended prompts. This study examined the…

  14. Automatic personality assessment through social media language.

    PubMed

    Park, Gregory; Schwartz, H Andrew; Eichstaedt, Johannes C; Kern, Margaret L; Kosinski, Michal; Stillwell, David J; Ungar, Lyle H; Seligman, Martin E P

    2015-06-01

    Language use is a psychologically rich, stable individual difference with well-established correlations to personality. We describe a method for assessing personality using an open-vocabulary analysis of language from social media. We compiled the written language from 66,732 Facebook users and their questionnaire-based self-reported Big Five personality traits, and then we built a predictive model of personality based on their language. We used this model to predict the 5 personality factors in a separate sample of 4,824 Facebook users, examining (a) convergence with self-reports of personality at the domain- and facet-level; (b) discriminant validity between predictions of distinct traits; (c) agreement with informant reports of personality; (d) patterns of correlations with external criteria (e.g., number of friends, political attitudes, impulsiveness); and (e) test-retest reliability over 6-month intervals. Results indicated that language-based assessments can constitute valid personality measures: they agreed with self-reports and informant reports of personality, added incremental validity over informant reports, adequately discriminated between traits, exhibited patterns of correlations with external criteria similar to those found with self-reported personality, and were stable over 6-month intervals. Analysis of predictive language can provide rich portraits of the mental life associated with traits. This approach can complement and extend traditional methods, providing researchers with an additional measure that can quickly and cheaply assess large groups of participants with minimal burden. PMID:25365036
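
    The core of such an open-vocabulary approach is a linear model over relative word frequencies. The toy scorer below illustrates the prediction step only; the weights shown are hypothetical, whereas the study learns thousands of them by regularized regression on Facebook language.

```python
def word_frequencies(text):
    """Relative frequency of each word in a text (naive whitespace tokenizer)."""
    words = text.lower().split()
    total = len(words)
    freqs = {}
    for w in words:
        freqs[w] = freqs.get(w, 0) + 1
    return {w: c / total for w, c in freqs.items()}

def predict_trait(text, weights, intercept=0.0):
    """Linear language model: trait score = intercept + sum(weight * relative frequency).
    `weights` maps words to learned coefficients (hypothetical values in the test)."""
    freqs = word_frequencies(text)
    return intercept + sum(weights.get(w, 0.0) * f for w, f in freqs.items())
```

    With positive weights on sociable vocabulary, a text rich in such words scores higher on the corresponding trait than one without it.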

  15. Towards A Clinical Tool For Automatic Intelligibility Assessment.

    PubMed

    Berisha, Visar; Utianski, Rene; Liss, Julie

    2013-01-01

    An important, yet under-explored, problem in speech processing is the automatic assessment of intelligibility for pathological speech. In practice, intelligibility assessment is often done through subjective tests administered by speech pathologists; however research has shown that these tests are inconsistent, costly, and exhibit poor reliability. Although some automatic methods for intelligibility assessment for telecommunications exist, research specific to pathological speech has been limited. Here, we propose an algorithm that captures important multi-scale perceptual cues shown to correlate well with intelligibility. Nonlinear classifiers are trained at each time scale and a final intelligibility decision is made using ensemble learning methods from machine learning. Preliminary results indicate a marked improvement in intelligibility assessment over published baseline results. PMID:25004985

  16. Towards A Clinical Tool For Automatic Intelligibility Assessment

    PubMed Central

    Berisha, Visar; Utianski, Rene; Liss, Julie

    2014-01-01

    An important, yet under-explored, problem in speech processing is the automatic assessment of intelligibility for pathological speech. In practice, intelligibility assessment is often done through subjective tests administered by speech pathologists; however research has shown that these tests are inconsistent, costly, and exhibit poor reliability. Although some automatic methods for intelligibility assessment for telecommunications exist, research specific to pathological speech has been limited. Here, we propose an algorithm that captures important multi-scale perceptual cues shown to correlate well with intelligibility. Nonlinear classifiers are trained at each time scale and a final intelligibility decision is made using ensemble learning methods from machine learning. Preliminary results indicate a marked improvement in intelligibility assessment over published baseline results. PMID:25004985

  17. Automatic quality control in clinical ¹H MRSI of brain cancer.

    PubMed

    Pedrosa de Barros, Nuno; McKinley, Richard; Knecht, Urspeter; Wiest, Roland; Slotboom, Johannes

    2016-05-01

    MRSI grids frequently show spectra with poor quality, mainly because of the high sensitivity of MRS to field inhomogeneities. These poor quality spectra are prone to quantification and/or interpretation errors that can have a significant impact on the clinical use of spectroscopic data. Therefore, quality control of the spectra should always precede their clinical use. When performed manually, quality assessment of MRSI spectra is not only a tedious and time-consuming task, but is also affected by human subjectivity. Consequently, automatic, fast and reliable methods for spectral quality assessment are of utmost interest. In this article, we present a new random forest-based method for automatic quality assessment of ¹H MRSI brain spectra, which uses a new set of MRS signal features. The random forest classifier was trained on spectra from 40 MRSI grids that were classified as acceptable or non-acceptable by two expert spectroscopists. To account for the effects of intra-rater reliability, each spectrum was rated for quality three times by each rater. The automatic method classified these spectra with an area under the curve (AUC) of 0.976. Furthermore, in the subset of spectra containing only the cases that were classified every time in the same way by the spectroscopists, an AUC of 0.998 was obtained. Feature importance for the classification was also evaluated. Frequency domain skewness and kurtosis, as well as time domain signal-to-noise ratios (SNRs) in the ranges 50-75 ms and 75-100 ms, were the most important features. Given that the method is able to assess a whole MRSI grid faster than a spectroscopist (approximately 3 s versus approximately 3 min), and without loss of accuracy (agreement between classifier trained with just one session and any of the other labelling sessions, 89.88%; agreement between any two labelling sessions, 89.03%), the authors suggest its implementation in the clinical routine. The method presented in this article was implemented
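
    The features the abstract ranks as most important (skewness, kurtosis, SNR) are standard statistics that can be computed directly from a spectrum. The sketch below uses textbook population moments and a peak-over-noise SNR estimate; the paper's exact feature definitions and time ranges are not reproduced here.

```python
def moments(xs):
    """Population mean and variance of a sequence of samples."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var

def skewness(xs):
    """Population skewness (third standardized moment); assumes non-constant input."""
    mean, var = moments(xs)
    sd = var ** 0.5
    return sum(((x - mean) / sd) ** 3 for x in xs) / len(xs)

def kurtosis(xs):
    """Population kurtosis (fourth standardized moment, not excess)."""
    mean, var = moments(xs)
    sd = var ** 0.5
    return sum(((x - mean) / sd) ** 4 for x in xs) / len(xs)

def snr(signal, noise_region):
    """SNR estimate: peak signal amplitude over the standard deviation of a
    noise-only region (an assumption about how the time-domain SNRs are formed)."""
    _, var_n = moments(noise_region)
    return max(abs(s) for s in signal) / (var_n ** 0.5)
```

    A random forest would then be trained on vectors of such features, one vector per spectrum.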

  18. Portfolio Assessment and Quality Teaching

    ERIC Educational Resources Information Center

    Kim, Youb; Yazdian, Lisa Sensale

    2014-01-01

    Our article focuses on using portfolio assessment to craft quality teaching. Extant research literature on portfolio assessment suggests that the primary purpose of assessment is to serve learning, and portfolio assessments facilitate the process of making linkages among assessment, curriculum, and student learning (Asp, 2000; Bergeron, Wermuth,…

  19. Infrared machine vision system for the automatic detection of olive fruit quality.

    PubMed

    Guzmán, Elena; Baeten, Vincent; Pierna, Juan Antonio Fernández; García-Mesa, José A

    2013-11-15

    External quality is an important factor in the extraction of olive oil and the marketing of olive fruits. The appearance and presence of external damage are factors that influence the quality of the oil extracted and the perception of consumers, determining the level of acceptance prior to purchase in the case of table olives. The aim of this paper is to report on artificial vision techniques developed for the online estimation of olive quality and to assess the effectiveness of these techniques in evaluating quality based on detecting external defects. This method of classifying olives according to the presence of defects is based on an infrared (IR) vision system. Images of defects were acquired using a digital monochrome camera with near-infrared (NIR) band-pass filters. The original images were processed using segmentation algorithms, edge detection and pixel value intensity to classify the whole fruit. The detection of the defect involved a pixel classification procedure based on nonparametric models of the healthy and defective areas of olives. Classification tests were performed on olives to assess the effectiveness of the proposed method. This research showed that the IR vision system is a useful technology for the automatic assessment of olives that has the potential for use in offline inspection and for online sorting for defects and the presence of surface damage, easily distinguishing those that do not meet minimum quality requirements. PMID:24148491
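
    Nonparametric pixel classification of the kind described can be sketched as a per-pixel likelihood comparison between intensity histograms of healthy and defective regions. This is a minimal illustration under assumed histograms, not the paper's implementation.

```python
def classify_pixels(image, healthy_hist, defect_hist):
    """Label each NIR pixel 'defect' if the defect intensity histogram assigns it
    a higher likelihood than the healthy one (nonparametric, per-intensity-bin).
    `image` is a list of rows of integer intensities."""
    return [['defect' if defect_hist.get(px, 0) > healthy_hist.get(px, 0)
             else 'healthy' for px in row]
            for row in image]

def has_defect(labels, min_pixels=1):
    """Flag a fruit as defective if enough pixels were labelled 'defect'."""
    return sum(lab == 'defect' for row in labels for lab in row) >= min_pixels
```

    In practice the histograms would be estimated from annotated training images, and `min_pixels` would be tuned to reject isolated misclassified pixels.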

  20. Automatic ECG quality scoring methodology: mimicking human annotators.

    PubMed

    Johannesen, Lars; Galeotti, Loriano

    2012-09-01

    An algorithm to determine the quality of electrocardiograms (ECGs) can enable inexperienced nurses and paramedics to record ECGs of sufficient diagnostic quality. Previously, we proposed an algorithm for determining if ECG recordings are of acceptable quality, which was entered in the PhysioNet Challenge 2011. In the present work, we propose an improved two-step algorithm, which first rejects ECGs with macroscopic errors (signal absent, large voltage shifts or saturation) and subsequently quantifies the noise (baseline, powerline or muscular noise) on a continuous scale. The performance of the improved algorithm was evaluated using the PhysioNet Challenge database (1500 ECGs rated by humans for signal quality). We achieved a classification accuracy of 92.3% on the training set and 90.0% on the test set. The improved algorithm is capable of detecting ECGs with macroscopic errors and giving the user a score of the overall quality. This allows the user to assess the degree of noise and decide if it is acceptable depending on the purpose of the recording. PMID:22902927
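
    The two-step structure of the algorithm can be sketched as a macroscopic-error gate followed by a continuous noise score. The thresholds and the second-difference noise measure below are illustrative assumptions, not the authors' published implementation.

```python
def macroscopic_ok(ecg, adc_min=0, adc_max=1023):
    """Step 1: reject recordings with an absent signal (flat line) or
    values pinned at the ADC limits (saturation / large voltage shifts)."""
    if max(ecg) == min(ecg):
        return False  # signal absent
    if min(ecg) <= adc_min or max(ecg) >= adc_max:
        return False  # saturated or shifted off-scale
    return True

def noise_score(ecg):
    """Step 2: crude continuous noise measure via the mean absolute second
    difference, which grows with high-frequency (e.g. muscular) noise."""
    diffs = [abs(ecg[i + 1] - 2 * ecg[i] + ecg[i - 1])
             for i in range(1, len(ecg) - 1)]
    return sum(diffs) / len(diffs)
```

    A recording passing step 1 would then be reported with its step-2 score, letting the user decide whether the noise level is acceptable for the intended purpose.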

  1. Disordered Speech Assessment Using Automatic Methods Based on Quantitative Measures

    NASA Astrophysics Data System (ADS)

    Gu, Lingyun; Harris, John G.; Shrivastav, Rahul; Sapienza, Christine

    2005-12-01

    Speech quality assessment methods are necessary for evaluating and documenting treatment outcomes of patients suffering from degraded speech due to Parkinson's disease, stroke, or other disease processes. Subjective methods of speech quality assessment are more accurate and more robust than objective methods but are time-consuming and costly. We propose a novel objective measure of speech quality assessment that builds on traditional speech processing techniques such as dynamic time warping (DTW) and the Itakura-Saito (IS) distortion measure. Initial results show that our objective measure correlates well with the more expensive subjective methods.
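
    Dynamic time warping, one of the building blocks named above, aligns two sequences of unequal length by minimizing cumulative frame-to-frame distortion. The classic dynamic-programming recurrence is sketched below; in the speech setting the scalar distance would be replaced by a spectral distortion such as Itakura-Saito between feature frames.

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic time warping between two 1-D sequences.
    D[i][j] holds the minimal cumulative cost of aligning a[:i] with b[:j]."""
    inf = float('inf')
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

    Note that a repeated sample in one sequence costs nothing: warping absorbs local timing differences, which is exactly why DTW suits comparing disordered speech against a reference.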

  2. Solar Radiation Empirical Quality Assessment

    Energy Science and Technology Software Center (ESTSC)

    1994-03-01

    The SERIQC1 subroutine performs quality assessment of one, two, or three-component solar radiation data (global horizontal, direct normal, and diffuse horizontal) obtained from one-minute to one-hour integrations. Included in the package is the QCFIT tool to derive expected values from historical data, and the SERIQC1 subroutine to assess the quality of measurement data.

  3. WATER QUALITY ASSESSMENT METHODOLOGY (WQAM)

    EPA Science Inventory

    The Water Quality Assessment Methodology (WQAM) is a screening procedure for toxic and conventional pollutants in surface and ground waters and is a collection of formulas, tables, and graphs that planners can use for preliminary assessment of surface and ground water quality in ...

  4. MRI-Guided Target Motion Assessment using Dynamic Automatic Segmentation

    NASA Astrophysics Data System (ADS)

    Saenz, Daniel L.

    Motion significantly impacts the radiotherapy process and represents one of the persisting problems in treatment delivery. In order to improve motion management techniques and implement future image guided radiotherapy tools such as MRI-guidance, automatic segmentation algorithms hold great promise. Such algorithms are attractive due to their direct measurement accuracy, speed, and ability to assess motion trajectories for daily treatment plan modifications. We developed and optimized an automatic segmentation technique to enable target tracking using MR cines, 4D-MRI, and 4D-CT. This algorithm overcomes weaknesses in automatic contouring such as lack of image contrast, subjectivity, slow speed, and lack of differentiating feature vectors by the use of morphological processing. The software is enhanced with predictive parameter capabilities and dynamic processing. The 4D-MRI images are acquired by applying a retrospective phase binning approach to radially-acquired MR image projections. The quantification of motion is validated with a motor phantom undergoing a known trajectory in 4D-CT, 4D-MRI, and in MR cines from the ViewRay MR-Guided RT system. In addition, a clinical case study demonstrates wide-reaching implications of the software to segment lesions in the brain and lung as well as critical structures such as the liver. Auto-segmentation results from MR cines of canines correlate well with manually drawn contours, both in terms of Dice similarity coefficient and agreement of extracted motion trajectories.

  5. A new quality assessment and improvement system for print media

    NASA Astrophysics Data System (ADS)

    Liu, Mohan; Konya, Iuliu; Nandzik, Jan; Flores-Herr, Nicolas; Eickeler, Stefan; Ndjiki-Nya, Patrick

    2012-12-01

    Print media collections of considerable size are held by cultural heritage organizations and will soon be subject to digitization activities. However, technical content quality management in digitization workflows strongly relies on human monitoring. This heavy human intervention is cost-intensive and time consuming, which makes automation mandatory. In this article, a new automatic quality assessment and improvement system is proposed. The digitized source image and color reference target are extracted from the raw digitized images by an automatic segmentation process. The target is evaluated by a reference-based algorithm. No-reference quality metrics are applied to the source image. Experimental results are provided to illustrate the performance of the proposed system. We show that it features a good performance in the extraction as well as in the quality assessment step compared to the state-of-the-art. The impact of efficient and dedicated quality assessors on the optimization step is extensively documented.

  6. Assessing Air Quality.

    ERIC Educational Resources Information Center

    Bloomfield, Molly

    2000-01-01

    Introduces the Science and Math Investigative Learning Experiences (SMILE) program. Presents an air quality problem as an example of an integrated challenge problem activity developed by the SMILE program. Explains the process of challenge problems and provides a list of the National Science Education Standards addressed by challenge problems.…

  7. Irrigation water quality assessments

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Increasing demands on fresh water supplies by municipal and industrial users means decreased fresh water availability for irrigated agriculture in semi arid and arid regions. There is potential for agricultural use of treated wastewaters and low quality waters for irrigation but this will require co...

  8. CART IV: improving automatic camouflage assessment with assistance methods

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Müller, Markus

    2010-04-01

    In order to facilitate systematic, computer aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was developed for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007, SPIE 2008 and SPIE 2009 [1], [2], [3]). It comprises a semi-automatic marking of target objects (ground truth generation) including their propagation over the image sequence and the evaluation via user-defined feature extractors. The conspicuity of camouflaged objects due to their movement can be assessed with a purpose-built processing method named MTI snail track algorithm. This paper presents the enhancements over the recent year and addresses procedures to assist the camouflage assessment of moving objects in image data with strong noise or image artefacts. This significantly broadens the range of applications of the evaluation methods. For example, some noisy infrared image data can be evaluated for the first time by applying the presented methods, which explore the correlations between camouflage assessment, MTI (moving target indication) and dedicated noise filtering.

  9. Institutional Consequences of Quality Assessment

    ERIC Educational Resources Information Center

    Joao Rosa, Maria; Tavares, Diana; Amaral, Alberto

    2006-01-01

    This paper analyses the opinions of Portuguese university rectors and academics on the quality assessment system and its consequences at the institutional level. The results obtained show that university staff (rectors and academics, with more of the former than the latter) held optimistic views of the positive consequences of quality assessment…

  10. Quality assessment of urban environment

    NASA Astrophysics Data System (ADS)

    Ovsiannikova, T. Y.; Nikolaenko, M. N.

    2015-01-01

    This paper investigates the applicability of quality management principles to construction products. We propose to extend the boundaries of quality management in construction by transferring its principles to urban systems, higher-level economic systems whose qualitative characteristics are substantially defined by the quality of construction products. Buildings and structures form the spatial-material basis of cities and the most important component of the life sphere, the urban environment. The authors justify the need for the assessment of urban environment quality as an important factor of social welfare and life quality in urban areas, and propose a definition of the term "urban environment". The methodology of quality assessment of the urban environment is based on an integrated approach, which includes the system analysis of all factors and the application of both quantitative assessment methods (calculation of particular and integrated indicators) and qualitative methods (expert estimates and surveys). The authors propose a system of indicators characterizing the quality of the urban environment. These indicators fall into four classes, and the methodology for their definition is shown. The paper presents results of quality assessment of the urban environment for several Siberian regions and a comparative analysis of these results.

  11. Automatic portion estimation and visual refinement in mobile dietary assessment

    NASA Astrophysics Data System (ADS)

    Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.

    2010-01-01

    As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These "portion volumes" utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach.

  12. Automatism

    PubMed Central

    McCaldon, R. J.

    1964-01-01

    Individuals can carry out complex activity while in a state of impaired consciousness, a condition termed “automatism”. Consciousness must be considered from both an organic and a psychological aspect, because impairment of consciousness may occur in both ways. Automatism may be classified as normal (hypnosis), organic (temporal lobe epilepsy), psychogenic (dissociative fugue) or feigned. Often painstaking clinical investigation is necessary to clarify the diagnosis. There is legal precedent for assuming that all crimes must embody both consciousness and will. Jurists are loath to apply this principle without reservation, as this would necessitate acquittal and release of potentially dangerous individuals. However, with the sole exception of the defence of insanity, there is at present no legislation to prohibit release without further investigation of anyone acquitted of a crime on the grounds of “automatism”. PMID:14199824

  13. Educational Quality Assessment. 1986 Data.

    ERIC Educational Resources Information Center

    Pennsylvania State Dept. of Education, Harrisburg. Div. of Educational Testing and Evaluation.

    This manual contains the statistics generated from the Pennsylvania Quality Education Assessment (QEA) administered in 1986. The assessment was to evaluate the achievement of certain educational objectives in grades 4, 6, 7, 9, and 11 in the public schools. The tests were in the following areas: reading comprehension, writing skills, mathematics,…

  14. Objective view synthesis quality assessment

    NASA Astrophysics Data System (ADS)

    Conze, Pierre-Henri; Robert, Philippe; Morin, Luce

    2012-03-01

    View synthesis introduces geometric distortions which are not handled efficiently by existing image quality assessment metrics. Despite the widespread adoption of 3-D technology, notably 3D television (3DTV) and free-viewpoint television (FTV), the field of view synthesis quality assessment has not yet been widely investigated, and new quality metrics are required. In this study, we propose a new full-reference objective quality assessment metric: the View Synthesis Quality Assessment (VSQA) metric. Our method is dedicated to artifact detection in synthesized viewpoints and aims to handle areas where disparity estimation may fail: thin objects, object borders, transparency, variations of illumination or color differences between left and right views, periodic objects... The key feature of the proposed method is the use of three visibility maps which characterize complexity in terms of textures, diversity of gradient orientations and presence of high contrast. Moreover, the VSQA metric can be defined as an extension of any existing 2D image quality assessment metric. Experimental tests have shown the effectiveness of the proposed method.

  15. Salient motion features for video quality assessment.

    PubMed

    Ćulibrk, Dubravko; Mirković, Milan; Zlokolica, Vladimir; Pokrić, Maja; Crnojević, Vladimir; Kukolj, Dragan

    2011-04-01

    Design of algorithms that are able to estimate video quality as perceived by human observers is of interest for a number of applications. Depending on the video content, the artifacts introduced by the coding process can be more or less pronounced and diversely affect the quality of videos, as estimated by humans. While it is well understood that motion affects both human attention and coding quality, this relationship has only recently started gaining attention among the research community, when video quality assessment (VQA) is concerned. In this paper, the effect of calculating several objective measure features, related to video coding artifacts, separately for salient motion and other regions of the frames of the sequence is examined. In addition, we propose a new scheme for quality assessment of coded video streams, which takes into account salient motion. Standardized procedure has been used to calculate the Mean Opinion Score (MOS), based on experiments conducted with a group of non-expert observers viewing standard definition (SD) sequences. MOS measurements were taken for nine different SD sequences, coded using MPEG-2 at five different bit-rates. Eighteen different published approaches related to measuring the amount of coding artifacts objectively on a single-frame basis were implemented. Additional features describing the intensity of salient motion in the frames, as well as the intensity of coding artifacts in the salient motion regions were proposed. Automatic feature selection was performed to determine the subset of features most correlated to video quality. The results show that salient-motion-related features enhance prediction and indicate that the presence of blocking effect artifacts and blurring in the salient regions and variance and intensity of temporal changes in non-salient regions influence the perceived video quality. PMID:20876020
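
    The automatic feature selection step described above can be illustrated by ranking candidate objective features by the strength of their correlation with MOS. This sketch uses plain Pearson correlation on made-up feature names; the paper's actual selection procedure and feature set are richer.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences
    (assumes neither sequence is constant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def select_features(feature_table, mos, k=2):
    """Rank candidate features by |correlation| with MOS and keep the top k.
    `feature_table` maps a feature name to its per-sequence values."""
    ranked = sorted(feature_table,
                    key=lambda name: -abs(pearson(feature_table[name], mos)))
    return ranked[:k]
```

    Features surviving this filter (here, the ones most predictive of perceived quality) would then feed the final quality-prediction model.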

  16. Automatic graphene transfer system for improved material quality and efficiency

    PubMed Central

    Boscá, Alberto; Pedrós, Jorge; Martínez, Javier; Palacios, Tomás; Calle, Fernando

    2016-01-01

    In most applications based on chemical vapor deposition (CVD) graphene, the transfer from the growth to the target substrate is a critical step for the final device performance. Manual procedures are time consuming and depend on handling skills, whereas existing automatic roll-to-roll methods work well for flexible substrates but tend to induce mechanical damage in rigid ones. A new system that automatically transfers CVD graphene to an arbitrary target substrate has been developed. The process is based on the all-fluidic manipulation of the graphene to avoid mechanical damage, strain and contamination, and on the combination of capillary action and electrostatic repulsion between the graphene and its container to ensure a centered sample on top of the target substrate. The improved carrier mobility and yield of the automatically transferred graphene, as compared to that manually transferred, is demonstrated by the optical and electrical characterization of field-effect transistors fabricated on both materials. In particular, 70% higher mobility values, with a 30% decrease in the unintentional doping and a 10% strain reduction are achieved. The system has been developed for lab-scale transfer and proved to be scalable for industrial applications. PMID:26860260

  17. Automatic graphene transfer system for improved material quality and efficiency

    NASA Astrophysics Data System (ADS)

    Boscá, Alberto; Pedrós, Jorge; Martínez, Javier; Palacios, Tomás; Calle, Fernando

    2016-02-01

    In most applications based on chemical vapor deposition (CVD) graphene, the transfer from the growth to the target substrate is a critical step for the final device performance. Manual procedures are time consuming and depend on handling skills, whereas existing automatic roll-to-roll methods work well for flexible substrates but tend to induce mechanical damage in rigid ones. A new system that automatically transfers CVD graphene to an arbitrary target substrate has been developed. The process is based on the all-fluidic manipulation of the graphene to avoid mechanical damage, strain and contamination, and on the combination of capillary action and electrostatic repulsion between the graphene and its container to ensure a centered sample on top of the target substrate. The improved carrier mobility and yield of the automatically transferred graphene, as compared to that manually transferred, is demonstrated by the optical and electrical characterization of field-effect transistors fabricated on both materials. In particular, 70% higher mobility values, with a 30% decrease in the unintentional doping and a 10% strain reduction are achieved. The system has been developed for lab-scale transfer and proved to be scalable for industrial applications.

  18. Automatic graphene transfer system for improved material quality and efficiency.

    PubMed

    Boscá, Alberto; Pedrós, Jorge; Martínez, Javier; Palacios, Tomás; Calle, Fernando

    2016-01-01

    In most applications based on chemical vapor deposition (CVD) graphene, the transfer from the growth to the target substrate is a critical step for the final device performance. Manual procedures are time consuming and depend on handling skills, whereas existing automatic roll-to-roll methods work well for flexible substrates but tend to induce mechanical damage in rigid ones. A new system that automatically transfers CVD graphene to an arbitrary target substrate has been developed. The process is based on the all-fluidic manipulation of the graphene to avoid mechanical damage, strain and contamination, and on the combination of capillary action and electrostatic repulsion between the graphene and its container to ensure a centered sample on top of the target substrate. The improved carrier mobility and yield of the automatically transferred graphene, as compared to that manually transferred, is demonstrated by the optical and electrical characterization of field-effect transistors fabricated on both materials. In particular, 70% higher mobility values, with a 30% decrease in the unintentional doping and a 10% strain reduction are achieved. The system has been developed for lab-scale transfer and proved to be scalable for industrial applications. PMID:26860260

  19. SU-E-J-155: Automatic Quantitative Decision Making Metric for 4DCT Image Quality

    SciTech Connect

    Kiely, J Blanco; Olszanski, A; Both, S; White, B; Low, D

    2015-06-15

Purpose: To develop a quantitative decision making metric for automatically detecting irregular breathing using a large patient population that received phase-sorted 4DCT. Methods: This study employed two patient cohorts. Cohort#1 contained 256 patients who received a phase-sorted 4DCT. Cohort#2 contained 86 patients who received three weekly phase-sorted 4DCT scans. A previously published technique used a single abdominal surrogate to calculate the ratio of extreme inhalation tidal volume to normal inhalation tidal volume, referred to as the κ metric. Since a single surrogate is standard for phase-sorted 4DCT in radiation oncology clinical practice, tidal volume was not quantified. Without tidal volume, the absolute κ metric could not be determined, so a relative κ (κrel) metric was defined based on the measured surrogate amplitude instead of tidal volume. Receiver operator characteristic (ROC) curves were used to quantitatively determine the optimal cutoff value (jk) and efficiency cutoff value (τk) of κrel to automatically identify irregular breathing that would reduce the image quality of phase-sorted 4DCT. Discriminatory accuracy (area under the ROC curve) of κrel was calculated by a trapezoidal numeric integration technique. Results: The discriminatory accuracy of κrel was found to be 0.746. The key values of jk and τk were calculated to be 1.45 and 1.72, respectively. For values of κrel such that jk≤κrel≤τk, the decision to reacquire the 4DCT would be at the discretion of the physician. This accounted for only 11.9% of the patients in this study. The magnitude of κrel held consistent over 3 weeks for 73% of the patients in cohort#2. Conclusion: The decision making metric, κrel, was shown to be an accurate classifier of irregular breathing patients in a large patient population. This work provided an automatic quantitative decision making metric to quickly and accurately assess the extent to which irregular breathing is occurring during phase
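As a rough illustration of the ROC analysis described above, the sketch below computes a trapezoidal AUC and picks a cutoff via Youden's J statistic for a hypothetical set of κrel values. The data, helper names, and cutoff rule are all invented for illustration; the paper's jk/τk values come from its own patient cohorts.

```python
import numpy as np

def roc_curve_points(scores, labels):
    """Sweep a threshold over the scores and collect (thresholds, FPR, TPR)."""
    thresholds = np.sort(np.unique(scores))[::-1]
    pos = labels.sum()
    neg = len(labels) - pos
    fpr, tpr = [0.0], [0.0]
    for t in thresholds:
        pred = scores >= t
        tpr.append((pred & (labels == 1)).sum() / pos)
        fpr.append((pred & (labels == 0)).sum() / neg)
    return thresholds, np.array(fpr), np.array(tpr)

# Hypothetical kappa_rel values; label 1 marks irregular breathing.
kappa_rel = np.array([2.1, 1.9, 1.8, 1.6, 1.5, 1.4, 1.3, 1.2, 1.1, 1.0])
labels = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])

thresholds, fpr, tpr = roc_curve_points(kappa_rel, labels)
# Trapezoidal numeric integration of the ROC curve, as in the abstract.
auc = float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0))
# Youden's J statistic (TPR - FPR) is one common way to pick a cutoff.
cutoff = thresholds[np.argmax((tpr - fpr)[1:])]
```

With these toy numbers the AUC is about 0.96 and the selected cutoff separates the bulk of the "irregular" scores from the rest.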

  20. Data Quality Assessment for Maritime Situation Awareness

    NASA Astrophysics Data System (ADS)

    Iphar, C.; Napoli, A.; Ray, C.

    2015-08-01

    The Automatic Identification System (AIS) initially designed to ensure maritime security through continuous position reports has been progressively used for many extended objectives. In particular it supports a global monitoring of the maritime domain for various purposes like safety and security but also traffic management, logistics or protection of strategic areas, etc. In this monitoring, data errors, misuse, irregular behaviours at sea, malfeasance mechanisms and bad navigation practices have inevitably emerged either by inattentiveness or voluntary actions in order to circumvent, alter or exploit such a system in the interests of offenders. This paper introduces the AIS system and presents vulnerabilities and data quality assessment for decision making in maritime situational awareness cases. The principles of a novel methodological approach for modelling, analysing and detecting these data errors and falsification are introduced.

  1. Assessment of automatic ligand building in ARP/wARP

    SciTech Connect

Evrard, Guillaume X.; Langer, Gerrit G.; Lamzin, Victor S.

    2007-01-01

    The performance of the ligand-building module of the ARP/wARP software suite is assessed through a large-scale test on known protein–ligand complexes. The results provide a detailed benchmark and guidelines for future improvements. The efficiency of the ligand-building module of ARP/wARP version 6.1 has been assessed through extensive tests on a large variety of protein–ligand complexes from the PDB, as available from the Uppsala Electron Density Server. Ligand building in ARP/wARP involves two main steps: automatic identification of the location of the ligand and the actual construction of its atomic model. The first step is most successful for large ligands. The second step, ligand construction, is more powerful with X-ray data at high resolution and ligands of small to medium size. Both steps are successful for ligands with low to moderate atomic displacement parameters. The results highlight the strengths and weaknesses of both the method of ligand building and the large-scale validation procedure and help to identify means of further improvement.

  2. A routine quality assurance test for CT automatic exposure control systems.

    PubMed

    Iball, Gareth R; Moore, Alexis C; Crawford, Elizabeth J

    2016-01-01

    The study purpose was to develop and validate a quality assurance test for CT automatic exposure control (AEC) systems based on a set of nested polymethylmethacrylate CTDI phantoms. The test phantom was created by offsetting the 16 cm head phantom within the 32 cm body annulus, thus creating a three part phantom. This was scanned at all acceptance, routine, and some nonroutine quality assurance visits over a period of 45 months, resulting in 115 separate AEC tests on scanners from four manufacturers. For each scan the longitudinal mA modulation pattern was generated and measurements of image noise were made in two annular regions of interest. The scanner displayed CTDIvol and DLP were also recorded. The impact of a range of AEC configurations on dose and image quality were assessed at acceptance testing. For systems that were tested more than once, the percentage of CTDIvol values exceeding 5%, 10%, and 15% deviation from baseline was 23.4%, 12.6%, and 8.1% respectively. Similarly, for the image noise data, deviations greater than 2%, 5%, and 10% from baseline were 26.5%, 5.9%, and 2%, respectively. The majority of CTDIvol and noise deviations greater than 15% and 5%, respectively, could be explained by incorrect phantom setup or protocol selection. Barring these results, CTDIvol deviations of greater than 15% from baseline were found in 0.9% of tests and noise deviations greater than 5% from baseline were found in 1% of tests. The phantom was shown to be sensitive to changes in AEC setup, including the use of 3D, longitudinal or rotational tube current modulation. This test methodology allows for continuing performance assessment of CT AEC systems, and we recommend that this test should become part of routine CT quality assurance programs. Tolerances of ± 15% for CTDIvol and ± 5% for image noise relative to baseline values should be used. PMID:27455490
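The tolerance check recommended in the conclusion can be sketched as a simple baseline-deviation test. The function names and measurement values below are hypothetical; only the ±15% CTDIvol and ±5% noise tolerances come from the abstract.

```python
def percent_deviation(measured, baseline):
    """Signed percentage deviation of a measurement from its baseline."""
    return 100.0 * (measured - baseline) / baseline

def aec_qa_check(ctdi_vol, ctdi_baseline, noise, noise_baseline,
                 ctdi_tol=15.0, noise_tol=5.0):
    """Apply the recommended tolerances: +/-15% for CTDIvol, +/-5% for noise."""
    report = {
        "ctdi_dev_pct": percent_deviation(ctdi_vol, ctdi_baseline),
        "noise_dev_pct": percent_deviation(noise, noise_baseline),
    }
    report["pass"] = (abs(report["ctdi_dev_pct"]) <= ctdi_tol
                      and abs(report["noise_dev_pct"]) <= noise_tol)
    return report

# Hypothetical weekly measurement compared against acceptance-test baselines:
# a 5% CTDIvol drift and a ~2.9% noise drift, both within tolerance.
report = aec_qa_check(ctdi_vol=12.6, ctdi_baseline=12.0,
                      noise=10.8, noise_baseline=10.5)
```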

  3. EQA: Educational Quality Assessment. Commentary.

    ERIC Educational Resources Information Center

    Pennsylvania State Dept. of Education, Harrisburg. Div. of Educational Testing and Evaluation.

    In response to a prevailing demand for better quality education in public schools in Pennsylvania, procedures were developed to measure the adequacy and the efficiency of the educational programs in public schools. The objectives currently assessed are in the following areas: communication skills (reading and writing), mathematics, self-esteem,…

  4. Quality assessment for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2014-11-01

Image quality assessment is an essential value-judgement approach for many applications. Multi- and hyperspectral imaging involves more evaluation criteria than grey-scale or RGB imaging, so its image quality assessment must cover a wider range of factors. This paper presents an integrated spectral imaging quality assessment project in which the spectral, radiometric and spatial statistical behaviour of three hyperspectral imagers is jointly evaluated. The spectral response function is derived from discrete-illumination images, and spectral performance is deduced from its FWHM and spectral excursion values. The radiometric response of the different spectral channels, under both on-ground and airborne imaging conditions, is judged by SNR computation based on local RMS extraction and statistics. The spatial response of each instrument is evaluated by MTF computation using the slanted-edge analysis method. This pioneering systematic work in hyperspectral imaging quality assessment was carried out with the help of several domestic collaborating institutions; it is significant for the development of on-ground and in-orbit instrument performance evaluation techniques, and it also serves as a reference for index demonstration and design optimization in instrument development.
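The SNR computation mentioned above (local RMS extraction and statistics) can be sketched roughly as follows. The block size, the median pooling, and the synthetic flat-field frame are assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np

def local_snr(image, block=8):
    """SNR as mean signal over RMS noise, pooled over local blocks.

    Each block's RMS noise is estimated by its pixel standard deviation;
    taking the median over blocks damps the influence of blocks that
    straddle scene structure.
    """
    h, w = image.shape
    snrs = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            roi = image[i:i + block, j:j + block]
            noise = roi.std()
            if noise > 0:
                snrs.append(roi.mean() / noise)
    return float(np.median(snrs))

# Synthetic flat-field frame: constant radiance plus Gaussian read noise,
# so the expected SNR is roughly 100 / 2 = 50.
rng = np.random.default_rng(0)
frame = 100.0 + rng.normal(0.0, 2.0, size=(64, 64))
snr = local_snr(frame)
```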

  5. Quality assessment tools add value.

    PubMed

    Paul, L

    1996-10-01

    The rapid evolution of the health care marketplace can be expected to continue as we move closer to the 21st Century. Externally-imposed pressures for cost reduction will increasingly be accompanied by pressure within health care organizations as risk-sharing reimbursement arrangements become more commonplace. Competitive advantage will be available to those organizations that can demonstrate objective value as defined by the cost-quality equation. The tools an organization chooses to perform quality assessment will be an important factor in its ability to demonstrate such value. Traditional quality assurance will in all likelihood continue, but the extent to which quality improvement activities are adopted by the culture of an organization may determine its ability to provide objective evidence of better health status outcomes. PMID:10162486

  6. Assessing risks to ecosystem quality

    SciTech Connect

    Barnthouse, L.W.

    1995-12-31

Ecosystems are not organisms. Because ecosystems do not reproduce, grow old or sick, and die, the term "ecosystem health" is somewhat misleading and perhaps should not be used. A more useful concept is "ecosystem quality," which denotes a set of desirable ecosystem characteristics defined in terms of species composition, productivity, size/condition of specific populations, or other measurable properties. The desired quality of an ecosystem may be pristine, as in a nature preserve, or highly altered by man, as in a managed forest or navigational waterway. "Sustainable development" implies that human activities that influence ecosystem quality should be managed so that high-quality ecosystems are maintained for future generations. In sustainability-based environmental management, the focus is on maintaining or improving ecosystem quality, not on restricting discharges or requiring particular waste treatment technologies. This approach requires management of chemical impacts to be integrated with management of other sources of stress such as erosion, eutrophication, and direct human exploitation. Environmental scientists must (1) work with decision makers and the public to define ecosystem quality goals, (2) develop corresponding measures of ecosystem quality, (3) diagnose causes for departures from desired states, and (4) recommend appropriate restoration actions, if necessary. Environmental toxicology and chemical risk assessment are necessary for implementing the above framework, but they are clearly not sufficient. This paper reviews the state of the science relevant to sustaining the quality of aquatic ecosystems. Using the specific example of a reservoir in eastern Tennessee, the paper attempts to define roles for ecotoxicology and risk assessment in each step of the management process.

  7. Automatic orbital GTAW welding: Highest quality welds for tomorrow's high-performance systems

    NASA Technical Reports Server (NTRS)

    Henon, B. K.

    1985-01-01

Automatic orbital gas tungsten arc welding (GTAW), or TIG welding, is certain to play an increasingly prominent role in tomorrow's technology. The welds are of the highest quality, and the repeatability of automatic welding is vastly superior to that of manual welding. Since less heat is applied to the weld during automatic welding than during manual welding, there is less change in the metallurgical properties of the parent material. The accurate control and cleanliness of the automatic GTAW process make it highly suitable for welding the more exotic and expensive materials now widely used in the aerospace and hydrospace industries. Titanium, stainless steel, Inconel and Incoloy, as well as aluminum, can all be welded to the highest quality specifications automatically. Automatic orbital GTAW equipment is available for the fusion butt welding of tube-to-tube joints as well as tube-to-autobuttweld fittings. The same equipment can also be used for the fusion butt welding of pipe up to 6 inches in diameter with a wall thickness of up to 0.154 inches.

  8. SIMULATING LOCAL DENSE AREAS USING PMMA TO ASSESS AUTOMATIC EXPOSURE CONTROL IN DIGITAL MAMMOGRAPHY.

    PubMed

    Bouwman, R W; Binst, J; Dance, D R; Young, K C; Broeders, M J M; den Heeten, G J; Veldkamp, W J H; Bosmans, H; van Engen, R E

    2016-06-01

    Current digital mammography (DM) X-ray systems are equipped with advanced automatic exposure control (AEC) systems, which determine the exposure factors depending on breast composition. In the supplement of the European guidelines for quality assurance in breast cancer screening and diagnosis, a phantom-based test is included to evaluate the AEC response to local dense areas in terms of signal-to-noise ratio (SNR). This study evaluates the proposed test in terms of SNR and dose for four DM systems. The glandular fraction represented by the local dense area was assessed by analytic calculations. It was found that the proposed test simulates adipose to fully glandular breast compositions in attenuation. The doses associated with the phantoms were found to match well with the patient dose distribution. In conclusion, after some small adaptations, the test is valuable for the assessment of the AEC performance in terms of both SNR and dose. PMID:26977073

  9. Network Design and Quality Checks in Automatic Orientation of Close-Range Photogrammetric Blocks

    PubMed Central

    Dall’Asta, Elisa; Thoeni, Klaus; Santise, Marina; Forlani, Gianfranco; Giacomini, Anna; Roncella, Riccardo

    2015-01-01

    Due to the recent improvements of automatic measurement procedures in photogrammetry, multi-view 3D reconstruction technologies are becoming a favourite survey tool. Rapidly widening structure-from-motion (SfM) software packages offer significantly easier image processing workflows than traditional photogrammetry packages. However, while most orientation and surface reconstruction strategies will almost always succeed in any given task, estimating the quality of the result is, to some extent, still an open issue. An assessment of the precision and reliability of block orientation is necessary and should be included in every processing pipeline. Such a need was clearly felt from the results of close-range photogrammetric surveys of in situ full-scale and laboratory-scale experiments. In order to study the impact of the block control and the camera network design on the block orientation accuracy, a series of Monte Carlo simulations was performed. Two image block configurations were investigated: a single pseudo-normal strip and a circular highly-convergent block. The influence of surveying and data processing choices, such as the number and accuracy of the ground control points, autofocus and camera calibration was investigated. The research highlights the most significant aspects and processes to be taken into account for adequate in situ and laboratory surveys, when modern SfM software packages are used, and evaluates their effect on the quality of the results of the surface reconstruction. PMID:25855036

  10. Network design and quality checks in automatic orientation of close-range photogrammetric blocks.

    PubMed

    Dall'Asta, Elisa; Thoeni, Klaus; Santise, Marina; Forlani, Gianfranco; Giacomini, Anna; Roncella, Riccardo

    2015-01-01

    Due to the recent improvements of automatic measurement procedures in photogrammetry, multi-view 3D reconstruction technologies are becoming a favourite survey tool. Rapidly widening structure-from-motion (SfM) software packages offer significantly easier image processing workflows than traditional photogrammetry packages. However, while most orientation and surface reconstruction strategies will almost always succeed in any given task, estimating the quality of the result is, to some extent, still an open issue. An assessment of the precision and reliability of block orientation is necessary and should be included in every processing pipeline. Such a need was clearly felt from the results of close-range photogrammetric surveys of in situ full-scale and laboratory-scale experiments. In order to study the impact of the block control and the camera network design on the block orientation accuracy, a series of Monte Carlo simulations was performed. Two image block configurations were investigated: a single pseudo-normal strip and a circular highly-convergent block. The influence of surveying and data processing choices, such as the number and accuracy of the ground control points, autofocus and camera calibration was investigated. The research highlights the most significant aspects and processes to be taken into account for adequate in situ and laboratory surveys, when modern SfM software packages are used, and evaluates their effect on the quality of the results of the surface reconstruction. PMID:25855036

  11. Blind image quality assessment through anisotropy.

    PubMed

    Gabarda, Salvador; Cristóbal, Gabriel

    2007-12-01

    We describe an innovative methodology for determining the quality of digital images. The method is based on measuring the variance of the expected entropy of a given image upon a set of predefined directions. Entropy can be calculated on a local basis by using a spatial/spatial-frequency distribution as an approximation for a probability density function. The generalized Rényi entropy and the normalized pseudo-Wigner distribution (PWD) have been selected for this purpose. As a consequence, a pixel-by-pixel entropy value can be calculated, and therefore entropy histograms can be generated as well. The variance of the expected entropy is measured as a function of the directionality, and it has been taken as an anisotropy indicator. For this purpose, directional selectivity can be attained by using an oriented 1-D PWD implementation. Our main purpose is to show how such an anisotropy measure can be used as a metric to assess both the fidelity and quality of images. Experimental results show that an index such as this presents some desirable features that resemble those from an ideal image quality function, constituting a suitable quality index for natural images. Namely, in-focus, noise-free natural images have shown a maximum of this metric in comparison with other degraded, blurred, or noisy versions. This result provides a way of identifying in-focus, noise-free images from other degraded versions, allowing an automatic and nonreference classification of images according to their relative quality. It is also shown that the new measure is well correlated with classical reference metrics such as the peak signal-to-noise ratio. PMID:18059913
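A heavily simplified stand-in for this anisotropy measure can be sketched with plain directional pixel differences instead of the oriented pseudo-Wigner/Rényi entropy machinery the paper actually uses; everything below (direction set, histogram entropy, test images) is illustrative only.

```python
import numpy as np

def directional_entropy(image, dy, dx, bins=32):
    """Shannon entropy of pixel differences taken along direction (dy, dx)."""
    diff = np.roll(image, (dy, dx), axis=(0, 1)) - image
    hist, _ = np.histogram(diff, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def anisotropy_index(image):
    """Variance of directional entropies over four orientations.

    Oriented structure concentrates entropy in some directions and not
    others, giving a high variance; isotropic degradation (e.g. noise)
    gives a low one, echoing the paper's quality indicator.
    """
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    entropies = [directional_entropy(image, dy, dx) for dy, dx in directions]
    return float(np.var(entropies))

# Oriented structure (vertical stripes) versus isotropic Gaussian noise.
stripes = np.tile(np.sin(np.linspace(0.0, 8.0 * np.pi, 64)), (64, 1))
noise = np.random.default_rng(1).normal(size=(64, 64))
```

On these toy images the striped pattern scores a much higher anisotropy index than the pure-noise field, mirroring the paper's observation that in-focus, noise-free images maximize the metric.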

  12. Geometric assessment of image quality using digital image registration techniques

    NASA Technical Reports Server (NTRS)

    Tisdale, G. E.

    1976-01-01

    Image registration techniques were developed to perform a geometric quality assessment of multispectral and multitemporal image pairs. Based upon LANDSAT tapes, accuracies to a small fraction of a pixel were demonstrated. Because it is insensitive to the choice of registration areas, the technique is well suited to performance in an automatic system. It may be implemented at megapixel-per-second rates using a commercial minicomputer in combination with a special purpose digital preprocessor.

  13. APPLICATION OF AUTOMATIC DIFFERENTIATION FOR STUDYING THE SENSITIVITY OF NUMERICAL ADVECTION SCHEMES IN AIR QUALITY MODELS

    EPA Science Inventory

In any simulation model, knowing the sensitivity of the system to the model parameters is of utmost importance. As part of an effort to build a multiscale air quality modeling system for a high performance computing and communication (HPCC) environment, we are exploring an automat...

  14. External quality assessment: best practice.

    PubMed

James, David; Ames, Darren; Lopez, Berenice; Still, Rachel; Simpson, William; Twomey, Patrick

    2014-08-01

There is a requirement for accredited laboratories to participate in external quality assessment (EQA) schemes, but there is wide variation in understanding as to what is required of the laboratories and scheme providers in fulfilling this. This is not helped by the diversity of language used in connection with EQA: proficiency testing (PT), EQA schemes, and EQA programmes each have a different meaning and offering in the context of improving laboratory quality. We examine these differences, and identify which factors are important in supporting quality within a clinical laboratory and what should influence the choice of EQA programme. Equally important is how EQA samples are handled within the laboratory, and how the information provided by the EQA programme is used. EQA programmes are a key element of a laboratory's quality assurance framework, but laboratories should understand what their EQA programmes are capable of demonstrating, how they should be used within the laboratory, and how they support quality. EQA providers should be clear as to what type of programme they provide - PT, EQA scheme or EQA programme. PMID:24621574

  15. Quality of Life Effects of Automatic External Defibrillators in the Home: Results from the Home Automatic External Defibrillator Trial (HAT)

    PubMed Central

    Mark, Daniel B.; Anstrom, Kevin J.; McNulty, Steven E.; Flaker, Greg C.; Tonkin, Andrew M.; Smith, Warren M.; Toff, William D.; Dorian, Paul; Clapp-Channing, Nancy E.; Anderson, Jill; Johnson, George; Schron, Eleanor B.; Poole, Jeanne E.; Lee, Kerry L.; Bardy, Gust H.

    2010-01-01

    Background Public access automatic external defibrillators (AEDs) can save lives, but most deaths from out-of-hospital sudden cardiac arrest occur at home. The Home Automatic External Defibrillator Trial (HAT) found no survival advantage for adding a home AED to cardiopulmonary resuscitation (CPR) training for 7001 patients with a prior anterior wall myocardial infarction. Quality of life (QOL) outcomes for both the patient and spouse/companion were secondary endpoints. Methods A subset of 1007 study patients and their spouse/companions was randomly selected for ascertainment of QOL by structured interview at baseline and 12 and 24 months following enrollment. The primary QOL measures were the Medical Outcomes Study 36-Item Short-Form (SF-36) psychological well-being (reflecting anxiety and depression) and vitality (reflecting energy and fatigue) subscales. Results For patients and spouse/companions, the psychological well-being and vitality scales did not differ significantly between those randomly assigned an AED plus CPR training and controls who received CPR training only. None of the other QOL measures collected showed a clinically and statistically significant difference between treatment groups. Patients in the AED group were more likely to report being extremely or quite a bit reassured by their treatment assignment. Spouse/companions in the AED group reported being less often nervous about the possibility of using AED/CPR treatment than those in the CPR group. Conclusions Adding access to a home AED to CPR training did not affect quality of life either for patients with a prior anterior myocardial infarction or their spouse/companion but did provide more reassurance to the patients without increasing anxiety for spouse/companions. PMID:20362722

  16. Orion Entry Handling Qualities Assessments

    NASA Technical Reports Server (NTRS)

    Bihari, B.; Tiggers, M.; Strahan, A.; Gonzalez, R.; Sullivan, K.; Stephens, J. P.; Hart, J.; Law, H., III; Bilimoria, K.; Bailey, R.

    2011-01-01

The Orion Command Module (CM) is a capsule designed to bring crew back from the International Space Station (ISS), the moon and beyond. The atmospheric entry portion of the flight is designed to be flown in autopilot mode for nominal situations. However, there exists the possibility for the crew to take over manual control in off-nominal situations. In these instances, the spacecraft must meet specific handling qualities criteria. To address these criteria, two separate assessments of the Orion CM's entry Handling Qualities (HQ) were conducted at NASA's Johnson Space Center (JSC) using the Cooper-Harper scale (Cooper & Harper, 1969). These assessments were conducted in the summers of 2008 and 2010 using the Advanced NASA Technology Architecture for Exploration Studies (ANTARES) six-degree-of-freedom, high-fidelity Guidance, Navigation, and Control (GN&C) simulation. This paper will address the specifics of the handling qualities criteria, the vehicle configuration, the scenarios flown, the simulation background and setup, crew interfaces and displays, piloting techniques, ratings and crew comments, pre- and post-flight briefings, lessons learned and changes made to improve the overall system performance. The data collection tools, methods, data reduction and output reports will also be discussed. The objective of the 2008 entry HQ assessment was to evaluate the handling qualities of the CM during a lunar skip return. A lunar skip entry case was selected because it was considered the most demanding of all bank control scenarios. Even though skip entry is not planned to be flown manually, it was hypothesized that if a pilot could fly the harder skip entry case, then they could also fly a simpler loads-managed or ballistic (constant bank rate command) entry scenario. In addition, with the evaluation set-up of multiple tasks within the entry case, handling qualities ratings collected in the evaluation could be used to assess other scenarios such as the constant bank angle

  17. Automatic Coding of Short Text Responses via Clustering in Educational Assessment

    ERIC Educational Resources Information Center

    Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank

    2016-01-01

    Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…

  18. Fovea based image quality assessment

    NASA Astrophysics Data System (ADS)

    Guo, Anan; Zhao, Debin; Liu, Shaohui; Cao, Guangyao

    2010-07-01

Humans are the ultimate receivers of the visual information contained in an image, so a reasonable method of image quality assessment (IQA) should follow the properties of the human visual system (HVS). In recent years, IQA methods based on HVS models have slowly been replacing classical schemes such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Structural similarity (SSIM), regarded as one of the most popular HVS-based methods of full-reference IQA, offers clear performance improvements over traditional metrics; however, it does not perform well when the image structure is seriously destroyed or masked by noise. In this paper, a new, efficient fovea-based structural similarity image quality assessment (FSSIM) is proposed. It adaptively enlarges the distortions at the attended positions and changes the relative importance of the three components in SSIM. FSSIM predicts the quality of an image in three steps. First, it computes the luminance, contrast and structure comparison terms; second, it computes a saliency map by extracting foveal information from the reference image according to the features of the HVS; third, it pools the above three terms according to the processed saliency map. Finally, the widely used LIVE IQA database is employed to evaluate the performance of FSSIM. Experimental results indicate that the consistency and correlation between FSSIM and the mean opinion score (MOS) are both clearly better than those of SSIM and PSNR.
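The saliency-weighted pooling idea in the third step above can be sketched as follows. The tiny maps and single-map pooling are hypothetical simplifications of FSSIM's actual three-component pooling.

```python
import numpy as np

def saliency_weighted_pool(quality_map, saliency_map):
    """Pool a local quality map using normalized saliency weights.

    Plain SSIM uses mean pooling; weighting by saliency instead lets
    distortions in attended (foveated) regions dominate the final index.
    """
    weights = saliency_map / saliency_map.sum()
    return float((quality_map * weights).sum())

# Hypothetical 2x2 local SSIM map: the distortion (0.3) falls in the most
# salient patch, so the weighted index drops well below the plain mean.
ssim_map = np.array([[0.9, 0.9],
                     [0.9, 0.3]])
saliency = np.array([[1.0, 1.0],
                     [1.0, 7.0]])

plain = float(ssim_map.mean())                         # 0.75
weighted = saliency_weighted_pool(ssim_map, saliency)  # 0.48
```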

  19. Assessing Children's Home Language Environments Using Automatic Speech Recognition Technology

    ERIC Educational Resources Information Center

    Greenwood, Charles R.; Thiemann-Bourque, Kathy; Walker, Dale; Buzhardt, Jay; Gilkerson, Jill

    2011-01-01

The purpose of this research was to replicate and extend some of the findings of Hart and Risley using automatic speech processing instead of human transcription of language samples. The long-term goal of this work is to make this approach to speech processing usable by researchers and clinicians working on a daily basis with families and…

  20. Automatically Assessing Lexical Sophistication: Indices, Tools, Findings, and Application

    ERIC Educational Resources Information Center

    Kyle, Kristopher; Crossley, Scott A.

    2015-01-01

    This study explores the construct of lexical sophistication and its applications for measuring second language lexical and speaking proficiency. In doing so, the study introduces the Tool for the Automatic Analysis of LExical Sophistication (TAALES), which calculates text scores for 135 classic and newly developed lexical indices related to word…

  1. Carbon Nanotube Material Quality Assessment

    NASA Technical Reports Server (NTRS)

Yowell, Leonard; Arepalli, Sivaram; Sosa, Edward; Nikolaev, Pavel; Gorelik, Olga

    2006-01-01

The nanomaterial activities at NASA Johnson Space Center focus on carbon nanotube production, characterization and their applications for aerospace systems. Single-wall carbon nanotubes are produced by arc and laser methods. Characterization of the nanotube material is performed using the NASA JSC protocol, developed by combining the analytical techniques of SEM, TEM, UV-VIS-NIR absorption, Raman, and TGA. A possible addition of other techniques, such as XPS and ICP, to the existing protocol will be discussed. Changes in the quality of the material collected in different regions of the arc and laser production chambers are assessed using the original JSC protocol. The observed variations indicate different growth conditions in different regions of the production chambers.

  2. Assessing the Need for Referral in Automatic Diabetic Retinopathy Detection.

    PubMed

    Pires, Ramon; Jelinek, Herbert F; Wainer, Jacques; Goldenstein, Siome; Valle, Eduardo; Rocha, Anderson

    2013-12-01

    Emerging technologies in health care aim at reducing unnecessary visits to medical specialists, minimizing overall cost of treatment and optimizing the number of patients seen by each doctor. This paper explores image recognition for the screening of diabetic retinopathy, a complication of diabetes that can lead to blindness if not discovered in its initial stages. Many previous reports on DR imaging focus on the segmentation of the retinal image, on quality assessment, and on the analysis of presence of DR-related lesions. Although this study has advanced the detection of individual DR lesions from retinal images, the simple presence of any lesion is not enough to decide on the need for referral of a patient. Deciding if a patient should be referred to a doctor is an essential requirement for the deployment of an automated screening tool for rural and remote communities. We introduce an algorithm to make that decision based on the fusion of results by metaclassification. The input of the metaclassifier is the output of several lesion detectors, creating a powerful high-level feature representation for the retinal images. We explore alternatives for the bag-of-visual-words (BoVW)-based lesion detectors, which critically depends on the choices of coding and pooling the low-level local descriptors. The final classification approach achieved an area under the curve of 93.4% using SOFT-MAX BoVW (soft-assignment coding/max pooling), without the need of normalizing the high-level feature vector of scores. PMID:23963192
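The metaclassification step, fusing the scores of several lesion detectors into one referral decision, can be sketched with a plain logistic-regression metaclassifier. The detector scores, labels, and training settings below are invented for illustration; the paper's metaclassifier and BoVW detectors are more elaborate.

```python
import numpy as np

def train_metaclassifier(scores, labels, lr=0.5, epochs=2000):
    """Fit a logistic-regression metaclassifier by gradient descent.

    Each row of `scores` holds the outputs of the individual lesion
    detectors for one image; the fitted weights fuse them into a single
    refer / do-not-refer score.
    """
    X = np.hstack([scores, np.ones((len(scores), 1))])  # append bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))                # sigmoid
        w -= lr * X.T @ (p - labels) / len(labels)      # mean log-loss gradient
    return w

def predict_referral(scores, w):
    """Threshold the fused probability at 0.5 into a 0/1 referral decision."""
    X = np.hstack([scores, np.ones((len(scores), 1))])
    return (1.0 / (1.0 + np.exp(-X @ w)) >= 0.5).astype(int)

# Invented detector outputs (e.g. microaneurysm, exudate, hemorrhage
# scores) with referral labels for six images.
scores = np.array([[0.9, 0.8, 0.7],
                   [0.8, 0.9, 0.6],
                   [0.7, 0.6, 0.9],
                   [0.1, 0.2, 0.1],
                   [0.2, 0.1, 0.3],
                   [0.1, 0.3, 0.2]])
labels = np.array([1, 1, 1, 0, 0, 0])

w = train_metaclassifier(scores, labels)
preds = predict_referral(scores, w)
```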

  3. Towards Quality Assessment in an EFL Programme

    ERIC Educational Resources Information Center

    Ali, Holi Ibrahim Holi; Al Ajmi, Ahmed Ali Saleh

    2013-01-01

    Assessment is central in education and the teaching-learning process. This study attempts to explore the perspectives and views about quality assessment among teachers of English as a Foreign Language (EFL), and to find ways of promoting quality assessment. Quantitative methodology was used to collect data. To answer the study questions, a…

  4. Perceptual Quality Assessment for Multi-Exposure Image Fusion.

    PubMed

    Ma, Kede; Zeng, Kai; Wang, Zhou

    2015-11-01

    Multi-exposure image fusion (MEF) is considered an effective quality enhancement technique widely adopted in consumer electronics, but little work has been dedicated to the perceptual quality assessment of multi-exposure fused images. In this paper, we first build an MEF database and carry out a subjective user study to evaluate the quality of images generated by different MEF algorithms. There are several useful findings. First, considerable agreement has been observed among human subjects on the quality of MEF images. Second, no single state-of-the-art MEF algorithm produces the best quality for all test images. Third, the existing objective quality models for general image fusion are very limited in predicting perceived quality of MEF images. Motivated by the lack of appropriate objective models, we propose a novel objective image quality assessment (IQA) algorithm for MEF images based on the principle of the structural similarity approach and a novel measure of patch structural consistency. Our experimental results on the subjective database show that the proposed model well correlates with subjective judgments and significantly outperforms the existing IQA models for general image fusion. Finally, we demonstrate the potential application of the proposed model by automatically tuning the parameters of MEF algorithms. PMID:26068317

  5. Healthcare quality maturity assessment model based on quality drivers.

    PubMed

    Ramadan, Nadia; Arafeh, Mazen

    2016-04-18

    Purpose - Healthcare providers differ in their readiness and maturity levels regarding quality and the application of quality management systems. The purpose of this paper is to serve as a useful quantitative quality maturity-level assessment tool for healthcare organizations. Design/methodology/approach - The model proposes five quality maturity levels (chaotic, primitive, structured, mature and proficient) based on six quality drivers: top management, people, operations, culture, quality focus and accreditation. Findings - Healthcare managers can apply the model to identify the status quo and quality shortcomings, and to evaluate ongoing progress. Practical implications - The model has been incorporated in an interactive Excel worksheet that visually displays the quality maturity-level risk meter. The tool has been applied successfully to local hospitals. Originality/value - The proposed six quality-driver scales appear to measure healthcare provider maturity levels on a single quality meter. PMID:27120510

  6. Students' Feedback Preferences: How Do Students React to Timely and Automatically Generated Assessment Feedback?

    ERIC Educational Resources Information Center

    Bayerlein, Leopold

    2014-01-01

    This study assesses whether or not undergraduate and postgraduate accounting students at an Australian university differentiate between timely feedback and extremely timely feedback, and whether or not the replacement of manually written formal assessment feedback with automatically generated feedback influences students' perception of…

  7. Assessment and Quality Social Studies

    ERIC Educational Resources Information Center

    Savage, Tom V.

    2003-01-01

    Those anonymous individuals who develop high-stakes tests by which educational quality is measured exercise great influence in defining educational quality. In this article, the author examines the impact of high-stakes testing on the welfare of the children and the quality of social studies instruction. He presents the benefits and drawbacks of…

  8. Automatic quality improvement reports in the intensive care unit: One step closer toward meaningful use

    PubMed Central

    Dziadzko, Mikhail A; Thongprayoon, Charat; Ahmed, Adil; Tiong, Ing C; Li, Man; Brown, Daniel R; Pickering, Brian W; Herasevich, Vitaly

    2016-01-01

    AIM: To examine the feasibility and validity of electronic generation of quality metrics in the intensive care unit (ICU). METHODS: This minimal-risk observational study was performed at an academic tertiary hospital. The Critical Care Independent Multidisciplinary Program at Mayo Clinic identified and defined 11 key quality metrics. These metrics were automatically calculated using ICU DataMart, a near-real-time copy of all ICU electronic medical record (EMR) data. The automatic report was compared with data from a comprehensive EMR review by a trained investigator. Data were collected for 93 randomly selected patients admitted to the ICU during April 2012 (10% of the admitted adult population). This study was approved by the Mayo Clinic Institutional Review Board. RESULTS: All types of variables needed for metric calculations were available for both manual and electronic abstraction, except information on the availability of free beds for patient-specific time frames. There was 100% agreement between electronic and manual data abstraction for ICU admission source, admission service, and discharge disposition. The agreement between electronic and manual abstraction of the times of ICU admission and discharge was 99% and 89%, respectively. The times of hospital admission and discharge were similar for the electronically and manually abstracted datasets. The specificity of the electronically generated report was 93% and 94% for invasive and non-invasive ventilation use in the ICU, with one false-positive result for each type of ventilation. The specificity for ICU and in-hospital mortality was 100%. Sensitivity was 100% for all metrics. CONCLUSION: Our study demonstrates excellent accuracy of electronically generated key ICU quality metrics, validating the feasibility of automatic metric generation. PMID:27152259
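
    The sensitivity and specificity figures above come from comparing the electronic report against manual abstraction treated as ground truth. A minimal sketch of that comparison, using toy values rather than the study's data:

```python
def sens_spec(electronic, manual):
    """Compare an automatic (electronic) binary metric against manual
    chart review, treating the manual abstraction as ground truth."""
    tp = sum(e and m for e, m in zip(electronic, manual))
    tn = sum((not e) and (not m) for e, m in zip(electronic, manual))
    fp = sum(e and (not m) for e, m in zip(electronic, manual))
    fn = sum((not e) and m for e, m in zip(electronic, manual))
    sensitivity = tp / (tp + fn)   # fraction of true cases detected
    specificity = tn / (tn + fp)   # fraction of non-cases correctly cleared
    return sensitivity, specificity

# Toy example (not the study's data): one false positive, no false negatives
manual     = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
electronic = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
se, sp = sens_spec(electronic, manual)
print(se, sp)  # 1.0 0.875
```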

  9. Towards the Real-Time Evaluation of Collaborative Activities: Integration of an Automatic Rater of Collaboration Quality in the Classroom from the Teacher's Perspective

    ERIC Educational Resources Information Center

    Chounta, Irene-Angelica; Avouris, Nikolaos

    2016-01-01

    This paper presents the integration of a real time evaluation method of collaboration quality in a monitoring application that supports teachers in class orchestration. The method is implemented as an automatic rater of collaboration quality and studied in a real time scenario of use. We argue that automatic and semi-automatic methods which…

  10. Automatic assessment of scintmammographic images using a novelty filter.

    PubMed Central

    Costa, M.; Moura, L.

    1995-01-01

    99mTc-sestamibi scintmammograms provide a powerful non-invasive means for detecting breast cancer at early stages. This paper describes an automatic method for detecting breast tumors in such mammograms. The proposed method not only detects tumors but also classifies non-tumor images as "normal" or "diffuse increased uptake" mammograms. The detection method makes use of Kohonen's "novelty filter". In this technique an orthogonal vector basis is created from a set of normal images. Test images presented to the detection method are described as a linear combination of the images in the vector basis. Assuming that the image basis is representative of normal patterns, there should be no major differences between a normal test image and its corresponding linear-combination image. However, if the test image presents an abnormal pattern, the "abnormalities" will show up as the difference between the original test image and the image built from the vector basis. In other words, the existing abnormality cannot be explained by the set of normal images and comes up as a "novelty". An important part of the proposed method is the set of steps taken to standardize images before they can be used as part of the vector basis. Standardization is the keystone of the method's success, as the novelty filter is very sensitive to changes in shape and alignment. PMID:8563342
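
    The projection-and-residual idea behind the novelty filter can be sketched in a few lines of linear algebra. This is an illustration under the assumption that images are flattened to vectors and the normal set spans a subspace; it is not the paper's code, and the standardization steps the paper emphasizes are omitted.

```python
import numpy as np

def novelty(test_img, normal_imgs):
    """Kohonen-style novelty filter: project a test image onto the
    subspace spanned by normal images; the residual is the 'novelty'.

    normal_imgs: (n, p) matrix, one flattened normal image per row."""
    # Orthonormal basis for the normal subspace (columns of Q)
    Q, _ = np.linalg.qr(normal_imgs.T)
    x = np.ravel(test_img).astype(float)
    projection = Q @ (Q.T @ x)   # best explanation by normal patterns
    return x - projection        # what the normal set cannot explain

rng = np.random.default_rng(1)
normals = rng.normal(size=(5, 64))             # 5 normal images, 64 pixels each
inside = 0.3 * normals[0] + 0.7 * normals[3]   # lies in the normal subspace
outside = rng.normal(size=64)                  # generic unrelated image

print(np.linalg.norm(novelty(inside, normals)))       # ~0: fully explained
print(np.linalg.norm(novelty(outside, normals)) > 1)  # True: large residual
```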

  11. Quality Metrics of Semi Automatic DTM from Large Format Digital Camera

    NASA Astrophysics Data System (ADS)

    Narendran, J.; Srinivas, P.; Udayalakshmi, M.; Muralikrishnan, S.

    2014-11-01

    High-resolution digital images from the Ultracam-D Large Format Digital Camera (LFDC) were used for near-automatic DTM generation. In the past, manual methods for DTM generation were used, which are time consuming and labour intensive. In this study, the LFDC was used in synergy with an accurate position and orientation system, image matching algorithms, distributed processing, and filtering techniques for near-automatic DTM generation. Traditionally, DTM accuracy is reported using check points collected from the field, which are limited in number, time consuming to collect, and costly. This paper discusses the reliability of the near-automatic DTM generated from Ultracam-D imagery for an operational project covering an area of nearly 600 sq. km, using 21,000 check points captured stereoscopically by experienced operators. The reliability of the DTM for the three study areas with different morphology is presented using this large number of stereo check points and parameters related to the statistical distribution of residuals, such as skewness, kurtosis, standard deviation, and linear error at the 90% confidence level. The residuals obtained for the three areas follow a normal distribution, in agreement with the majority of standards on positional accuracy. Quality metrics in terms of reliability were computed for the generated DTMs, and the tables and graphs show the potential of the Ultracam-D for a semi-automatic DTM generation process over different terrain types.
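
    The residual statistics the paper reports (skewness, kurtosis, standard deviation, LE90) can be computed as below. The sample values are synthetic, and defining LE90 as the 90th percentile of absolute residuals is one common convention, not necessarily the paper's exact formula.

```python
import numpy as np

def dtm_quality_metrics(residuals):
    """Summary statistics for DTM vertical accuracy from stereo
    check-point residuals (observed minus DTM height)."""
    r = np.asarray(residuals, dtype=float)
    mu, sd = r.mean(), r.std(ddof=1)
    z = (r - mu) / sd
    skewness = (z ** 3).mean()
    kurtosis = (z ** 4).mean() - 3.0      # excess kurtosis (0 for normal)
    le90 = np.percentile(np.abs(r), 90)   # linear error at 90% confidence
    return {"mean": mu, "std": sd, "skew": skewness,
            "kurtosis": kurtosis, "LE90": le90}

# Synthetic normally distributed residuals, same count as the paper's check points
rng = np.random.default_rng(2)
m = dtm_quality_metrics(rng.normal(0.0, 0.5, size=21000))
print(round(m["LE90"], 2))  # close to 1.645 * 0.5 ≈ 0.82 for normal residuals
```

    Near-zero skewness and excess kurtosis are what "residuals follow a normal distribution" looks like in these numbers.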

  12. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    NASA Astrophysics Data System (ADS)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods for the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists), it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either successes or failures. We then design three groups of features from the image data of the nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method, supervised classification, was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers were trained using the designed features and evaluated using leave-one-out cross-validation: linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC). Results show that LR performs worst among the four classifiers while the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
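
    The univariate box-whisker rule in the first method amounts to flagging feature values that fall outside the interquartile fences. A sketch with a hypothetical per-subject feature; the feature itself and the standard 1.5 multiplier are illustrative, not taken from the paper:

```python
import numpy as np

def iqr_outliers(values, k=1.5):
    """Univariate non-parametric failure detection: flag cases whose
    feature falls outside the fences Q1 - k*IQR and Q3 + k*IQR."""
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (v < lo) | (v > hi)

# Hypothetical per-subject feature (e.g., a segmented-volume statistic);
# the last two values mimic algorithm failures
feature = [10.1, 9.8, 10.4, 10.0, 9.9, 10.2, 10.3, 9.7, 3.2, 17.9]
print(iqr_outliers(feature).nonzero()[0])  # indices of flagged subjects
```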

  13. Assessing Quality in Home Visiting Programs

    ERIC Educational Resources Information Center

    Korfmacher, Jon; Laszewski, Audrey; Sparr, Mariel; Hammel, Jennifer

    2013-01-01

    Defining quality and designing a quality assessment measure for home visitation programs is a complex and multifaceted undertaking. This article summarizes the process used to create the Home Visitation Program Quality Rating Tool (HVPQRT) and identifies next steps for its development. The HVPQRT measures both structural and dynamic features of…

  14. SERI QC Solar Data Quality Assessment Software

    Energy Science and Technology Software Center (ESTSC)

    1994-12-31

    SERI QC is a mathematical software package that assesses the quality of solar radiation data. The SERI QC software is a function written in the C programming language. IT IS NOT A STANDALONE SOFTWARE APPLICATION. The user must write the calling application that requires quality assessment of solar data. The C function returns data quality flags to the calling program. A companion program, QCFIT, is a standalone Windows application that provides support files for the SERI QC function (data quality boundaries). The QCFIT software can also be used as an analytical tool for visualizing solar data quality independent of the SERI QC function.

  15. SERI QC Solar Data Quality Assessment Software

    SciTech Connect

    1994-12-31

    SERI QC is a mathematical software package that assesses the quality of solar radiation data. The SERI QC software is a function written in the C programming language. IT IS NOT A STANDALONE SOFTWARE APPLICATION. The user must write the calling application that requires quality assessment of solar data. The C function returns data quality flags to the calling program. A companion program, QCFIT, is a standalone Windows application that provides support files for the SERI QC function (data quality boundaries). The QCFIT software can also be used as an analytical tool for visualizing solar data quality independent of the SERI QC function.

  16. Automatic Severity Assessment of Dysarthria using State-Specific Vectors.

    PubMed

    Sriranjani, R; Umesh, S; Reddy, M Ramasubba

    2015-01-01

    In this paper, a novel approach to assessing the severity of dysarthria using the state-specific vector (SSV) of the phone-cluster adaptive training (phone-CAT) acoustic modeling technique is proposed. The dominant component of the SSV represents the actual pronunciation of a speaker. By comparing the dominant component for unimpaired speakers with that of each dysarthric speaker, a phone confusion matrix is formed. The diagonal elements of the matrix capture the number of correct pronunciations for each dysarthric speaker. As the degree of impairment increases, the number of phones correctly pronounced by the speaker decreases. Thus the trace of the confusion matrix can be used as an objective cue to assess different severity levels of dysarthria based on a threshold rule. Our proposed objective measure correlates with the standard Frenchay Dysarthria Assessment scores at 74% on the Nemours database. The measure also correlates with intelligibility scores at 82% on the Universal Access dysarthric speech database. PMID:25996705
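
    The trace-of-confusion-matrix cue can be illustrated with a toy two-phone example. The matrices and the sum-normalization are assumptions made for the sketch, not values or details from the paper:

```python
import numpy as np

def pronunciation_trace(confusion):
    """Normalized trace of a phone confusion matrix (rows: intended phone,
    columns: decoded phone). A higher value means more phones pronounced
    correctly, i.e. milder dysarthria under a threshold rule."""
    c = np.asarray(confusion, dtype=float)
    return np.trace(c) / c.sum()   # fraction of correctly pronounced phones

mild   = [[9, 1], [1, 9]]   # mostly on-diagonal: phones pronounced correctly
severe = [[4, 6], [7, 3]]   # heavy confusion between the two phones
print(pronunciation_trace(mild) > pronunciation_trace(severe))  # True
```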

  17. Assessing Mathematics Automatically Using Computer Algebra and the Internet

    ERIC Educational Resources Information Center

    Sangwin, Chris

    2004-01-01

    This paper reports some recent developments in mathematical computer-aided assessment which employs computer algebra to evaluate students' work using the Internet. Technical and educational issues raised by this use of computer algebra are addressed. Working examples from core calculus and algebra which have been used with first year university…

  18. Cell Processing Engineering for Regenerative Medicine : Noninvasive Cell Quality Estimation and Automatic Cell Processing.

    PubMed

    Takagi, Mutsumi

    2016-01-01

    Cell processing engineering for regenerative medicine, including automatic cell processing and noninvasive quality estimation of adherent mammalian cells, is reviewed. Automatic cell processing, which is necessary for the industrialization of regenerative medicine, is introduced. Cell quality, such as cell heterogeneity, should be estimated noninvasively before transplantation to the patient, because cultured cells are usually heterogeneous rather than homogeneous and most regenerative-medicine protocols are autologous. The differentiation level can be estimated by two-dimensional cell morphology analysis using a conventional phase-contrast microscope. The phase-shifting laser microscope (PLM) can determine the laser phase shift, caused by the laser transmitted through the cell, at every pixel in a view, and may be more noninvasive and more useful than the atomic force microscope or the digital holographic microscope. Noninvasive determination of the laser phase shift of a cell using a PLM was carried out to determine three-dimensional cell morphology and to estimate the cell-cycle phase of each adhesive cell and the mean proliferation activity of a cell population. Noninvasive discrimination of cancer cells from normal cells by measuring the phase shift was performed based on the difference in cytoskeleton density. Chemical analysis of the culture supernatant was also useful for estimating the differentiation level of a cell population. A probe beam, an infrared beam, and Raman spectroscopy are useful for diagnosing the viability, apoptosis, and differentiation of each adhesive cell. PMID:25373455

  19. Rendered virtual view image objective quality assessment

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Li, Xiangchun; Zhang, Yi; Peng, Kai

    2013-08-01

    Research on rendered virtual view image (RVVI) objective quality assessment is important for integrated imaging systems and image quality assessment (IQA). Traditional IQA algorithms cannot be applied directly at the system receiver side due to inter-view displacement and the absence of an original reference. This study proposes a block-based neighbor-reference (NbR) IQA framework for RVVI quality assessment, in which the neighbor views used for rendering are employed for quality assessment. A symphonious factor handling noise and inter-view displacement is defined and applied to evaluate the contribution of the obtained quality index in each block pair. A three-stage experiment scheme is also presented to test the proposed framework and evaluate its homogeneity performance in comparison with full-reference IQA. Experimental results show the proposed framework is useful for RVVI objective quality assessment at the system receiver side and for benchmarking different rendering algorithms.

  20. Automatic Assessment of Complex Task Performance in Games and Simulations. CRESST Report 775

    ERIC Educational Resources Information Center

    Iseli, Markus R.; Koenig, Alan D.; Lee, John J.; Wainess, Richard

    2010-01-01

    Assessment of complex task performance is crucial to evaluating personnel in critical job functions such as Navy damage control operations aboard ships. Games and simulations can be instrumental in this process, as they can present a broad range of complex scenarios without involving harm to people or property. However, "automatic" performance…

  1. Automatic limb identification and sleeping parameters assessment for pressure ulcer prevention.

    PubMed

    Baran Pouyan, Maziyar; Birjandtalab, Javad; Nourani, Mehrdad; Matthew Pompeo, M D

    2016-08-01

    Pressure ulcers (PUs) are common among vulnerable patients such as the elderly, the bedridden, and diabetics. PUs are very painful for patients and costly for hospitals and nursing homes. Assessment of sleeping parameters on at-risk limbs is critical for ulcer prevention, and an effective assessment depends on automatic identification and tracking of at-risk limbs. Accurate limb identification can be used to analyze the pressure distribution and assess risk for each limb. In this paper, we propose a graph-based clustering approach to extract the body limbs from pressure data collected by a commercial pressure-map system. A robust signature-based technique is employed to automatically label each limb. Finally, an assessment technique is applied to evaluate the stress experienced by each limb over time. The experimental results indicate high performance, with more than 94% average accuracy for the proposed approach. PMID:27268736

  2. Automatic Assessment and Reduction of Noise using Edge Pattern Analysis in Non-Linear Image Enhancement

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.; Hines, Glenn D.

    2004-01-01

    Noise is the primary visibility limit in the process of non-linear image enhancement, and is no longer a statistically stable additive noise in the post-enhancement image. Therefore novel approaches are needed to both assess and reduce spatially variable noise at this stage in overall image processing. Here we will examine the use of edge pattern analysis both for automatic assessment of spatially variable noise and as a foundation for new noise reduction methods.

  3. Automatic assessment of macular edema from color retinal images.

    PubMed

    Deepak, K Sai; Sivaswamy, Jayanthi

    2012-03-01

    Diabetic macular edema (DME) is an advanced symptom of diabetic retinopathy and can lead to irreversible vision loss. In this paper, a two-stage methodology for the detection and classification of DME severity from color fundus images is proposed. DME detection is carried out via a supervised learning approach using normal fundus images. A feature extraction technique is introduced to capture the global characteristics of the fundus images and discriminate normal from DME images. Disease severity is assessed using a rotational asymmetry metric that examines the symmetry of the macular region. The performance of the proposed methodology and features is evaluated against several publicly available datasets. The detection performance has a sensitivity of 100% with specificity between 74% and 90%. Cases needing immediate referral are detected with a sensitivity of 100% and specificity of 97%. The severity classification accuracy is 81% for moderate cases and 100% for severe cases. These results establish the effectiveness of the proposed solution. PMID:22167598

  4. Towards Automatic Diabetes Case Detection and ABCS Protocol Compliance Assessment

    PubMed Central

    Mishra, Ninad K.; Son, Roderick Y.; Arnzen, James J.

    2012-01-01

    Objective According to the American Diabetes Association, the implementation of the standards of care for diabetes has been suboptimal in most clinical settings. Diabetes is a disease that had a total estimated cost of $174 billion in 2007 for an estimated diabetes-affected population of 17.5 million in the United States. With the advent of electronic medical records (EMR), tools to analyze data residing in the EMR for healthcare surveillance can help reduce the burdens experienced today. This study was primarily designed to evaluate the efficacy of employing clinical natural language processing to analyze discharge summaries for evidence indicating a presence of diabetes, as well as to assess diabetes protocol compliance and high risk factors. Methods Three sets of algorithms were developed to analyze discharge summaries for: (1) identification of diabetes, (2) protocol compliance, and (3) identification of high risk factors. The algorithms utilize a common natural language processing framework that extracts relevant discourse evidence from the medical text. Evidence utilized in one or more of the algorithms include assertion of the disease and associated findings in medical text, as well as numerical clinical measurements and prescribed medications. Results The diabetes classifier was successful at classifying reports for the presence and absence of diabetes. Evaluated against 444 discharge summaries, the classifier’s performance included macro and micro F-scores of 0.9698 and 0.9865, respectively. Furthermore, the protocol compliance and high risk factor classifiers showed promising results, with most F-measures exceeding 0.9. Conclusions The presented approach accurately identified diabetes in medical discharge summaries and showed promise with regards to assessment of protocol compliance and high risk factors. Utilizing free-text analytic techniques on medical text can complement clinical-public health decision support by identifying cases and high risk
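
    The macro and micro F-scores reported above differ in where the averaging happens: macro averages the per-class F1 values, while micro pools the counts across classes first. A sketch with toy counts, not the study's data:

```python
def f_scores(tp, fp, fn):
    """Per-class true positives / false positives / false negatives ->
    macro F (mean of per-class F1) and micro F (F1 over pooled counts)."""
    def f1(tp, fp, fn):
        p = tp / (tp + fp) if tp + fp else 0.0   # precision
        r = tp / (tp + fn) if tp + fn else 0.0   # recall
        return 2 * p * r / (p + r) if p + r else 0.0
    macro = sum(f1(t, p, n) for t, p, n in zip(tp, fp, fn)) / len(tp)
    micro = f1(sum(tp), sum(fp), sum(fn))
    return macro, micro

# Toy two-class counts (diabetes present / absent), not the study's data
macro, micro = f_scores(tp=[90, 300], fp=[10, 5], fn=[5, 10])
print(round(macro, 3), round(micro, 3))  # 0.949 0.963
```

    Micro-F weights every report equally, so it tracks the dominant class; macro-F weights every class equally, exposing weak performance on rare classes.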

  5. Continuous assessment of perceptual image quality

    NASA Astrophysics Data System (ADS)

    Hamberg, Roelof; de Ridder, Huib

    1995-12-01

    The study addresses whether subjects are able to assess the perceived quality of an image sequence continuously. To this end, a new method for assessing time-varying perceptual image quality is presented by which subjects continuously indicate the perceived strength of image quality by moving a slider along a graphical scale. The slider's position on this scale is sampled every second. In this way, temporal variations in quality can be monitored quantitatively, and a means is provided by which differences between, for example, alternative transmission systems can be analyzed in an informative way. The usability of this method is illustrated by an experiment in which, for a period of 815 s, subjects assessed the quality of still pictures comprising time-varying degrees of sharpness. Copyright (c) 1995 Optical Society of America

  6. Statistical quality assessment of a fingerprint

    NASA Astrophysics Data System (ADS)

    Hwang, Kyungtae

    2004-08-01

    The quality of a fingerprint is essential to the performance of an AFIS (Automatic Fingerprint Identification System). Quality may be characterized by the clarity and regularity of ridge-valley structures [1, 2]. One may calculate the thickness of ridges and valleys to measure clarity and regularity. However, calculating thickness is not feasible in a poor-quality image, especially in severely damaged images that contain broken ridges (or valleys). To overcome this difficulty, the proposed approach employs statistical properties of a local block, namely the mean and spread of the thickness of both ridges and valleys. The mean value is used for determining whether a fingerprint is wet or dry. For example, if a fingerprint is wet, black pixels are dominant and the average ridge thickness is larger than that of the valleys; the reverse holds for a dry fingerprint. In addition, the standard deviation is used for determining the severity of damage. In this study, quality is divided into three categories based on the two statistical properties mentioned above: wet, good, and dry. The number of low-quality blocks is used to measure the global quality of a fingerprint. In addition, the distribution of poor blocks is measured using Euclidean distances between groups of poor blocks; with this scheme, locally condensed poor blocks decrease the overall quality of an image. Experimental results on fingerprint images captured by optical devices as well as by a rolling method show that the wet and dry parts of the images were successfully identified. Enhancing an image by employing morphology techniques that modify the detected poor-quality blocks is illustrated in Section 3. However, more work needs to be done on designing a scheme that incorporates the number of poor blocks and their distribution into a global quality measure.
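
    A minimal sketch of the wet/good/dry decision from block-level mean thicknesses. The ratio threshold and the input widths are placeholders for illustration, not values from the paper:

```python
import numpy as np

def classify_block(ridge_widths, valley_widths, ratio=1.3):
    """Classify a local fingerprint block as wet / good / dry from the
    mean ridge and valley thickness (illustrative threshold)."""
    r, v = np.mean(ridge_widths), np.mean(valley_widths)
    if r > ratio * v:
        return "wet"    # ink spreads: ridges dominate
    if v > ratio * r:
        return "dry"    # faint ridges: valleys dominate
    return "good"

print(classify_block([6, 7, 6], [3, 3, 4]))  # wet
print(classify_block([4, 4, 5], [4, 5, 4]))  # good
print(classify_block([2, 3, 2], [6, 6, 7]))  # dry
```

    A spread (standard deviation) check on the same widths would supply the paper's damage-severity cue; counting blocks that fall outside "good" then gives the global quality score.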

  7. Automated FMV image quality assessment based on power spectrum statistics

    NASA Astrophysics Data System (ADS)

    Kalukin, Andrew

    2015-05-01

    Factors that degrade image quality in video and other sensor collections, such as noise, blurring, and poor resolution, also affect the spatial power spectrum of imagery. Prior research in human vision and image science from the last few decades has shown that the image power spectrum can be useful for assessing the quality of static images. The research in this article explores the possibility of using the image power spectrum to automatically evaluate full-motion video (FMV) imagery frame by frame. This procedure makes it possible to identify anomalous images and scene changes, and to keep track of gradual changes in quality as collection progresses. This article will describe a method to apply power spectral image quality metrics for images subjected to simulated blurring, blocking, and noise. As a preliminary test on videos from multiple sources, image quality measurements for image frames from 185 videos are compared to analyst ratings based on ground sampling distance. The goal of the research is to develop an automated system for tracking image quality during real-time collection, and to assign ratings to video clips for long-term storage, calibrated to standards such as the National Imagery Interpretability Rating System (NIIRS).
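
    One simple power-spectrum statistic of the kind described is the fraction of spectral energy at high radial frequencies, which drops when a frame is blurred. The cutoff and the metric itself are illustrative assumptions, not the article's calibrated NIIRS model:

```python
import numpy as np

def highfreq_energy_ratio(img, cutoff=0.25):
    """Fraction of spectral power above a normalized radial frequency.
    Blurring suppresses high frequencies, so lower values suggest
    degraded frames."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)  # normalized radius
    return power[r > cutoff].sum() / power.sum()

rng = np.random.default_rng(3)
frame = rng.normal(size=(64, 64))
# Simulate degradation with a separable 5x5 box blur
k = np.ones(5) / 5
blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, frame)
blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, blurred)
print(highfreq_energy_ratio(frame) > highfreq_energy_ratio(blurred))  # True
```

    Tracking such a statistic frame by frame is what makes it possible to flag anomalous images and gradual quality drift during collection.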

  8. Assessing quality across healthcare subsystems in Mexico.

    PubMed

    Puig, Andrea; Pagán, José A; Wong, Rebeca

    2009-01-01

    Recent healthcare reform efforts in Mexico have focused on the need to improve the efficiency and equity of a fragmented healthcare system. In light of these reform initiatives, there is a need to assess whether healthcare subsystems are effective at providing high-quality healthcare to all Mexicans. Nationally representative household survey data from the 2006 Encuesta Nacional de Salud y Nutrición (National Health and Nutrition Survey) were used to assess perceived healthcare quality across different subsystems. Using a sample of 7234 survey respondents, we found evidence of substantial heterogeneity in healthcare quality assessments across healthcare subsystems favoring private providers over social security institutions. These differences across subsystems remained even after adjusting for socioeconomic, demographic, and health factors. Our analysis suggests that improvements in efficiency and equity can be achieved by assessing the factors that contribute to heterogeneity in quality across subsystems. PMID:19305224

  9. Quality Assessment in the Blog Space

    ERIC Educational Resources Information Center

    Schaal, Markus; Fidan, Guven; Muller, Roland M.; Dagli, Orhan

    2010-01-01

    Purpose: The purpose of this paper is the presentation of a new method for blog quality assessment. The method uses the temporal sequence of link creation events between blogs as an implicit source for the collective tacit knowledge of blog authors about blog quality. Design/methodology/approach: The blog data are processed by the novel method for…

  10. Quality indicators and quality assessment in child health

    PubMed Central

    Kavanagh, Patricia L.; Adams, William G.; Wang, C. Jason

    2009-01-01

    Quality indicators are systematically developed statements that can be used to assess the appropriateness of specific healthcare decisions, services and outcomes. In this review, we highlight the range and type of indicators that have been developed for children in the UK and US by prominent governmental agencies and private organizations. We also classify these indicators in an effort to identify areas of child health that may lack quality measurement activity. We review the current state of health information technology in both countries since these systems are vital to quality efforts. Finally, we propose several recommendations to advance the quality indicator development agenda for children. The convergence of quality measurement and indicator development, a growing scientific evidence base and integrated information systems in healthcare may lead to substantial improvements for child health in the 21st century. PMID:19307196

  11. Mobile sailing robot for automatic estimation of fish density and monitoring water quality

    PubMed Central

    2013-01-01

    Introduction The paper presents the methodology and the algorithm developed to analyze sonar images focused on fish detection in small water bodies and measurement of their parameters: volume, depth and GPS location. The final results are stored in a table and can be exported to any numerical environment for further analysis. Material and method The measurement method for estimating the number of fish using the automatic robot is based on a sequential calculation of the number of occurrences of fish on the set trajectory. The data analysis from the sonar concerned automatic recognition of fish using methods of image analysis and processing. Results An image analysis algorithm and a mobile robot, together with its control in the 2.4 GHz band and fully encrypted communication with the data archiving station, were developed as part of this study. For the three model fish ponds where verification of fish catches was carried out (548, 171 and 226 individuals), the measurement error of the described method did not exceed 8%. Summary The robot, together with the developed software, supports remote operation in a variety of harsh weather and environmental conditions, is fully automated and can be remotely controlled over the Internet. The system determines the spatial location of fish (GPS coordinates and depth). The purpose of the robot is a non-invasive measurement of the number of fish in water reservoirs and a measurement of the quality of drinking water consumed by humans, especially in situations where local sources of pollution could have a significant impact on the quality of water collected for treatment and where access to these places is difficult. Used systematically and equipped with the appropriate sensors, the robot can be part of an early warning system against the pollution of water used by humans (drinking water, natural swimming pools), which can be dangerous to their health. PMID:23815984

  12. Soil Quality Assessment: Past, Present, and Future

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil quality assessment can help land owners and managers appreciate the multiple functions that soils perform and thus improve the resource management decisions they make. Our objective is to show how the Soil Management Assessment Framework (SMAF) can complement the Soil Conditioning Index (SCI) a...

  13. National Water-Quality Assessment Program - Source Water-Quality Assessments

    USGS Publications Warehouse

    Delzer, Gregory C.; Hamilton, Pixie A.

    2007-01-01

    In 2002, the National Water-Quality Assessment (NAWQA) Program of the U.S. Geological Survey (USGS) implemented Source Water-Quality Assessments (SWQAs) to characterize the quality of selected rivers and aquifers used as a source of supply to community water systems in the United States. These assessments are intended to complement drinking-water monitoring required by Federal, State, and local programs, which focus primarily on post-treatment compliance monitoring.

  14. ANSS Backbone Station Quality Assessment

    NASA Astrophysics Data System (ADS)

    Leeds, A.; McNamara, D.; Benz, H.; Gee, L.

    2006-12-01

    In this study we assess the ambient noise levels of the broadband seismic stations within the United States Geological Survey's (USGS) Advanced National Seismic System (ANSS) backbone network. The backbone consists of stations operated by the USGS as well as several regional network stations operated by universities. We also assess the improved detection capability of the network due to the installation of 13 additional backbone stations and the upgrade of 26 existing stations funded by the Earthscope initiative. This assessment makes use of probability density functions (PDF) of power spectral densities (PSD) (after McNamara and Buland, 2004) computed by a continuous noise monitoring system developed by the USGS-ANSS and the Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC). We compute the median and mode of the PDF distribution and rank the stations relative to the Peterson Low Noise Model (LNM) (Peterson, 1993) for 11 different period bands. The power of the method lies in the fact that there is no need to screen the data for system transients, earthquakes or general data artifacts, since they map into a background probability level. Previous studies have shown that most regional stations, instrumented with short-period or extended short-period instruments, have a higher noise level in all period bands, while stations in the US network have lower noise levels at short periods (0.0625-8.0 seconds), i.e., high frequencies (8.0-0.125 Hz). The overall network is evaluated with respect to accomplishing the design goals set for the USArray/ANSS backbone project, which were intended to increase broadband performance for the national monitoring network.
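    The noise-ranking idea above — compute many PSD estimates, let transients fall into low-probability tails, and summarize each period band with a robust statistic — can be sketched as follows. The sampling rate, segment count, and band edges are illustrative, not taken from the ANSS system:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 40.0  # Hz, a typical broadband sampling rate (assumption)
segments = [rng.normal(size=int(3600 * fs)) for _ in range(8)]  # hour-long noise records

# One PSD estimate per segment (Welch's method), converted to dB
psds = []
for seg in segments:
    f, pxx = welch(seg, fs=fs, nperseg=4096)
    psds.append(10 * np.log10(pxx[1:]))  # drop the zero-frequency bin
f = f[1:]
psds = np.array(psds)

# Robust noise level per frequency: median across the segment distribution.
# Earthquakes and transients fall into low-probability tails, so no
# event screening is needed before summarizing.
median_psd = np.median(psds, axis=0)

# Station metric for one period band, e.g. 0.125-8 s (0.125-8 Hz):
band = (f >= 1 / 8.0) & (f <= 8.0)
band_level = float(np.mean(median_psd[band]))
```

A real ranking would compare `band_level` against the Peterson LNM value for the same band, once per station and band.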

  15. Water quality assessment in Ecuador

    SciTech Connect

    Chudy, J.P.; Arniella, E.; Gil, E.

    1993-02-01

    The El Tor cholera pandemic arrived in Ecuador in March 1991, and through the course of the year caused 46,320 cases, of which 692 resulted in death. Most of the cases were confined to cities along Ecuador's coast. The Water and Sanitation for Health Project (WASH), which was asked to participate in the review of this request, suggested that a more comprehensive approach should be taken to cholera control and prevention. The approach was accepted, and a multidisciplinary team consisting of a sanitary engineer, a hygiene education specialist, and an institutional specialist was scheduled to carry out the assessment in late 1992 following the national elections.

  16. Automatic assessment of the motor state of the Parkinson's disease patient--a case study

    PubMed Central

    2012-01-01

    This paper presents a novel methodology in which the Unified Parkinson's Disease Rating Scale (UPDRS) data processed with a rule-based decision algorithm is used to predict the state of the Parkinson's Disease patients. The research was carried out to investigate whether the advancement of the Parkinson's Disease can be automatically assessed. For this purpose, past and current UPDRS data from 47 subjects were examined. The results show that, among other classifiers, the rough set-based decision algorithm turned out to be most suitable for such automatic assessment. Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1563339375633634. PMID:22340508

  17. [Radiological assessment of bone quality].

    PubMed

    Ito, Masako

    2016-01-01

    The structural properties of bone include the micro- and nano-structure of trabecular and cortical bone, as well as macroscopic geometry. Radiological techniques are useful for analyzing bone structural properties; micro-CT or synchrotron CT is available to analyze the micro- or nano-structure of bone samples ex vivo, and multi-detector row CT (MDCT) or high-resolution peripheral QCT (HR-pQCT) is available to analyze human bone in vivo. For the analysis of hip geometry, CT-based hip structure analysis (HSA) is available, as well as radiography- and DXA-based HSA. These structural parameters are related to biomechanical properties, and these assessment tools provide information on pathological changes or the effects of anti-osteoporotic agents on bone. PMID:26728530

  18. Combined Use of Automatic Tube Voltage Selection and Current Modulation with Iterative Reconstruction for CT Evaluation of Small Hypervascular Hepatocellular Carcinomas: Effect on Lesion Conspicuity and Image Quality

    PubMed Central

    Lv, Peijie; Liu, Jie; Zhang, Rui; Jia, Yan

    2015-01-01

    Objective To assess the lesion conspicuity and image quality in CT evaluation of small (≤ 3 cm) hepatocellular carcinomas (HCCs) using automatic tube voltage selection (ATVS) and automatic tube current modulation (ATCM) with or without iterative reconstruction. Materials and Methods One hundred and five patients with 123 HCC lesions were included. Fifty-seven patients were scanned using both ATVS and ATCM and images were reconstructed using either filtered back-projection (FBP) (group A1) or sinogram-affirmed iterative reconstruction (SAFIRE) (group A2). Forty-eight patients were imaged using only ATCM, with a fixed tube potential of 120 kVp and FBP reconstruction (group B). Quantitative parameters (image noise in Hounsfield unit and contrast-to-noise ratio of the aorta, the liver, and the hepatic tumors) and qualitative visual parameters (image noise, overall image quality, and lesion conspicuity as graded on a 5-point scale) were compared among the groups. Results Group A2 scanned with the automatically chosen 80 kVp and 100 kVp tube voltages ranked the best in lesion conspicuity and subjective and objective image quality (p values ranging from < 0.001 to 0.004) among the three groups, except for overall image quality between group A2 and group B (p = 0.022). Group A1 showed higher image noise (p = 0.005) but similar lesion conspicuity and overall image quality as compared with group B. The radiation dose in group A was 19% lower than that in group B (p = 0.022). Conclusion CT scanning with combined use of ATVS and ATCM and image reconstruction with SAFIRE algorithm provides higher lesion conspicuity and better image quality for evaluating small hepatic HCCs with radiation dose reduction. PMID:25995682
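    The contrast-to-noise ratio used as a quantitative parameter above is a simple computation. A minimal sketch with synthetic regions of interest (all HU values and region sizes are illustrative, not the study's data):

```python
import numpy as np

def contrast_to_noise(roi_lesion, roi_liver, roi_noise):
    """CNR between lesion and liver, normalized by background noise SD (in HU)."""
    noise_sd = float(np.std(roi_noise))
    return abs(float(np.mean(roi_lesion)) - float(np.mean(roi_liver))) / noise_sd

rng = np.random.default_rng(1)
lesion = rng.normal(110, 12, size=(20, 20))    # hypervascular HCC, arterial phase
liver  = rng.normal(70, 12, size=(40, 40))     # surrounding parenchyma
air    = rng.normal(-1000, 12, size=(40, 40))  # uniform region used to estimate noise

cnr = contrast_to_noise(lesion, liver, air)
```

Comparing such CNR values across reconstruction groups (FBP vs. SAFIRE) is the kind of objective comparison the abstract reports.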

  19. Automatic humidification system to support the assessment of food drying processes

    NASA Astrophysics Data System (ADS)

    Ortiz Hernández, B. D.; Carreño Olejua, A. R.; Castellanos Olarte, J. M.

    2016-07-01

    This work shows the main features of an automatic humidification system that provides drying air matching the environmental conditions of different climate zones. This conditioned air is then used to assess the drying process of different agro-industrial products at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University of Bucaramanga, Colombia. The automatic system allows the creation and improvement of control strategies to supply drying air under specified conditions of temperature and humidity. The development of automatic routines to control and acquire real-time data was made possible by the use of robust control systems and suitable instrumentation. The signals are read and directed to a controller memory, where they are scaled and transferred to a memory unit. Using the IP address, it is possible to access the data to perform supervision tasks. One important characteristic of this automatic system is the Dynamic Data Exchange (DDE) server, which allows direct communication between the control unit and the computer used to build experimental curves.

  20. Biosignal Analysis to Assess Mental Stress in Automatic Driving of Trucks: Palmar Perspiration and Masseter Electromyography

    PubMed Central

    Zheng, Rencheng; Yamabe, Shigeyuki; Nakano, Kimihiko; Suda, Yoshihiro

    2015-01-01

    Nowadays, insight into human-machine interaction is a critical topic with the large-scale development of intelligent vehicles. Biosignal analysis can provide a deeper understanding of driver behaviors that may indicate rationally practical use of the automatic technology. Therefore, this study concentrates on biosignal analysis to quantitatively evaluate the mental stress of drivers during automatic driving of trucks, with vehicles set a close gap distance apart to reduce air resistance and save energy. By application of two wearable sensor systems, continuous measurement was realized for palmar perspiration and masseter electromyography, and a biosignal processing method was proposed to assess mental stress levels. In a driving simulator experiment, ten participants completed automatic driving with 4, 8, and 12 m gap distances from the preceding vehicle, and manual driving with about a 25 m gap distance as a reference. It was found that mental stress significantly increased as the gap distance decreased, and an abrupt increase in the mental stress of drivers was also observed accompanying a sudden change of the gap distance during automatic driving, which corresponded to significantly higher ride discomfort according to subjective reports. PMID:25738768

  2. No-reference quality assessment based on visual perception

    NASA Astrophysics Data System (ADS)

    Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao

    2014-11-01

    The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering that natural vision has been optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, which are two main features of the HVS. A typical HVS captures scenes by sparsity coding, and uses experienced knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. First, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is computed with the model; then, the mapping correlation between sparse codes and subjective quality scores is trained with the regression technique of least squares support vector machine (LS-SVM), which yields a regressor that can predict image quality; finally, the visual quality metric of an image is predicted with the trained regressor.
We validate the performance of proposed approach on Laboratory for Image and Video Engineering (LIVE) database, the specific contents of the type of distortions present in the database are: 227 images of JPEG2000, 233
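    The LS-SVM regression step described above — mapping feature vectors to subjective scores by solving a single linear system rather than a quadratic program — might look like this. The features, scores, and kernel parameters are synthetic stand-ins, not the paper's sparse codes or LIVE data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical training data: rows are per-image feature vectors,
# y are subjective quality scores (e.g. DMOS); both synthetic here.
X = rng.normal(size=(60, 16))
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=60)

def rbf_kernel(A, B, sigma=6.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# LS-SVM regression: equality constraints turn the SVM QP into one
# bordered linear system for the bias b and dual weights alpha.
gamma = 10.0
K = rbf_kernel(X, X)
n = len(y)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate(([0.0], y))
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

def predict(Xnew):
    """Predicted quality score for each row of Xnew."""
    return rbf_kernel(Xnew, X) @ alpha + b

train_error = float(np.mean((predict(X) - y) ** 2))
```

In the paper's pipeline, `X` would hold sparse-code representations and `predict` would output the visual quality metric.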

  3. Assessing product image quality for online shopping

    NASA Astrophysics Data System (ADS)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

    Assessing product-image quality is important in the context of online shopping. A high-quality image that conveys more information about a product can boost the buyer's confidence and can get more attention. However, the notion of image quality for product images is not the same as that in other domains. The perception of quality of product images depends not only on various photographic quality features but also on various high-level features such as the clarity of the foreground or the goodness of the background. In this paper, we define a notion of product-image quality based on various such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair and poor quality based on the guided perceptual notions from the judges. We also conduct experiments with regression using average crowd-sourced human judgments as the target. We compute a pseudo-regression score as the expected average of predicted classes and also compute a score from the regression technique. We design many experiments with various sampling and voting schemes on the crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (greater than or equal to 70%) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with average votes from the crowd-sourced human judgments.
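    The pseudo-regression score described above (expected average of predicted classes) and its rank correlation with crowd votes can be sketched as follows; the class probabilities and vote averages are invented examples, not the eBay data:

```python
import numpy as np

# A 3-class quality model (poor=0, fair=1, good=2) outputs class
# probabilities per image; the pseudo-regression score is the expected class.
classes = np.array([0, 1, 2])
probs = np.array([
    [0.1, 0.3, 0.6],   # likely "good"
    [0.7, 0.2, 0.1],   # likely "poor"
    [0.2, 0.6, 0.2],   # likely "fair"
])
pseudo_scores = probs @ classes          # expected average of predicted classes
hard_labels = probs.argmax(axis=1)       # plain multi-class decision

# Rank correlation against (hypothetical) average crowd votes, via Spearman:
votes = np.array([2.4, 0.5, 1.1])

def spearman(a, b):
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    ra, rb = ra - ra.mean(), rb - rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))

rho = spearman(pseudo_scores, votes)
```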

  4. Phase congruency assesses hyperspectral image quality

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Zhong, Cheng

    2012-10-01

    Blind image quality assessment (QA) is a tough task, especially for hyperspectral imagery, which is degraded by noise, distortion, defocus, and other complex factors. Subjective hyperspectral imagery QA methods basically measure the degradation of the image in terms of human perceptual visual quality. Noise and blur, the most important image quality features, largely determine image quality and are employed to predict the objective quality of each band of hyperspectral imagery. We demonstrate a novel no-reference hyperspectral imagery QA model based on phase congruency (PC), which is a dimensionless quantity and provides an absolute measure of the significance of feature points. First, a log-Gabor wavelet is used to calculate the phase congruency of the frequencies of each band image. The relationship between noise and PC can be derived from the above transformation under the assumption that the noise is additive. Second, a PC focus measure evaluation model is proposed to evaluate blur caused by different amounts of defocus. The ratio and mean factors of edge blur level and noise are defined to assess the quality of each band image. This image QA method obtains excellent correlation with subjective image quality scores without any reference. Finally, the PC information is utilized to improve the quality of some band images.

  5. [Internal Quality Control and External Quality Assessment on POCT].

    PubMed

    Kuwa, Katsuhiko

    2015-02-01

    The quality management (QM) of POCT comprises internal quality control (IQC) and external quality assessment (EQA). QM requirements for POCT are given in ISO 22870 (Point-of-care testing (POCT): requirements for quality and competence) and ISO 15189 (Medical laboratories: requirements for quality and competence), and QM is performed under the guidance of the QM committee. The role of the POC coordinator and/or medical technologist of the clinical laboratory is important. Regarding the measurement performance of POCT devices, it is necessary to confirm performance data from the manufacturer beyond those in the package insert. In the IQC program, the checking and control of measurement performance are the targets, and measurements of QC samples provided by the manufacturer are essential to check the function of devices. In addition, regarding the EQA program, between two neighboring facilities it is effective to confirm the current status of measurement and commutability in these laboratories using whole blood, along with residual blood samples from daily examinations in the clinical laboratory. PMID:26529974

  6. SU-D-BRF-03: Improvement of TomoTherapy Megavoltage Topogram Image Quality for Automatic Registration During Patient Localization

    SciTech Connect

    Scholey, J; White, B; Qi, S; Low, D

    2014-06-01

    Purpose: To improve the quality of megavoltage orthogonal scout images (MV topograms) for a fast and low-dose alternative technique for patient localization on the TomoTherapy HiART system. Methods: Digitally reconstructed radiographs (DRR) of anthropomorphic head and pelvis phantoms were synthesized from kVCT under TomoTherapy geometry (kV-DRR). Lateral (LAT) and anterior-posterior (AP) aligned topograms were acquired with couch speeds of 1 cm/s, 2 cm/s, and 3 cm/s. The phantoms were rigidly translated in all spatial directions with known offsets in increments of 5 mm, 10 mm, and 15 mm to simulate daily positioning errors. The contrast of the MV topograms was automatically adjusted based on the image intensity characteristics. A low-pass fast Fourier transform filter removed high-frequency noise, and a Wiener filter reduced stochastic noise caused by scattered radiation to the detector array. An intensity-based image registration algorithm was used to register the MV topograms to a corresponding kV-DRR by minimizing the mean square error between corresponding pixel intensities. The registration accuracy was assessed by comparing the normalized cross correlation coefficients (NCC) between the registered topograms and the kV-DRR. The applied phantom offsets were determined by registering the MV topograms with the kV-DRR and recovering the spatial translation of the MV topograms. Results: The automatic registration technique provided millimeter accuracy and was robust for the deformed MV topograms at all three tested couch speeds. The lowest average NCC for all AP and LAT MV topograms was 0.96 for the head phantom and 0.93 for the pelvis phantom. The offsets were recovered to within 1.6 mm and 6.5 mm for the processed and the original MV topograms, respectively. Conclusion: Automatic registration of the processed MV topograms to a corresponding kV-DRR recovered simulated daily positioning errors that were accurate to the order of a millimeter. 
These results suggest the clinical
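    A toy version of the registration step above — minimizing the mean square error between corresponding pixel intensities, then scoring the result with a normalized cross correlation coefficient — assuming integer pixel shifts and a brute-force search in place of the optimizer used in the study:

```python
import numpy as np

rng = np.random.default_rng(3)
drr = rng.normal(size=(64, 64))           # stands in for the kV-DRR
true_shift = (3, -2)                      # simulated daily positioning error (pixels)
topogram = np.roll(drr, true_shift, axis=(0, 1)) + rng.normal(scale=0.05, size=drr.shape)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Exhaustive search over integer translations, minimizing the mean square
# error between corresponding pixel intensities.
best = None
for dy in range(-5, 6):
    for dx in range(-5, 6):
        err = mse(np.roll(topogram, (-dy, -dx), axis=(0, 1)), drr)
        if best is None or err < best[0]:
            best = (err, dy, dx)
recovered = (best[1], best[2])

# Registration accuracy scored as an NCC between aligned images:
aligned = np.roll(topogram, (-recovered[0], -recovered[1]), axis=(0, 1))
ncc = float(np.corrcoef(aligned.ravel(), drr.ravel())[0, 1])
```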

  7. End-to-end image quality assessment

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    2012-05-01

    An innovative computerized benchmarking approach (US patent pending, Sep 2011), based on extensive application of photometry, geometrical optics, and digital media using a randomized target, allows a standard observer to assess the image quality of video imaging systems at different daytime and low-light luminance levels. It takes into account the target's contrast and color characteristics, as well as the observer's visual acuity and dynamic response. This includes human vision as part of the "extended video imaging system" (EVIS), and allows image quality assessment by several standard observers simultaneously.

  8. SNPflow: A Lightweight Application for the Processing, Storing and Automatic Quality Checking of Genotyping Assays

    PubMed Central

    Schönherr, Sebastian; Neuner, Mathias; Forer, Lukas; Specht, Günther; Kloss-Brandstätter, Anita; Kronenberg, Florian; Coassin, Stefan

    2013-01-01

    Single nucleotide polymorphisms (SNPs) play a prominent role in modern genetics. Current genotyping technologies such as Sequenom iPLEX, ABI TaqMan and KBioscience KASPar have made the genotyping of huge SNP sets in large populations straightforward and allow the generation of hundreds of thousands of genotypes even in medium-sized labs. While data generation is straightforward, the subsequent data conversion, storage and quality control steps are time-consuming, error-prone and require extensive bioinformatic support. In order to ease this tedious process, we developed SNPflow. SNPflow is a lightweight, intuitive and easily deployable application, which processes genotype data from Sequenom MassARRAY (iPLEX) and ABI 7900HT (TaqMan, KASPar) systems and is extendible to other genotyping methods as well. SNPflow automatically converts the raw output files to ready-to-use genotype lists, calculates all standard quality control values such as call rate, expected and real amount of replicates, minor allele frequency, absolute number of discordant replicates, discordance rate and the p-value of the HWE test, checks the plausibility of the observed genotype frequencies by comparing them to HapMap/1000-Genomes, provides a module for the processing of SNPs that allow sex determination for DNA quality control purposes and, finally, stores all data in a relational database. SNPflow runs on all common operating systems and comes as both a stand-alone version and a multi-user version for laboratory-wide use. The software, a user manual, screenshots and a screencast illustrating the main features are available at http://genepi-snpflow.i-med.ac.at. PMID:23527209
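    Standard quality control values of the kind SNPflow reports — call rate, minor allele frequency, and the HWE test p-value — can be computed as follows; the genotype counts for this one SNP are invented:

```python
from collections import Counter
import math

# Hypothetical genotype calls for one SNP ("NN" = no call):
calls = ["AA"] * 60 + ["AG"] * 30 + ["GG"] * 8 + ["NN"] * 2

counts = Counter(calls)
n_called = sum(v for k, v in counts.items() if k != "NN")
call_rate = n_called / len(calls)

aa, ab, bb = counts["AA"], counts["AG"], counts["GG"]
p = (2 * aa + ab) / (2 * n_called)        # allele A frequency
maf = min(p, 1 - p)                       # minor allele frequency

# Hardy-Weinberg equilibrium chi-square test (1 degree of freedom):
exp = [p * p * n_called, 2 * p * (1 - p) * n_called, (1 - p) ** 2 * n_called]
chi2 = sum((o - e) ** 2 / e for o, e in zip((aa, ab, bb), exp))
hwe_p = math.erfc(math.sqrt(chi2 / 2))    # chi-square survival function, df=1
```

`math.erfc(sqrt(x/2))` equals the df-1 chi-square survival function, so no statistics library is required.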

  9. Automatic alignment of pre- and post-interventional liver CT images for assessment of radiofrequency ablation

    NASA Astrophysics Data System (ADS)

    Rieder, Christian; Wirtz, Stefan; Strehlow, Jan; Zidowitz, Stephan; Bruners, Philipp; Isfort, Peter; Mahnken, Andreas H.; Peitgen, Heinz-Otto

    2012-02-01

    Image-guided radiofrequency ablation (RFA) is becoming a standard procedure for minimally invasive tumor treatment in clinical practice. To verify the treatment success of the therapy, reliable post-interventional assessment of the ablation zone (coagulation) is essential. Typically, pre- and post-interventional CT images have to be aligned to compare the shape, size, and position of tumor and coagulation zone. In this work, we present an automatic workflow for masking liver tissue, enabling a rigid registration algorithm to perform at least as accurately as experienced medical experts. To minimize the effect of global liver deformations, the registration is computed in a local region of interest around the pre-interventional lesion and the post-interventional coagulation necrosis. A registration mask excluding lesions and neighboring organs is calculated to prevent the registration algorithm from matching both lesion shapes instead of the surrounding liver anatomy. As an initial registration step, the centers of gravity of both lesions are aligned automatically. The subsequent rigid registration method is based on the Local Cross Correlation (LCC) similarity measure and Newton-type optimization. To assess the accuracy of our method, 41 RFA cases are registered and compared with the manually aligned cases from four medical experts. Furthermore, the registration results are compared with ground truth transformations based on averaged anatomical landmark pairs. In the evaluation, we show that our method automatically aligns the data sets as accurately as medical experts, but with significantly less time consumption and variability.
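    The center-of-gravity initialization and the cross-correlation similarity measure mentioned above can be sketched on 2D binary masks (the masks, shapes, and the global rather than windowed correlation are simplifying assumptions):

```python
import numpy as np

def centroid(mask):
    """Center of gravity of a binary mask, in array coordinates."""
    return np.argwhere(mask).mean(axis=0)

# Synthetic stand-ins for the pre-interventional lesion and the
# post-interventional coagulation zone:
pre = np.zeros((64, 64), bool);  pre[20:30, 24:36] = True
post = np.zeros((64, 64), bool); post[28:42, 30:46] = True

# Initial registration step: align the centers of gravity.
shift = centroid(pre) - centroid(post)

def lcc(a, b):
    """Normalized cross correlation; the paper evaluates this locally (LCC)."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

similarity = lcc(pre.astype(float), post.astype(float))
```

A subsequent rigid registration would then maximize the (locally windowed) correlation starting from `shift`.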

  10. An algorithm used for quality criterion automatic measurement of band-pass filters and its device implementation

    NASA Astrophysics Data System (ADS)

    Liu, Qianshun; Liu, Yan; Yu, Feihong

    2013-08-01

    As a kind of thin-film device, the band-pass filter is widely used in pattern recognition, infrared detection, optical fiber communication, etc. In this paper, an algorithm for the automatic measurement of band-pass filter quality criteria is proposed, based on the theoretical calculation of the filter's spectral transmittance. First, a wavelet transform is used to reduce noise in the spectrum data. Second, combining Gaussian curve fitting with the least squares method, the algorithm fits the spectrum curve and searches for the peak. Finally, parameters for judging band-pass filter quality are computed. Based on the algorithm, a pipelined automatic measurement system for band-pass filters has been designed that can scan a filter array automatically and display the spectral transmittance of each filter. At the same time, the system compares the measurement results with user-defined standards to determine whether each filter is qualified. Qualified products are marked in green and unqualified products in red. Experiments verify that the automatic measurement system realizes comprehensive, accurate and rapid measurement of band-pass filter quality and achieves the expected results.
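    The Gaussian curve fitting and peak search step might be sketched as follows, with a synthetic transmittance curve; the peak position, width, and derived quality criteria are illustrative, not the paper's pass/fail thresholds:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic band-pass transmittance: 85% peak at 850 nm, 10 nm Gaussian width.
wl = np.linspace(800, 900, 201)
rng = np.random.default_rng(4)
t = 0.85 * np.exp(-((wl - 850) ** 2) / (2 * 10 ** 2)) + rng.normal(scale=0.005, size=wl.size)

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Least-squares Gaussian fit, seeded by the raw peak sample:
(amp, mu, sigma), _ = curve_fit(gauss, wl, t, p0=[t.max(), wl[t.argmax()], 5.0])

# Quality criteria derived from the fitted curve:
center_wavelength = float(mu)
peak_transmittance = float(amp)
fwhm = float(2.3548 * abs(sigma))   # full width at half maximum
```

A qualification check would then compare `center_wavelength`, `peak_transmittance`, and `fwhm` against user-defined tolerances.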

  11. Accuracy assessment of building point clouds automatically generated from iphone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor-generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register the automatically generated point cloud to a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone-generated point clouds. For the chosen example showcase, we classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate the possible use of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancement, and quick, real-time change detection. However, further insight should first be obtained into the circumstances needed to guarantee successful point cloud generation from smartphone images.
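    The point-to-point distance evaluation against a TLS reference can be sketched with a nearest-neighbour query; the clouds, noise level, and outlier threshold below are synthetic, not the paper's showcase:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
tls = rng.uniform(0, 10, size=(5000, 3))                     # reference TLS cloud
phone = tls[:2000] + rng.normal(scale=0.05, size=(2000, 3))  # noisy "iPhone" cloud

# Point-to-point evaluation: nearest TLS neighbour for every phone point.
dist, _ = cKDTree(tls).query(phone)

# Outlier fraction under a hypothetical distance threshold, then inlier stats:
threshold = 0.5
outlier_pct = 100.0 * float((dist > threshold).mean())
mean_dist = float(dist[dist <= threshold].mean())
```

With registered clouds, `outlier_pct` and `mean_dist` correspond to the 1.23% and 0.11 m figures the abstract reports for its showcase.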

  12. Objective assessment of MPEG-2 video quality

    NASA Astrophysics Data System (ADS)

    Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano

    2002-07-01

    The increasing use of video compression standards in broadcast television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams by using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values, and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations of actual scoring curves for real test videos.
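    The core idea — a feedforward network maps an instantaneous set of objective features to an estimate of perceived quality — reduces to a single forward pass. The sketch below uses a plain one-hidden-layer network with random placeholder weights, where the paper trains a circular back-propagation network on subjective scores:

```python
import numpy as np

rng = np.random.default_rng(6)

# Instantaneous objective features extracted from a compressed stream
# (the feature set and its dimension are illustrative, not the paper's):
features = rng.uniform(size=8)   # e.g. blockiness, bit rate, motion energy, ...

# One-hidden-layer feedforward network; in practice W1/W2 would be learned
# from subjective judgments, here they are random placeholders.
W1 = rng.normal(scale=0.5, size=(8, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=8);      b2 = 0.0

hidden = np.tanh(features @ W1 + b1)
quality = float(1 / (1 + np.exp(-(hidden @ W2 + b2))))  # perceived quality in (0, 1)
```

Running this per frame (or per short window) yields the continuous-time quality curve the abstract describes.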

  13. 42 CFR 460.140 - Additional quality assessment activities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Additional quality assessment activities. 460.140... FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.140 Additional quality assessment activities. A PACE organization must meet external quality assessment and reporting...

  14. Water quality issues and energy assessments

    SciTech Connect

    Davis, M.J.; Chiu, S.

    1980-11-01

    This report identifies and evaluates the significant water quality issues related to regional and national energy development. In addition, it recommends improvements in the Office's assessment capability. Handbook-style formatting, which includes a system of cross-references and prioritization, is designed to help the reader use the material.

  15. An assessment model for quality management

    NASA Astrophysics Data System (ADS)

    Völcker, Chr.; Cass, A.; Dorling, A.; Zilioli, P.; Secchi, P.

    2002-07-01

    SYNSPACE together with InterSPICE and Alenia Spazio is developing an assessment method to determine the capability of an organisation in the area of quality management. The method, sponsored by the European Space Agency (ESA), is called S9kS (SPiCE-9000 for SPACE). S9kS is based on ISO 9001:2000 with additions from the quality standards issued by the European Committee for Space Standardization (ECSS) and ISO 15504 - Process Assessment. The result is a reference model that supports the expansion of the generic process assessment framework provided by ISO 15504 to non-software areas. In order to be compliant with ISO 15504, requirements from ISO 9001 and ECSS-Q-20 and Q-20-09 have been turned into process definitions in terms of Purpose and Outcomes, supported by a list of detailed indicators such as Practices, Work Products and Work Product Characteristics. In coordination with this project, the capability dimension of ISO 15504 has been revised to be consistent with ISO 9001. As the contributions from ISO 9001 and the space quality assurance standards are separable, the stripped-down version S9k offers organisations in all industries an assessment model based solely on ISO 9001, and is therefore of interest to all organisations that intend to improve their quality management system based on ISO 9001.

  16. Recognition and Assessment of Teaching Quality.

    ERIC Educational Resources Information Center

    Fairbrother, Patricia

    1996-01-01

    Identifies models for consideration of teacher quality and competence in nursing education. Presents a range of evaluation criteria in these categories: preparation, delivery, innovation, communication, self-assessment, instructional management, peer recognition, professional memberships and service, publications, and grants and contracts secured.…

  17. Retinal image quality assessment using generic features

    NASA Astrophysics Data System (ADS)

    Fasih, Mahnaz; Langlois, J. M. Pierre; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Retinal image quality assessment is an important step in automated eye disease diagnosis. Diagnosis accuracy is highly dependent on the quality of retinal images, because poor image quality might prevent the observation of significant eye features and disease manifestations. A robust algorithm is therefore required to evaluate the quality of images in a large database. We developed an algorithm for retinal image quality assessment based on generic features that is independent of segmentation methods. It exploits local sharpness and texture features by applying the cumulative probability of blur detection metric and the run-length encoding algorithm, respectively. The quality features are combined to evaluate the image's suitability for diagnosis purposes. Based on the recommendations of medical experts and our experience, we compared a global and a local approach. A support vector machine with radial basis functions was used as a nonlinear classifier to classify images into gradable and ungradable groups. We applied our methodology to 65 images of size 2592×1944 pixels that had been graded by a medical expert. The expert evaluated 38 images as gradable and 27 as ungradable. The results indicate very good agreement between the proposed algorithm's predictions and the medical expert's judgment: the sensitivity and specificity for the local approach are 92% and 94%, respectively. The algorithm demonstrates sufficient robustness to identify relevant images for automated diagnosis.

  18. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures

    PubMed Central

    2016-01-01

    Information carried by an image can be distorted by the different image processing steps introduced by different electronic means of storage and communication. Therefore, the development of algorithms which can automatically assess the quality of an image in a way that is consistent with human evaluation is important. In this paper, an approach to image quality assessment (IQA) is proposed in which the quality of a given image is evaluated jointly by several IQA approaches. At first, in order to obtain such joint models, an optimisation problem of IQA measures aggregation is defined, where a weighted sum of their outputs, i.e., objective scores, is used as the aggregation operator. Then, the weight of each measure is considered as a decision variable in a problem of minimisation of the root mean square error between the obtained objective scores and subjective scores. Subjective scores reflect ground truth and involve evaluation of images by human observers. The optimisation problem is solved using a genetic algorithm, which also selects suitable measures used in aggregation. The obtained multimeasures are evaluated on the four largest widely used image benchmarks and compared against state-of-the-art full-reference IQA approaches. The results of the comparison reveal that the proposed approach outperforms the other competing measures. PMID:27341493
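A minimal sketch of the aggregation operator and the fitness function the genetic algorithm minimises, as described in this record; the scores and the candidate weight vector are hypothetical, and the GA search itself is omitted.

```python
import math

def aggregate(weights, measure_scores):
    """Weighted sum of the objective scores produced by several IQA
    measures for one image (the aggregation operator)."""
    return sum(w * s for w, s in zip(weights, measure_scores))

def rmse(weights, all_measure_scores, subjective_scores):
    """Fitness used by the genetic algorithm: RMSE between aggregated
    objective scores and the subjective (human) scores."""
    errors = [aggregate(weights, scores) - subj
              for scores, subj in zip(all_measure_scores, subjective_scores)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical data: 3 images, each scored by 2 IQA measures
all_scores = [[0.9, 0.8], [0.5, 0.6], [0.2, 0.1]]
mos = [0.85, 0.55, 0.15]      # subjective mean opinion scores
weights = [0.5, 0.5]          # one candidate solution the GA would evolve
fitness = rmse(weights, all_scores, mos)
```

Measure selection can be encoded in the same representation by letting the GA drive some weights to zero.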

  19. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures.

    PubMed

    Oszust, Mariusz

    2016-01-01

    Information carried by an image can be distorted by the different image processing steps introduced by different electronic means of storage and communication. Therefore, the development of algorithms which can automatically assess the quality of an image in a way that is consistent with human evaluation is important. In this paper, an approach to image quality assessment (IQA) is proposed in which the quality of a given image is evaluated jointly by several IQA approaches. At first, in order to obtain such joint models, an optimisation problem of IQA measures aggregation is defined, where a weighted sum of their outputs, i.e., objective scores, is used as the aggregation operator. Then, the weight of each measure is considered as a decision variable in a problem of minimisation of the root mean square error between the obtained objective scores and subjective scores. Subjective scores reflect ground truth and involve evaluation of images by human observers. The optimisation problem is solved using a genetic algorithm, which also selects suitable measures used in aggregation. The obtained multimeasures are evaluated on the four largest widely used image benchmarks and compared against state-of-the-art full-reference IQA approaches. The results of the comparison reveal that the proposed approach outperforms the other competing measures. PMID:27341493

  20. [Making best use of external quality assessment].

    PubMed

    Fried, Roman

    2015-02-01

    To receive maximum benefit from external quality assessment, the laboratory has to fulfill certain requirements. There have to be standard operating procedures and checklists for correct sample processing and analysis. It is equally important that the staff has a basic understanding of how these quality tools work and that detected errors are used as an opportunity to improve the processes within the laboratory. The benefit of external quality assessment surveys is not limited to the analytical phase, but extends to aspects of the pre- and post-analytical processes. Owing to the many participants, the survey providers are able to collect a great deal of practical knowledge. All participants can learn from this by reading the survey reports and commentaries. With this, and with special educational surveys, the survey providers can offer laboratories an opportunity for continuing education. PMID:25630289

  1. Quality assessment: A performance-based approach to assessments

    SciTech Connect

    Caplinger, W.H.; Greenlee, W.D.

    1993-08-01

    Revision C to US Department of Energy (DOE) Order 5700.6 (6C) "Quality Assurance" (QA) brings significant changes to the conduct of QA. The Westinghouse government-owned, contractor-operated (GOCO) sites have updated their quality assurance programs to the requirements and guidance of 6C, and are currently implementing necessary changes. In late 1992, a Westinghouse GOCO team led by the Waste Isolation Division (WID) conducted what is believed to be the first assessment of implementation of a quality assurance program founded on 6C.

  2. Automatic and Objective Assessment of Alternating Tapping Performance in Parkinson's Disease

    PubMed Central

    Memedi, Mevludin; Khan, Taha; Grenholm, Peter; Nyholm, Dag; Westin, Jerker

    2013-01-01

    This paper presents the development and evaluation of a method for enabling quantitative and automatic scoring of alternating tapping performance of patients with Parkinson's disease (PD). Ten healthy elderly subjects and 95 patients in different clinical stages of PD used a touch-pad handheld computer to perform alternate tapping tests in their home environments. First, a neurologist used a web-based system to visually assess impairments in four tapping dimensions (‘speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’) and a global tapping severity (GTS). Second, tapping signals were processed with time series analysis and statistical methods to derive 24 quantitative parameters. Third, principal component analysis was used to reduce the dimensions of these parameters and to obtain scores for the four dimensions. Finally, a logistic regression classifier was trained using a 10-fold stratified cross-validation to map the reduced parameters to the corresponding visually assessed GTS scores. Results showed that the computed scores correlated well with visually assessed scores and were significantly different across Unified Parkinson's Disease Rating Scale scores of upper limb motor performance. In addition, they had good internal consistency, good ability to discriminate between healthy elderly and patients in different disease stages, good sensitivity to treatment interventions, and could reflect the natural disease progression over time. In conclusion, the automatic method can be useful to objectively assess the tapping performance of PD patients and can be included in telemedicine tools for remote monitoring of tapping. PMID:24351667

  3. Assessing uncertainty in stormwater quality modelling.

    PubMed

    Wijesiri, Buddhi; Egodawatta, Prasanna; McGree, James; Goonetilleke, Ashantha

    2016-10-15

    Designing effective stormwater pollution mitigation strategies is a challenge in urban stormwater management. This is primarily due to the limited reliability of catchment scale stormwater quality modelling tools. As such, assessing the uncertainty associated with the information generated by stormwater quality models is important for informed decision making. Quantitative assessment of build-up and wash-off process uncertainty, which arises from the variability associated with these processes, is a major concern, as typical uncertainty assessment approaches do not adequately account for process uncertainty. The study found that the variability of build-up and wash-off processes for different particle size ranges leads to process uncertainty. After variability and the resulting process uncertainties are accurately characterised, they can be incorporated into catchment stormwater quality predictions. Accounting for process uncertainty influences the uncertainty limits associated with predicted stormwater quality. The impact of build-up process uncertainty on stormwater quality predictions is greater than that of wash-off process uncertainty. Accordingly, decision making should facilitate the design of mitigation strategies which specifically address variations in the load and composition of pollutants accumulated during dry weather periods. Moreover, the study found that the influence of process uncertainty differs between stormwater quality predictions for storm events with different intensity, duration and generated runoff volume. These storm events were also found to be significantly different in terms of the Runoff-Catchment Area ratio. As such, the selection of storm events when designing stormwater pollution mitigation strategies needs to take into consideration not only the storm event characteristics, but also the influence of process uncertainty on stormwater quality predictions. PMID:27423532
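One common way to fold process variability into model predictions, as discussed in this record, is Monte Carlo propagation. The sketch below is illustrative only: the exponential build-up form, the linear wash-off fraction and all parameter spreads are assumptions, not the paper's calibrated model.

```python
import math
import random
import statistics

random.seed(42)

def predicted_load(dry_days, rainfall_mm, k_buildup, k_washoff):
    """Toy export model: exponential pollutant build-up over the
    antecedent dry period, then a rainfall-driven wash-off fraction."""
    buildup = 100.0 * (1.0 - math.exp(-k_buildup * dry_days))
    washed_fraction = min(1.0, k_washoff * rainfall_mm)
    return buildup * washed_fraction

# Propagate build-up/wash-off parameter variability (process
# uncertainty) into the predicted load by random sampling.
samples = [predicted_load(7, 20,
                          random.gauss(0.4, 0.05),     # build-up rate spread
                          random.gauss(0.03, 0.005))   # wash-off rate spread
           for _ in range(5000)]
cuts = statistics.quantiles(samples, n=20)
lo, hi = cuts[0], cuts[-1]        # ~5th and ~95th percentile limits
mean_load = statistics.mean(samples)
```

The spread of the sampled loads (here the 5th-95th percentile band) is the kind of uncertainty limit the study argues should accompany point predictions.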

  4. An open source automatic quality assurance (OSAQA) tool for the ACR MRI phantom.

    PubMed

    Sun, Jidi; Barnes, Michael; Dowling, Jason; Menk, Fred; Stanwell, Peter; Greer, Peter B

    2015-03-01

    Routine quality assurance (QA) is necessary and essential to ensure MR scanner performance. This includes geometric distortion, slice positioning and thickness accuracy, high contrast spatial resolution, intensity uniformity, ghosting artefact and low contrast object detectability. However, this manual process can be very time consuming. This paper describes the development and validation of an open source tool to automate the MR QA process, which aims to increase physicist efficiency and improve the consistency of QA results by reducing human error. The OSAQA software was developed in Matlab and the source code is available for download from http://jidisun.wix.com/osaqa-project/. During program execution QA results are logged for immediate review and are also exported to a spreadsheet for long-term machine performance reporting. For the automatic contrast QA test, a user-specific contrast evaluation was designed to improve accuracy for individuals on different display monitors. American College of Radiology QA images were acquired over a period of 2 months to compare manual QA with the results from the proposed OSAQA software. OSAQA was found to reduce the QA time significantly, from approximately 45 to 2 min. The manual and OSAQA results were found to agree with regard to the recommended criteria, and the differences were insignificant compared to the criteria. The intensity homogeneity filter is necessary to obtain an image with acceptable quality while keeping the high contrast spatial resolution within the recommended criterion. The OSAQA tool has been validated on scanners with different field strengths and manufacturers. A number of suggestions have been made to improve both the phantom design and the QA protocol in the future. PMID:25412885

  5. Automated Assessment of the Quality of Depression Websites

    PubMed Central

    Tang, Thanh Tin; Hawking, David; Christensen, Helen

    2005-01-01

    Background Since health information on the World Wide Web is of variable quality, methods are needed to assist consumers to identify health websites containing evidence-based information. Manual assessment tools may assist consumers to evaluate the quality of sites. However, these tools are poorly validated and often impractical. There is a need to develop better consumer tools, and in particular to explore the potential of automated procedures for evaluating the quality of health information on the web. Objective This study (1) describes the development of an automated quality assessment procedure (AQA) designed to automatically rank depression websites according to their evidence-based quality; (2) evaluates the validity of the AQA relative to human-rated evidence-based quality scores; and (3) compares the validity of Google PageRank and the AQA as indicators of evidence-based quality. Method The AQA was developed using a quality feedback technique and a set of training websites previously rated manually according to their concordance with statements in the Oxford University Centre for Evidence-Based Mental Health’s guidelines for treating depression. The validation phase involved 30 websites compiled from the DMOZ, Yahoo! and LookSmart Depression Directories by randomly selecting six sites from each of the Google PageRank bands of 0, 1-2, 3-4, 5-6 and 7-8. Evidence-based ratings from two independent raters (based on concordance with the Oxford guidelines) were then compared with scores derived from the automated AQA and Google algorithms. There was no overlap in the websites used in the training and validation phases of the study. Results The correlation between the AQA score and the evidence-based ratings was high and significant (r=0.85, P<.001). Addition of a quadratic component improved the fit, the combined linear and quadratic model explaining 82 percent of the variance. The correlation between Google PageRank and the evidence-based score was lower than

  6. Bone age assessment in young children using automatic carpal bone feature extraction and support vector regression.

    PubMed

    Somkantha, Krit; Theera-Umpon, Nipon; Auephanwiriyakul, Sansanee

    2011-12-01

    Boundary extraction of carpal bone images is a critical operation of the automatic bone age assessment system, since the contrast between the bony structure and soft tissue is very poor. In this paper, we present an edge following technique for boundary extraction in carpal bone images and apply it to assess bone age in young children. Our proposed technique can detect the boundaries of carpal bones in X-ray images by using the information from the vector image model and the edge map. Feature analysis of the carpal bones can reveal important information for bone age assessment. Five features for bone age assessment are calculated from the boundary extraction result for each carpal bone. All features are taken as input to the support vector regression (SVR), which assesses the bone age. We compare the SVR with the neural network regression (NNR). We use 180 images of carpal bones from a digital hand atlas to assess the bone age of young children from 0 to 6 years old. Leave-one-out cross validation is used for testing the efficiency of the techniques. The opinions of the skilled radiologists provided in the atlas are used as the ground truth in bone age assessment. The SVR is able to provide more accurate bone age assessment results than the NNR. The experimental results from the SVR are very close to the bone age assessments by skilled radiologists. PMID:21347746

  7. Using statistical analysis and artificial intelligence tools for automatic assessment of video sequences

    NASA Astrophysics Data System (ADS)

    Ekobo Akoa, Brice; Simeu, Emmanuel; Lebowsky, Fritz

    2014-01-01

    This paper proposes two novel approaches to Video Quality Assessment (VQA). Both approaches attempt to develop video evaluation techniques capable of replacing human judgment when rating video quality in subjective experiments. The underlying study consists of selecting fundamental quality metrics based on Human Visual System (HVS) models and using artificial intelligence solutions as well as advanced statistical analysis. This new combination enables suitable video quality ratings while taking multiple quality metrics as input. The first method uses a neural-network-based machine learning process. The second method evaluates video quality using a non-linear regression model. The efficiency of the proposed methods is demonstrated by comparing their results with those of existing work done on synthetic video artifacts. The results obtained by each method are compared with scores from a database resulting from subjective experiments.

  8. Fully automatic measurements of axial vertebral rotation for assessment of spinal deformity in idiopathic scoliosis

    NASA Astrophysics Data System (ADS)

    Forsberg, Daniel; Lundström, Claes; Andersson, Mats; Vavruch, Ludvig; Tropp, Hans; Knutsson, Hans

    2013-03-01

    Reliable measurements of spinal deformities in idiopathic scoliosis are vital, since they are used for assessing the degree of scoliosis, deciding upon treatment and monitoring the progression of the disease. However, commonly used two-dimensional methods (e.g. the Cobb angle) do not fully capture the three-dimensional deformity at hand in scoliosis, of which axial vertebral rotation (AVR) is considered to be of great importance. There are manual methods for measuring the AVR, but they are often time-consuming and associated with high intra- and inter-observer variability. In this paper, we present a fully automatic method for estimating the AVR in images from computed tomography. The proposed method is evaluated on four scoliotic patients with 17 vertebrae each and compared with manual measurements performed by three observers using the standard method by Aaro-Dahlborn. The comparison shows that the difference in measured AVR between automatic and manual measurements is on the same level as the inter-observer difference. This is further supported by a high intraclass correlation coefficient (0.971-0.979), obtained when comparing the automatic measurements with the manual measurements of each observer. Hence, the provided results and the computational performance, requiring only approximately 10 to 15 s to process an entire volume, demonstrate the potential clinical value of the proposed method.

  9. Water Quality Assessment using Satellite Remote Sensing

    NASA Astrophysics Data System (ADS)

    Haque, Saad Ul

    2016-07-01

    The two main global issues related to water are its declining quality and quantity. Population growth, industrialization, expansion of agricultural land and urbanization are the main causes of the increasing water demand confronting inland water bodies. The quality of surface water has also been degraded in many countries over the past few decades due to inputs of nutrients and sediments, especially in lakes and reservoirs. Since water is essential not only for meeting human needs but also for maintaining natural ecosystem health and integrity, there are efforts worldwide to assess and restore the quality of surface waters. Remote sensing techniques provide continuous water quality information that helps identify and minimize sources of pollutants harmful to human and aquatic life. The proposed methodology focuses on assessing the quality of water at selected lakes in Pakistan (Sindh), namely HUBDAM, KEENJHAR LAKE, HALEEJI and HADEERO. These lakes are drinking water sources for several major cities of Pakistan, including Karachi. Satellite imagery from Landsat 7 (ETM+) is used to identify variation in the water quality of these lakes in terms of their optical properties. All bands of the Landsat 7 (ETM+) image are analyzed to select only those that may be correlated with water quality parameters (e.g. suspended solids, chlorophyll a). The Optimum Index Factor (OIF) developed by Chavez et al. (1982) is used to select the optimum combination of bands. The OIF is calculated by dividing the sum of the standard deviations of any three bands by the sum of their respective correlation coefficients (absolute values). It is assumed that a band with a higher standard deviation contains more 'information' than other bands. Therefore, OIF values are ranked and the three bands with the highest OIF are selected for visual interpretation. A color composite image is created using these three bands. The water quality
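The OIF computation described above (sum of band standard deviations over the sum of absolute pairwise correlation coefficients) can be sketched directly; the toy band samples below are hypothetical pixel values, not Landsat data.

```python
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length bands."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def oif(b1, b2, b3):
    """Optimum Index Factor for a three-band combination: sum of the
    band standard deviations divided by the sum of the absolute
    pairwise correlation coefficients (Chavez et al., 1982)."""
    sd = statistics.pstdev(b1) + statistics.pstdev(b2) + statistics.pstdev(b3)
    r = abs(pearson(b1, b2)) + abs(pearson(b1, b3)) + abs(pearson(b2, b3))
    return sd / r

# Toy pixel samples from three hypothetical bands
band1 = [10, 20, 30, 40]
band2 = [12, 18, 33, 41]   # strongly correlated with band1
band3 = [40, 10, 35, 15]   # weakly correlated with the others
score = oif(band1, band2, band3)
```

In the workflow above, this score would be computed for every three-band combination and the combinations ranked by descending OIF.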

  10. Image quality assessment using multi-method fusion.

    PubMed

    Liu, Tsung-Jung; Lin, Weisi; Kuo, C-C Jay

    2013-05-01

    A new methodology for objective image quality assessment (IQA) with multi-method fusion (MMF) is presented in this paper. The research is motivated by the observation that no single method gives the best performance in all situations. To achieve MMF, we adopt a regression approach. The new MMF score is set to be the nonlinear combination of scores from multiple methods with suitable weights obtained by a training process. To improve the regression results further, we divide distorted images into three to five groups based on the distortion types and perform regression within each group, which is called "context-dependent MMF" (CD-MMF). One task in CD-MMF is to determine the context automatically, which is achieved by a machine learning approach. To further reduce the complexity of MMF, we apply algorithms to select a small subset from the candidate method set. The result is very good even if only three quality assessment methods are included in the fusion process. The proposed MMF method using support vector regression is shown to outperform a large number of existing IQA methods by a significant margin when tested on six representative databases. PMID:23288335
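A sketch of the context-dependent fusion idea from this record; the weight table is hand-set and the fusion is linear here, whereas the paper derives a nonlinear combination by training support vector regression, so treat every number below as a placeholder.

```python
# Hypothetical per-context fusion weights (in the paper these would be
# learned by support vector regression, not set by hand).
CONTEXT_WEIGHTS = {
    "blur":        [0.7, 0.2, 0.1],
    "noise":       [0.2, 0.6, 0.2],
    "compression": [0.3, 0.3, 0.4],
}

def mmf_score(context, method_scores):
    """Context-dependent multi-method fusion: pick the weight vector for
    the predicted distortion context, then fuse the individual quality
    scores (a linear stand-in for the paper's nonlinear SVR fusion)."""
    weights = CONTEXT_WEIGHTS[context]
    return sum(w * s for w, s in zip(weights, method_scores))

# Three IQA methods scored a JPEG-compressed image as follows
score = mmf_score("compression", [0.8, 0.6, 0.7])
```

The context label itself would come from a classifier trained to recognise the distortion type, as the record describes.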

  11. Quantitative assessment of computed radiography quality control parameters.

    PubMed

    Rampado, O; Isoardi, P; Ropolo, R

    2006-03-21

    Quality controls for testing the performance of computed radiography (CR) systems have been recommended by manufacturers and medical physicists' organizations. The purpose of this work was to develop a set of image processing tools for quantitative assessment of computed radiography quality control parameters. Automatic image analysis consisted of detecting phantom details, defining regions of interest and acquiring measurements. The tested performance characteristics included dark noise, uniformity, exposure calibration, linearity, low-contrast and spatial resolution, spatial accuracy, laser beam function and erasure thoroughness. CR devices from two major manufacturers were evaluated. We investigated several approaches to quantifying the detector response uniformity. We developed methods to characterize the spatial accuracy and resolution properties across the entire image area, based on Fourier analysis of the image of a fine wire mesh. The implemented methods were sensitive to local blurring and allowed us to detect a local distortion of 4% or greater in any part of an imaging plate. The obtained results showed that the developed image processing tools make it possible to implement a quality control program for CR with short processing time and without subjectivity in the evaluation of the parameters. PMID:16510964

  12. Automated data quality assessment of marine sensors.

    PubMed

    Timms, Greg P; de Souza, Paulo A; Reznik, Leon; Smith, Daniel V

    2011-01-01

    The automated collection of data (e.g., through sensor networks) has led to a massive increase in the quantity of environmental and other data available. The sheer quantity of data and growing need for real-time ingestion of sensor data (e.g., alerts and forecasts from physical models) means that automated Quality Assurance/Quality Control (QA/QC) is necessary to ensure that the data collected is fit for purpose. Current automated QA/QC approaches provide assessments based upon hard classifications of the gathered data, often as a binary good-or-bad decision that fails to quantify our confidence in the data for use in different applications. We propose a novel framework for automated data quality assessments that uses Fuzzy Logic to provide a continuous scale of data quality. This continuous quality scale is then used to compute error bars upon the data, which quantify the data uncertainty and provide a more meaningful measure of the data's fitness for purpose in a particular application compared with hard quality classifications. The design principles of the framework are presented and enable both data statistics and expert knowledge to be incorporated into the uncertainty assessment. We have implemented and tested the framework on a real-time platform of temperature and conductivity sensors that have been deployed to monitor the Derwent Estuary in Hobart, Australia. Results indicate that the error bars generated from the Fuzzy QA/QC implementation are in good agreement with the error bars manually encoded by a domain expert. PMID:22163714
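A minimal sketch of the central idea in this record: a continuous fuzzy quality score instead of a binary good/bad flag, with error bars that widen as quality drops. The trapezoidal membership shape, its thresholds, and the single-test combination are illustrative assumptions, not the paper's framework.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership function: 0 outside [a, d],
    1 on the plateau [b, c], linear ramps in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def quality_and_error_bar(residual, base_uncertainty):
    """Map a sensor reading's residual (distance from an expected value)
    to a continuous [0, 1] quality score, then widen the error bar as
    quality drops. A real system would combine several fuzzy tests."""
    quality = trapezoid(abs(residual), -1.0, 0.0, 0.5, 2.0)
    error_bar = base_uncertainty / max(quality, 0.05)  # floor avoids division by zero
    return quality, error_bar

q_clean, eb_clean = quality_and_error_bar(residual=0.2, base_uncertainty=0.1)
q_bad, eb_bad = quality_and_error_bar(residual=1.7, base_uncertainty=0.1)
```

A reading close to expectation keeps its base error bar, while a suspect reading keeps a nonzero quality score but a much wider bar, which is the continuous behaviour the framework argues for.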

  13. Automated Data Quality Assessment of Marine Sensors

    PubMed Central

    Timms, Greg P.; de Souza, Paulo A.; Reznik, Leon; Smith, Daniel V.

    2011-01-01

    The automated collection of data (e.g., through sensor networks) has led to a massive increase in the quantity of environmental and other data available. The sheer quantity of data and growing need for real-time ingestion of sensor data (e.g., alerts and forecasts from physical models) means that automated Quality Assurance/Quality Control (QA/QC) is necessary to ensure that the data collected is fit for purpose. Current automated QA/QC approaches provide assessments based upon hard classifications of the gathered data; often as a binary decision of good or bad data that fails to quantify our confidence in the data for use in different applications. We propose a novel framework for automated data quality assessments that uses Fuzzy Logic to provide a continuous scale of data quality. This continuous quality scale is then used to compute error bars upon the data, which quantify the data uncertainty and provide a more meaningful measure of the data’s fitness for purpose in a particular application compared with hard quality classifications. The design principles of the framework are presented and enable both data statistics and expert knowledge to be incorporated into the uncertainty assessment. We have implemented and tested the framework upon a real time platform of temperature and conductivity sensors that have been deployed to monitor the Derwent Estuary in Hobart, Australia. Results indicate that the error bars generated from the Fuzzy QA/QC implementation are in good agreement with the error bars manually encoded by a domain expert. PMID:22163714

  14. Estimating the quality of pasturage in the municipality of Paragominas (PA) by means of automatic analysis of LANDSAT data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Dossantos, A. P.; Novo, E. M. L. D.; Duarte, V.

    1981-01-01

    The use of LANDSAT data to evaluate pasture quality in the Amazon region is demonstrated. Pasture degradation in deforested areas of a traditional tropical forest cattle-raising region was estimated. Automatic analysis using interactive multispectral analysis (IMAGE-100) shows that 24% of the deforested areas were occupied by natural vegetation regrowth, 24% by exposed soil, 15% by degraded pastures, and 46% was suitable grazing land.

  15. Teachers' Opinions on Quality Criteria for Competency Assessment Programs

    ERIC Educational Resources Information Center

    Baartman, Liesbeth K. J.; Bastiaens, Theo J.; Kirschner, Paul A.; Van der Vleuten, Cees P. M.

    2007-01-01

    Quality control policies towards Dutch vocational schools have changed dramatically because the government questioned examination quality. Schools must now demonstrate assessment quality to a new Examination Quality Center. Since teachers often design assessments, they must be involved in quality issues. This study therefore explores teachers'…

  16. Automatic Detection of Masses in Mammograms Using Quality Threshold Clustering, Correlogram Function, and SVM.

    PubMed

    de Nazaré Silva, Joberth; de Carvalho Filho, Antonio Oseas; Corrêa Silva, Aristófanes; Cardoso de Paiva, Anselmo; Gattass, Marcelo

    2015-06-01

    Breast cancer is the second most common type of cancer in the world. Several computer-aided detection and diagnosis systems have been used to assist health experts and to indicate suspect areas that would be difficult to perceive by the human eye; this approach has aided in the detection and diagnosis of cancer. The present work proposes a method for the automatic detection of masses in digital mammograms by using quality threshold (QT), a correlogram function, and the support vector machine (SVM). This methodology comprises the following steps: The first step is to perform preprocessing with a low-pass filter, which increases the scale of the contrast, and the next step is to use an enhancement to the wavelet transform with a linear function. After the preprocessing is segmentation using QT; then, we perform post-processing, which involves the selection of the best mass candidates. This step is performed by analyzing the shape descriptors through the SVM. For the stage that involves the extraction of texture features, we used Haralick descriptors and a correlogram function. In the classification stage, the SVM was again used for training, validation, and final test. The results were as follows: sensitivity 92.31 %, specificity 82.2 %, accuracy 83.53 %, mean rate of false positives per image 1.12, and area under the receiver operating characteristic (ROC) curve 0.8033. Breast cancer is notable for presenting the highest mortality rate in addition to one of the smallest survival rates after diagnosis. An early diagnosis means a considerable increase in the survival chance of the patients. The methodology proposed herein contributes to the early diagnosis and survival rate and, thus, proves to be a useful tool for specialists who attempt to anticipate the detection of masses. PMID:25277539
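    Quality threshold (QT) clustering, the segmentation step of the pipeline, grows a candidate cluster around every seed point and keeps the largest cluster whose diameter stays under a fixed threshold, then repeats on the remaining points. A minimal brute-force sketch of the idea (not the authors' implementation, and operating on plain coordinate tuples rather than mammogram pixels):

```python
import math

def qt_cluster(points, max_diameter):
    """Quality Threshold clustering: repeatedly form the largest cluster whose
    diameter stays at or below max_diameter, remove it, and recurse on the rest."""
    remaining = list(range(len(points)))
    clusters = []
    while remaining:
        best = []
        for seed in remaining:
            candidate = [seed]
            others = [i for i in remaining if i != seed]
            while others:
                # Diameter the cluster would have if point i joined it.
                diam = lambda i: max(math.dist(points[i], points[j]) for j in candidate)
                nxt = min(others, key=diam)
                if diam(nxt) > max_diameter:
                    break
                candidate.append(nxt)
                others.remove(nxt)
            if len(candidate) > len(best):
                best = candidate
        clusters.append([points[i] for i in best])
        remaining = [i for i in remaining if i not in best]
    return clusters
```

For example, five points forming a tight triple near the origin and a pair near (10, 10) split into clusters of sizes 3 and 2 under a diameter threshold of 2.0.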

  17. Milk quality and automatic milking: fat globule size, natural creaming, and lipolysis.

    PubMed

    Abeni, F; Degano, L; Calza, F; Giangiacomo, R; Pirlo, G

    2005-10-01

    Thirty-eight Italian Friesian first-lactation cows were allocated to 2 groups to evaluate the effect of 1) an automatic milking system (AMS) vs. milking in a milking parlor (MP) on milk fat characteristics; and 2) milking interval (≤480, 481 to 600, 601 to 720, and >720 min) on the same variables. Milk fat was analyzed for content (% vol/vol), natural creaming (% of fat), and free fatty acids (FFA, mEq/100 g of fat). Distribution of milk fat globule size was evaluated to calculate average fat globule diameter (d(1)), volume-surface average diameter (d(32)), specific globule surface area, and mean interglobular distance. Milk yield was recorded to calculate hourly milk and milk fat yield. Milking system had no effect on milk yield, milk fat content, and hourly milk fat yield. Milk from AMS had less natural creaming and more FFA content than milk from MP. Fat globule size, globular surface area, and interglobular distance were not affected by milking system per se. Afternoon MP milkings had more fat content and hourly milk fat yield than AMS milkings when milking interval was >480 min. Milk fat FFA content was greater in AMS milkings when milking interval was ≤480 min than in milkings from MP and from AMS when milking interval was >600 min. Milking interval did not affect fat globule size, expressed as d(32). Results from this experiment indicate a limited effect of AMS per se on milk fat quality; a more important factor seems to be the increase in milking frequency generally associated with AMS. PMID:16162526

  18. Automatic Vertebral Fracture Assessment System (AVFAS) for Spinal Pathologies Diagnosis Based on Radiograph X-Ray Images

    NASA Astrophysics Data System (ADS)

    Mustapha, Aouache; Hussain, Aini; Samad, Salina Abd; Bin Abdul Hamid, Hamzaini; Ariffin, Ahmad Kamal

    Nowadays, medical imaging has become a major tool in many clinical trials, because the technology enables rapid diagnosis with visualization and quantitative assessment that assist health practitioners and professionals. Since the medical and healthcare sector is a vast industry closely related to every citizen's quality of life, image-based medical diagnosis has become one of the important service areas in this sector. As such, a medical diagnostic imaging (MDI) software tool for assessing vertebral fracture is being developed, which we have named AVFAS, short for Automatic Vertebral Fracture Assessment System. The developed software system is capable of indexing, detecting, and classifying vertebral fractures by measuring the shape and appearance of vertebrae in radiographic x-ray images of the spine. This paper describes the MDI software tool, which consists of three main sub-systems known as the Medical Image Training & Verification System (MITVS), the Medical Image Measurement & Decision System (MIMDS), and the Medical Image Registration System (MIRS), in terms of its functionality, performance, ongoing research, and outstanding technical issues.

  19. Assessing the quality of cost management

    SciTech Connect

    Fayne, V.; McAllister, A.; Weiner, S.B.

    1995-12-31

    Managing environmental programs can be effective only when good cost and cost-related management practices are developed and implemented. The Department of Energy's Office of Environmental Management (EM), recognizing this key role of cost management, initiated several cost and cost-related management activities including the Cost Quality Management (CQM) Program. The CQM Program includes an assessment activity, Cost Quality Management Assessments (CQMAs), and a technical assistance effort to improve program/project cost effectiveness. CQMAs provide a tool for establishing a baseline of cost-management practices and for measuring improvement in those practices. The result of the CQMA program is an organization that has an increasing cost-consciousness, improved cost-management skills and abilities, and a commitment to respond to the public's concerns for both a safe environment and prudent budget outlays. The CQMA program is part of the foundation of quality management practices in DOE. The CQMA process has contributed to better cost and cost-related management practices by providing measurements and feedback; defining the components of a quality cost-management system; and helping sites develop/improve specific cost-management techniques and methods.

  20. Scene reduction for subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Lewandowska (Tomaszewska), Anna

    2016-01-01

    Evaluation of image quality is important for many image processing systems, such as those used for acquisition, compression, restoration, enhancement, or reproduction. Its measurement is often accompanied by user studies, in which a group of observers rank or rate the results of several algorithms. Such user studies, known as subjective image quality assessment experiments, can be very time consuming and do not guarantee conclusive results. This paper is intended to help design an efficient and rigorous quality assessment experiment. We propose a method of limiting the number of scenes that need to be tested, which can significantly reduce the experimental effort and still capture relevant scene-dependent effects. To achieve this, we employ a clustering technique and evaluate it on the basis of compactness and separation criteria. The correlation between the results obtained from a set of images in an initial database and the results received from the reduced experiment is analyzed. Finally, we propose a procedure for reducing the number of scenes in the initial set. Four different assessment techniques were tested: single stimulus, double stimulus, forced choice, and similarity judgments. We conclude that in most cases, 9 to 12 judgments per evaluated algorithm for a large scene collection are sufficient to reduce the initial set of images.

  1. Automatic brain tumour detection and neovasculature assessment with multiseries MRI analysis.

    PubMed

    Szwarc, Pawel; Kawa, Jacek; Rudzki, Marcin; Pietka, Ewa

    2015-12-01

    In this paper, a novel multi-stage automatic method for brain tumour detection and neovasculature assessment is presented. First, the brain symmetry is exploited to register the magnetic resonance (MR) series analysed. Then, the intracranial structures are found and the region of interest (ROI) is constrained within them to tumour and peritumoural areas using the Fluid-Attenuated Inversion Recovery (FLAIR) series. Next, the contrast-enhanced lesions are detected on the basis of T1-weighted (T1W) differential images before and after contrast medium administration. Finally, their vascularisation is assessed based on the Regional Cerebral Blood Volume (RCBV) perfusion maps. The relative RCBV (rRCBV) map is calculated in relation to healthy white matter, also found automatically, and visualised on the analysed series. Three main types of brain tumours, i.e. HG gliomas, metastases and meningiomas, have been subjected to the analysis. The results of contrast-enhanced lesion detection have been compared with manual delineations performed independently by two experts, yielding 64.84% sensitivity, 99.89% specificity and a 71.83% Dice Similarity Coefficient (DSC) for the twenty analysed studies of subjects with diagnosed brain tumours. PMID:26183648

  2. Quality Assessment of Domesticated Animal Genome Assemblies

    PubMed Central

    Seemann, Stefan E.; Anthon, Christian; Palasca, Oana; Gorodkin, Jan

    2015-01-01

    The era of high-throughput sequencing has made it relatively simple to sequence the genomes and transcriptomes of individuals from many species. In order to analyze the resulting sequencing data, high-quality reference genome assemblies are required. However, this is still a major challenge, and many domesticated animal genomes still need to be sequenced more deeply in order to produce high-quality assemblies. Meanwhile, the volume of RNAseq and other next-generation data produced frequently far exceeds that of the genomic sequence, and basic comparative analysis is often hampered by the lack of genomic sequence. Herein, we quantify the quality of the genome assemblies of 20 domesticated animals and related species by assessing a range of measurable parameters, and we show that there is a positive correlation between the fraction of mappable reads from RNAseq data and genome assembly quality. We rank the genomes by their assembly quality and discuss the implications for genotype analyses. PMID:27279738

  3. Engineering studies related to Skylab program. [assessment of automatic gain control data

    NASA Technical Reports Server (NTRS)

    Hayne, G. S.

    1973-01-01

    The relationship between the S-193 Automatic Gain Control data and the magnitude of received signal power was studied in order to characterize performance parameters for Skylab equipment. The r-factor, defined to be less than unity and a function of off-nadir angle, ocean surface roughness, and receiver signal-to-noise ratio, was used for the assessment. A digital computer simulation was also used to assess the effect of additive receiver noise (white noise). The system model for the digital simulation is described, along with the intermediate frequency and video impulse response functions used, details of the input waveforms, and results to date. Specific discussion of the digital computer programs used is also provided.

  4. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    NASA Astrophysics Data System (ADS)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a gold standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the gold standard plans for the rectum D mean, V65, and V75, and D mean of the anus and the bladder. For the rectum, the prediction errors (predicted - achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for D mean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For D mean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate.

  5. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans.

    PubMed

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a gold standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the gold standard plans for the rectum D mean, V65, and V75, and D mean of the anus and the bladder. For the rectum, the prediction errors (predicted - achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for D mean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For D mean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate.
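    The overlap volume histogram underlying such QA models records, for each distance r, the fraction of an organ-at-risk (OAR) that lies within r of the target. A brute-force sketch on voxel coordinate lists (illustrative only; production implementations compute distance transforms on 3-D masks instead):

```python
import math

def overlap_volume_histogram(oar_voxels, target_voxels, radii):
    """For each r in radii, return the fraction of OAR voxels whose distance
    to the nearest target voxel is at most r (brute force, for illustration)."""
    dists = [min(math.dist(v, t) for t in target_voxels) for v in oar_voxels]
    n = len(dists)
    return {r: sum(d <= r for d in dists) / n for r in radii}
```

A QA model then relates a plan's achieved DVH metrics to the OVH: organs sitting closer to the target (a steeper OVH) are expected to receive higher doses.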

  6. Assessing the Quality of Bioforensic Signatures

    SciTech Connect

    Sego, Landon H.; Holmes, Aimee E.; Gosink, Luke J.; Webb-Robertson, Bobbie-Jo M.; Kreuzer, Helen W.; Anderson, Richard M.; Brothers, Alan J.; Corley, Courtney D.; Tardiff, Mark F.

    2013-06-04

    We present a mathematical framework for assessing the quality of signature systems in terms of fidelity, cost, risk, and utility—a method we refer to as Signature Quality Metrics (SQM). We demonstrate the SQM approach by assessing the quality of a signature system designed to predict the culture medium used to grow a microorganism. The system consists of four chemical assays designed to identify various ingredients that could be used to produce the culture medium. The analytical measurements resulting from any combination of these four assays can be used in a Bayesian network to predict the probabilities that the microorganism was grown in one of eleven culture media. We evaluated fifteen combinations of the signature system by removing one or more of the assays from the Bayes network. We demonstrated that SQM can be used to distinguish between the various combinations in terms of attributes of interest. The approach assisted in clearly identifying the assays that were least informative, in large part because they could only discriminate between very few culture media, and in particular, culture media that are rarely used. There are limitations associated with the data that were used to train and test the signature system. Consequently, our intent is not to draw formal conclusions regarding this particular bioforensic system, but rather to illustrate an analytical approach that could be useful in comparing one signature system to another.

  7. No-reference stereoscopic image quality assessment

    NASA Astrophysics Data System (ADS)

    Akhter, Roushain; Parvez Sazzad, Z. M.; Horita, Y.; Baltes, J.

    2010-02-01

    Display of stereo images is widely used to enhance the viewing experience of three-dimensional imaging and communication systems. In this paper, we propose a method for estimating the quality of stereoscopic images using segmented image features and disparity. This method is inspired by the human visual system. We believe the perceived distortion and disparity of any stereoscopic display are strongly dependent on local features, such as edge (non-plane) and non-edge (plane) areas. Therefore, a no-reference perceptual quality assessment is developed for JPEG coded stereoscopic images based on segmented local features of artifacts and disparity. Local feature information, such as edge- and non-edge-area-based relative disparity estimation as well as the blockiness and blur within image blocks, is evaluated in this method. Two subjective stereo image databases are used to evaluate the performance of our method. The results of the subjective experiments indicate that our model has sufficient prediction performance.

  8. Bacteriological Assessment of Spoon River Water Quality

    PubMed Central

    Lin, Shundar; Evans, Ralph L.; Beuscher, Davis B.

    1974-01-01

    Data from a study of five stations on the Spoon River, Ill., during June 1971 through May 1973 were analyzed for compliance with the Illinois Pollution Control Board's water quality standard, a geometric mean limit of 200 fecal coliforms per 100 ml. This bacterial limit was achieved about 20% of the time during June 1971 through May 1972, and was never achieved during June 1972 through May 1973. Ratios of fecal coliform to total coliform are presented. By using fecal coliform-to-fecal streptococcus ratios to sort out the origins of fecal pollution, it was evident that a concern must be expressed not only for municipal wastewater effluents to the receiving stream, but also for nonpoint sources of pollution in assessing the bacterial quality of a stream. PMID:4604145
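    A geometric mean compliance check of the kind applied here is straightforward to compute. A minimal sketch (the 200-per-100-ml limit is the figure quoted in the abstract; the sample counts are invented for illustration):

```python
import math

def geometric_mean(counts):
    """Geometric mean of fecal coliform counts (per 100 ml)."""
    return math.exp(sum(math.log(c) for c in counts) / len(counts))

def meets_standard(counts, limit=200):
    """True if the geometric mean is at or below the regulatory limit."""
    return geometric_mean(counts) <= limit

# Hypothetical monthly samples of 100, 150, and 400 per 100 ml:
# geometric mean = (100 * 150 * 400) ** (1/3), roughly 181.7 -> compliant,
# even though one individual sample exceeded 200.
```

The geometric mean damps the influence of single high counts, which is why a station can comply despite occasional spikes.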

  9. Image quality assessment and human visual system

    NASA Astrophysics Data System (ADS)

    Gao, Xinbo; Lu, Wen; Tao, Dacheng; Li, Xuelong

    2010-07-01

    This paper summarizes the state of the art of image quality assessment (IQA) and the human visual system (HVS). IQA provides an objective index or real value to measure the quality of a specified image. Since human beings are the ultimate receivers of visual information in practical applications, the most reliable IQA is to build a computational model that mimics the HVS. According to the properties and cognitive mechanisms of the HVS, the available HVS-based IQA methods can be divided into two categories, i.e., bionics methods and engineering methods. This paper briefly introduces the basic theories and development histories of these two kinds of HVS-based IQA methods. Finally, some promising research issues are pointed out.

  10. Quality assessment of strawberries (Fragaria species).

    PubMed

    Azodanlou, Ramin; Darbellay, Charly; Luisier, Jean-Luc; Villettaz, Jean-Claude; Amadò, Renato

    2003-01-29

    Several cultivars of strawberries (Fragaria sp.), grown under different conditions, were analyzed by both sensory and instrumental methods. The overall appreciation, as expressed by consumers, was mainly reflected by attributes such as sweetness and aroma. No strong correlation was obtained with odor, acidity, juiciness, or firmness. The sensory quality of strawberries can be assessed with a good level of confidence by measuring the total sugar level (°Brix) and the total amount of volatile compounds. Sorting samples by the score obtained with a hedonic test (called the "hedonic classification method") allowed the correlation between consumers' appreciation and instrumental data to be considerably strengthened. On the basis of the results obtained, a quality model was proposed. Quantitative GC-FID analyses were performed to determine the major aroma components of strawberries. Methyl butanoate, ethyl butanoate, methyl hexanoate, cis-3-hexenyl acetate, and linalool were identified as the most important compounds for the taste and aroma of strawberries. PMID:12537447

  11. Assessing Assessment Quality: Criteria for Quality Assurance in Design of (Peer) Assessment for Learning--A Review of Research Studies

    ERIC Educational Resources Information Center

    Tillema, Harm; Leenknecht, Martijn; Segers, Mien

    2011-01-01

    The interest in "assessment for learning" (AfL) has resulted in a search for new modes of assessment that are better aligned to students' learning how to learn. However, with the introduction of new assessment tools, also questions arose with respect to the quality of its measurement. On the one hand, the appropriateness of traditional,…

  12. NATIONAL CROP LOSS ASSESSMENT NETWORK: QUALITY ASSURANCE PROGRAM (JOURNAL VERSION)

    EPA Science Inventory

    A quality assurance program was incorporated into the National Crop Loss Assessment Network (NCLAN) program, designed to assess the economic impacts of gaseous air pollutants on major agricultural crops in the United States. The quality assurance program developed standardized re...

  13. Peer Review and Quality Assessment in Complete Denture Education.

    ERIC Educational Resources Information Center

    Novetsky, Marvin; Razzoog, Michael E.

    1981-01-01

    A program in peer review and quality assessment at the University of Michigan denture department is described. The program exposes students to peer review in order to assess the quality of their treatment. (Author/MLW)

  14. A Pressure Plate-Based Method for the Automatic Assessment of Foot Strike Patterns During Running.

    PubMed

    Santuz, Alessandro; Ekizos, Antonis; Arampatzis, Adamantios

    2016-05-01

    The foot strike pattern (FSP, a description of how the foot touches the ground at impact) is recognized to be a predictor of both performance and injury risk. The objective of the current investigation was to validate an original foot strike pattern assessment technique based on the numerical analysis of foot pressure distribution. We analyzed the strike patterns during running of 145 healthy men and women (85 male, 60 female). The participants ran on a treadmill with an integrated pressure plate at three different speeds: preferred (shod and barefoot 2.8 ± 0.4 m/s), faster (shod 3.5 ± 0.6 m/s) and slower (shod 2.3 ± 0.3 m/s). A custom-designed algorithm allowed automatic footprint recognition and FSP evaluation. Incomplete footprints were simultaneously identified and corrected by the software itself. The widely used technique of analyzing high-speed video recordings was checked for its reliability and used to validate the numerical technique. The automatic numerical approach showed good conformity with the reference video-based technique (ICC = 0.93, p < 0.01). The great improvement in data throughput and the increased completeness of results allow the use of this software as a powerful feedback tool in a simple experimental setup. PMID:26471786
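    The paper's recognition algorithm is custom, but a common convention for classifying FSP from plantar pressure, the strike index, locates the initial centre of pressure along the foot length and splits the foot into thirds. A sketch of that convention (explicitly not the authors' algorithm; the threshold values are the usual literature convention):

```python
def strike_index(initial_cop_y, heel_y, toe_y):
    """Strike index: position of the initial centre of pressure along the
    foot axis, 0 at the heel and 1 at the toe."""
    return (initial_cop_y - heel_y) / (toe_y - heel_y)

def classify_fsp(si):
    """Thresholds at thirds of foot length, as commonly used in the literature."""
    if si < 1 / 3:
        return "rearfoot"
    if si < 2 / 3:
        return "midfoot"
    return "forefoot"

# Initial centre of pressure 5 cm up a 30 cm footprint -> index 1/6 -> rearfoot.
```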

  15. A device for automatic measurement of writhing and its application to the assessment of analgesic agents.

    PubMed

    Adachi, K

    1994-10-01

    A device was developed for automatically measuring writhing in mice, to be applied to the assessment of analgesic agents. The device was composed of a specially designed container equipped with a detector, namely a mechanoelectrical transducer for writhing. The detector was made up of units, each consisting of a string, two plates, and two strain gauges. In each unit, the ends of the string were connected to the two plates, to each of which a strain gauge was attached. The change in tension of the string due to writhing was converted into mechanical strain of the plates and then into a resistance change of the strain gauges. The resistance change was amplified by a Wheatstone bridge circuit connected to a differential amplifier, a high-pass filter, comparator(s), and a monostable multivibrator to obtain the electrical signal for writhing. Using this device, writhing was continuously measured, and various types of analgesic agents were evaluated. The results suggest that this device has sufficient accuracy both for the detection of writhing and for the evaluation of analgesics. It has the advantage of automatic measurement of writhing, in contrast to the conventional visual observation method. PMID:7865865

  16. Quantitative assessment of automatic reconstructions of branching systems obtained from laser scanning

    PubMed Central

    Boudon, Frédéric; Preuksakarn, Chakkrit; Ferraro, Pascal; Diener, Julien; Nacry, Philippe; Nikinmaa, Eero; Godin, Christophe

    2014-01-01

    Background and Aims Automatic acquisition of plant architecture is a major challenge for the construction of quantitative models of plant development. Recently, 3-D laser scanners have made it possible to acquire 3-D images representing a sampling of an object's surface. A number of specific methods have been proposed to reconstruct plausible branching structures from this new type of data, but critical questions remain regarding their suitability and accuracy before they can be fully exploited for use in biological applications. Methods In this paper, an evaluation framework to assess the accuracy of tree reconstructions is presented. The use of this framework is illustrated on a selection of laser scans of trees. Scanned data were manipulated by experienced researchers to produce reference tree reconstructions against which comparisons could be made. The evaluation framework is given two tree structures and compares both their elements and their topological organization. Similar elements are identified based on geometric criteria using an optimization algorithm. The organization of these elements is then compared and their similarity quantified. From these analyses, two indices of geometrical and structural similarities are defined, and the automatic reconstructions can thus be compared with the reference structures in order to assess their accuracy. Key Results The evaluation framework that was developed was successful at capturing the variation in similarities between two structures as different levels of noise were introduced. The framework was used to compare three different reconstruction methods taken from the literature, and allowed sensitive parameters of each one to be determined. The framework was also generalized for the evaluation of root reconstruction from 2-D images and demonstrated its sensitivity to higher architectural complexity of structure which was not detected with a global evaluation criterion. Conclusions The evaluation framework

  17. Automatic assessment of average diaphragm motion trajectory from 4DCT images through machine learning

    PubMed Central

    Li, Guang; Wei, Jie; Huang, Hailiang; Gaebler, Carl Philipp; Yuan, Amy; Deasy, Joseph O

    2016-01-01

    We aimed to automatically estimate the average diaphragm motion trajectory (ADMT) based on four-dimensional computed tomography (4DCT), facilitating clinical assessment of respiratory motion and motion variation as well as retrospective motion studies. We have developed an effective motion extraction approach and a machine-learning-based algorithm to estimate the ADMT. Eleven patients with 22 sets of 4DCT images (4DCT1 at simulation and 4DCT2 at treatment) were studied. After automatically segmenting the lungs, the differential volume-per-slice (dVPS) curves of the left and right lungs were calculated as a function of slice number for each phase with respect to full exhalation. After a 5-slice moving average was performed, the discrete cosine transform (DCT) was applied to analyze the dVPS curves in the frequency domain. The dimensionality of the spectrum data was reduced by using several lowest-frequency coefficients (fv) to account for most of the spectrum energy (Σfv²). The multiple linear regression (MLR) method was then applied to determine the weights of these frequencies by fitting the ground truth—the measured ADMT, which is represented by three pivot points of the diaphragm on each side. The ‘leave-one-out’ cross-validation method was employed to analyze the statistical performance of the prediction results in three image sets: 4DCT1, 4DCT2, and 4DCT1 + 4DCT2. Seven lowest frequencies in the DCT domain were found to be sufficient to approximate the patient dVPS curves (R = 91%-96% in MLR fitting). The mean error in the predicted ADMT using the leave-one-out method was 0.3 ± 1.9 mm for the left-side diaphragm and 0.0 ± 1.4 mm for the right-side diaphragm. The prediction error is lower in 4DCT2 than in 4DCT1, and is the lowest in 4DCT1 and 4DCT2 combined. This frequency-analysis-based machine learning technique was employed to predict the ADMT automatically with an acceptable error (0.2 ± 1.6 mm). This volumetric approach is not affected by the presence of lung tumors.
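    The feature-extraction step (smooth each dVPS curve, then keep a few low-frequency DCT coefficients as the regression features) can be sketched as follows. The 5-slice window and 7 retained coefficients follow the abstract; the plain unnormalized DCT-II form is an assumption about the exact transform used:

```python
import math

def moving_average(curve, window=5):
    """Centred moving average with edge truncation (the 5-slice smoothing
    step applied to each dVPS curve)."""
    half = window // 2
    out = []
    for i in range(len(curve)):
        seg = curve[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def dct_lowfreq(signal, n_keep=7):
    """Unnormalized DCT-II of a 1-D curve, keeping only the n_keep
    lowest-frequency coefficients as the feature vector for regression."""
    N = len(signal)
    coeffs = []
    for k in range(n_keep):
        c = sum(x * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n, x in enumerate(signal))
        coeffs.append(c)
    return coeffs
```

A multiple linear regression is then fitted from these coefficients to the measured pivot-point trajectories; for a constant curve all energy sits in the k = 0 coefficient, as expected.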

  18. No training blind image quality assessment

    NASA Astrophysics Data System (ADS)

    Chu, Ying; Mou, Xuanqin; Ji, Zhen

    2014-03-01

    State-of-the-art blind image quality assessment (IQA) methods generally extract perceptual features from training images and feed them into a support vector machine (SVM) to learn a regression model, which can then be used to predict the quality scores of testing images. However, these methods need complicated training and learning, and the evaluation results are sensitive to image contents and learning strategies. In this paper, two novel blind IQA metrics without training and learning are proposed. The new methods extract perceptual features, i.e., the shape consistency of conditional histograms, from the joint histograms of neighboring divisive normalization transform coefficients of distorted images, and then compare the length attribute of the extracted features with that of the reference and degraded images in the LIVE database. For the first method, a cluster center is found in the feature attribute space of the natural reference images, and the distance between the feature attribute of the distorted image and the cluster center is adopted as the quality label. The second method utilizes the feature attributes and subjective scores of all the images in the LIVE database to construct a dictionary, and the final quality score is calculated by interpolating the subjective scores of nearby words in the dictionary. Unlike traditional SVM-based blind IQA methods, the proposed metrics have explicit expressions, which reflect the relationships between the perceptual features and image quality well. Experimental results on publicly available databases such as LIVE, CSIQ, and TID2008 have shown the effectiveness of the proposed methods, and the performance is fairly acceptable.
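    Schematically, the second metric reduces to interpolation in a feature-attribute space: find the dictionary "words" nearest to the probe image's feature and blend their subjective scores. A sketch of that idea (the feature vectors and the inverse-distance weighting are illustrative assumptions, not the paper's exact formula):

```python
import math

def predict_score(feature, dictionary, k=3):
    """Interpolate the subjective scores of the k nearest (feature, score)
    'words' in the dictionary, weighted by inverse distance."""
    nearest = sorted(dictionary, key=lambda w: math.dist(feature, w[0]))[:k]
    weights = [1.0 / (math.dist(feature, f) + 1e-9) for f, _ in nearest]
    return sum(w * s for w, (_, s) in zip(weights, nearest)) / sum(weights)

# A probe whose feature coincides with a dictionary word essentially
# recovers that word's subjective score:
words = [((0.0, 0.0), 10.0), ((1.0, 0.0), 20.0), ((5.0, 5.0), 50.0)]
# predict_score((0.0, 0.0), words, k=2) is approximately 10.0
```

The first metric is simpler still: the quality label is just the distance from the probe's feature attribute to a precomputed cluster centre of natural-image features.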

  19. Quality Assessment Dimensions of Distance Teaching/Learning Curriculum Designing

    ERIC Educational Resources Information Center

    Volungeviciene, Airina; Tereseviciene, Margarita

    2008-01-01

    The paper presents scientific literature analysis in the area of distance teaching/learning curriculum designing and quality assessment. The aim of the paper is to identify quality assessment dimensions of distance teaching/learning curriculum designing. The authors of the paper agree that quality assessment should be considered during the…

  20. Performance assessment of an RFID system for automatic surgical sponge detection in a surgery room.

    PubMed

    Dinis, H; Zamith, M; Mendes, P M

    2015-01-01

A retained surgical instrument is a frequent incident in surgery rooms around the world, despite being considered an avoidable mistake. Hence, an automatic solution for detecting retained surgical instruments is desirable. In this paper, the use of millimeter waves in the 60 GHz band for surgical-material RFID is evaluated. An experimental procedure was performed to assess the suitability of this frequency range for short-distance communications with multiple obstacles. Furthermore, an antenna suitable for incorporation into surgical materials, such as sponges, is presented. The antenna's operating characteristics are evaluated to determine whether it is adequate for the studied application over the given frequency range and under different operating conditions, such as varying sponge water content. PMID:26736960

  1. Content-aware objective video quality assessment

    NASA Astrophysics Data System (ADS)

    Ortiz-Jaramillo, Benhur; Niño-Castañeda, Jorge; Platiša, Ljiljana; Philips, Wilfried

    2016-01-01

Since the end-user of video-based systems is often a human observer, prediction of user-perceived video quality (PVQ) is an important task for increasing user satisfaction. Despite the large variety of objective video quality measures (VQMs), their lack of generalizability remains a problem. This is mainly due to the strong dependency between PVQ and video content. Although this problem is well known, few existing VQMs directly account for the influence of video content on PVQ. Recently, we proposed a method to predict PVQ by introducing relevant video content features into the computation of video distortion measures. The method is based on analyzing the level of spatiotemporal activity in the video and using these activity levels as parameters of the anthropomorphic video distortion models. We focus on the experimental evaluation of the proposed methodology based on a total of five public databases, four different objective VQMs, and 105 content-related indexes. Additionally, relying on the proposed method, we introduce an approach for selecting the levels of video distortions for the purpose of subjective quality assessment studies. Our results suggest that when adequately combined with content-related indexes, even very simple distortion measures (e.g., peak signal to noise ratio) are able to achieve high performance, i.e., high correlation between the VQM and the PVQ. In particular, we have found that by incorporating video content features, it is possible to increase the performance of the VQM by up to 20% relative to its noncontent-aware baseline.
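The idea of parameterizing a simple distortion measure with a content index can be sketched as follows. PSNR and mean gradient magnitude (a common spatial-activity index) are standard; the combination rule and its weights `a`, `b` are hypothetical, not the authors' model:

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(dist, float)) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(peak ** 2 / mse))

def spatial_activity(frame):
    """Content index: mean gradient magnitude (spatial information)."""
    gy, gx = np.gradient(np.asarray(frame, float))
    return float(np.mean(np.hypot(gx, gy)))

def content_aware_score(ref, dist, a=1.0, b=0.1):
    """Hypothetical combination: modulate the distortion measure by the
    reference content's activity level."""
    return a * psnr(ref, dist) - b * spatial_activity(ref)

ref = np.zeros((8, 8))
score = content_aware_score(ref, ref + 10.0)  # flat content, uniform error
```

The point is that the same PSNR value maps to different perceived-quality predictions depending on the content's activity, which is what the content-aware approach exploits.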

  2. Image quality assessment in the low quality regime

    NASA Astrophysics Data System (ADS)

    Pinto, Guilherme O.; Hemami, Sheila S.

    2012-03-01

Traditionally, image quality estimators have been designed and optimized to operate over the entire quality range of images in a database, from very low quality to visually lossless. However, if quality estimation is limited to a smaller quality range, their performance drops dramatically, and many image applications only operate over such a smaller range. This paper is concerned with one such range, the low-quality regime (LQR), defined as the interval of perceived quality scores, at the low-quality end of image databases, over which perceived quality is linearly related to perceived utility. Using this definition, this paper describes a subjective experiment to determine the low-quality regime for databases of distorted images that include perceived quality scores but not perceived utility scores, such as CSIQ and LIVE. The performances of several image utility and quality estimators are evaluated in the low-quality regime, indicating that utility estimators can be successfully applied to estimate perceived quality in this regime. Omission of the lowest-frequency image content is shown to be crucial to the performance of both kinds of estimators. Additionally, this paper establishes an upper bound for the performance of quality estimators in the LQR, using a family of quality estimators based on VIF. The resulting optimal quality estimator indicates that estimating quality in the low-quality regime is robust to the exact frequency pooling weights, and that near-optimal performance can be achieved by a variety of estimators provided that they substantially emphasize the appropriate frequency content.

  3. 2003 SNL ASCI applications software quality engineering assessment report.

    SciTech Connect

    Schofield, Joseph Richard, Jr.; Ellis, Molly A.; Williamson, Charles Michael; Bonano, Lora A.

    2004-02-01

    This document describes the 2003 SNL ASCI Software Quality Engineering (SQE) assessment of twenty ASCI application code teams and the results of that assessment. The purpose of this assessment was to determine code team compliance with the Sandia National Laboratories ASCI Applications Software Quality Engineering Practices, Version 2.0 as part of an overall program assessment.

  4. X-ray absorptiometry of the breast using mammographic exposure factors: application to units featuring automatic beam quality selection.

    PubMed

    Kotre, C J

    2010-06-01

    A number of studies have identified the relationship between the visual appearance of high breast density at mammography and an increased risk of breast cancer. Approaches to quantify the amount of glandular tissue within the breast from mammography have so far concentrated on image-based methods. Here, it is proposed that the X-ray parameters automatically selected by the mammography unit can be used to estimate the thickness of glandular tissue overlying the automatic exposure sensor area, provided that the unit can be appropriately calibrated. This is a non-trivial task for modern mammography units that feature automatic beam quality selection, as the number of tube potential and X-ray target/filter combinations used to cover the range of breast sizes and compositions can be large, leading to a potentially unworkable number of curve fits and interpolations. Using appropriate models for the attenuation of the glandular breast in conjunction with a constrained set of physical phantom measurements, it is demonstrated that calibration for X-ray absorptiometry can be achieved despite the large number of possible exposure factor combinations employed by modern mammography units. The main source of error on the estimated glandular tissue thickness using this method is shown to be uncertainty in the measured compressed breast thickness. An additional correction for this source of error is investigated and applied. Initial surveys of glandular thickness for a cohort of women undergoing breast screening are presented. PMID:20505033

  5. Automatic Assessment of Acquisition and Transmission Losses in Indian Remote Sensing Satellite Data

    NASA Astrophysics Data System (ADS)

    Roy, D.; Purna Kumari, B.; Manju Sarma, M.; Aparna, N.; Gopal Krishna, B.

    2016-06-01

The quality of remote sensing data is an important parameter that defines the extent of its usability in various applications. The data from remote sensing satellites are received as raw data frames at the ground station. These data may be corrupted by losses due to interference during data transmission, data acquisition and sensor anomalies. It is therefore important to assess the quality of the raw data before product generation, for early anomaly detection, faster corrective action and minimized product rejection. Manual screening of raw images is time consuming and not very accurate. In this paper, an automated process for identifying and quantifying losses in raw data, such as pixel dropout, line loss and data loss due to sensor anomalies, is discussed. Quality assessment of raw scenes based on these losses is also explained. This process is introduced at the data pre-processing stage and gives users crucial data-quality information at the time of browsing data for product ordering. It has also improved the product generation workflow by enabling faster and more accurate quality estimation.
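A minimal sketch of line-loss detection in a raw frame, assuming lost lines arrive as rows of a constant fill value (the actual fill pattern and loss signatures are format-specific):

```python
import numpy as np

def detect_line_loss(frame, fill_value=0):
    """Indices of raw-image lines that are entirely the fill value."""
    return np.flatnonzero(np.all(frame == fill_value, axis=1))

def line_loss_percentage(frame, fill_value=0):
    """Share of lost lines, as a percentage of the frame height."""
    return 100.0 * len(detect_line_loss(frame, fill_value)) / frame.shape[0]

frame = np.ones((10, 12), dtype=np.uint8)
frame[[2, 7], :] = 0          # simulate two dropped lines
lost = detect_line_loss(frame)
```

A quality score for the scene could then be derived by thresholding such loss percentages per frame, which is the kind of quantification the pre-processing stage performs.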

  6. Quality assessment of Landsat surface reflectance products using MODIS data

    NASA Astrophysics Data System (ADS)

    Feng, Min; Huang, Chengquan; Channan, Saurabh; Vermote, Eric F.; Masek, Jeffrey G.; Townshend, John R.

    2012-01-01

    Surface reflectance adjusted for atmospheric effects is a primary input for land cover change detection and for developing many higher level surface geophysical parameters. With the development of automated atmospheric correction algorithms, it is now feasible to produce large quantities of surface reflectance products using Landsat images. Validation of these products requires in situ measurements, which either do not exist or are difficult to obtain for most Landsat images. The surface reflectance products derived using data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), however, have been validated more comprehensively. Because the MODIS on the Terra platform and the Landsat 7 are only half an hour apart following the same orbit, and each of the 6 Landsat spectral bands overlaps with a MODIS band, good agreements between MODIS and Landsat surface reflectance values can be considered indicators of the reliability of the Landsat products, while disagreements may suggest potential quality problems that need to be further investigated. Here we develop a system called Landsat-MODIS Consistency Checking System (LMCCS). This system automatically matches Landsat data with MODIS observations acquired on the same date over the same locations and uses them to calculate a set of agreement metrics. To maximize its portability, Java and open-source libraries were used in developing this system, and object-oriented programming (OOP) principles were followed to make it more flexible for future expansion. As a highly automated system designed to run as a stand-alone package or as a component of other Landsat data processing systems, this system can be used to assess the quality of essentially every Landsat surface reflectance image where spatially and temporally matching MODIS data are available. 
The effectiveness of this system was demonstrated by using it to assess preliminary surface reflectance products derived using the Global Land Survey (GLS) Landsat
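At its core, such a consistency check computes agreement metrics over matched reflectance samples. The specific metric set below (mean bias, RMSD, Pearson correlation) is a plausible assumption, not necessarily LMCCS's exact output:

```python
import numpy as np

def agreement_metrics(landsat, modis):
    """Agreement between spatially/temporally matched reflectance samples."""
    l, m = np.asarray(landsat, float), np.asarray(modis, float)
    diff = l - m
    return {
        "bias": float(diff.mean()),                  # systematic offset
        "rmsd": float(np.sqrt((diff ** 2).mean())),  # overall disagreement
        "r": float(np.corrcoef(l, m)[0, 1]),         # linear association
    }

# Toy matched samples for one overlapping spectral band
metrics = agreement_metrics([0.10, 0.20, 0.31], [0.11, 0.19, 0.30])
```

High correlation with low bias and RMSD would support the reliability of the Landsat product; a large bias in one band would flag it for further investigation.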

  7. Quality Assessment of Landsat Surface Reflectance Products Using MODIS Data

    NASA Technical Reports Server (NTRS)

    Feng, Min; Huang, Chengquan; Channan, Saurabh; Vermote, Eric; Masek, Jeffrey G.; Townshend, John R.

    2012-01-01

    Surface reflectance adjusted for atmospheric effects is a primary input for land cover change detection and for developing many higher level surface geophysical parameters. With the development of automated atmospheric correction algorithms, it is now feasible to produce large quantities of surface reflectance products using Landsat images. Validation of these products requires in situ measurements, which either do not exist or are difficult to obtain for most Landsat images. The surface reflectance products derived using data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), however, have been validated more comprehensively. Because the MODIS on the Terra platform and the Landsat 7 are only half an hour apart following the same orbit, and each of the 6 Landsat spectral bands overlaps with a MODIS band, good agreements between MODIS and Landsat surface reflectance values can be considered indicators of the reliability of the Landsat products, while disagreements may suggest potential quality problems that need to be further investigated. Here we develop a system called Landsat-MODIS Consistency Checking System (LMCCS). This system automatically matches Landsat data with MODIS observations acquired on the same date over the same locations and uses them to calculate a set of agreement metrics. To maximize its portability, Java and open-source libraries were used in developing this system, and object-oriented programming (OOP) principles were followed to make it more flexible for future expansion. As a highly automated system designed to run as a stand-alone package or as a component of other Landsat data processing systems, this system can be used to assess the quality of essentially every Landsat surface reflectance image where spatially and temporally matching MODIS data are available. 
The effectiveness of this system was demonstrated by using it to assess preliminary surface reflectance products derived using the Global Land Survey (GLS) Landsat

  8. Validation of the automatic image analyser to assess retinal vessel calibre (ALTAIR): a prospective study protocol

    PubMed Central

    Garcia-Ortiz, Luis; Gómez-Marcos, Manuel A; Recio-Rodríguez, Jose I; Maderuelo-Fernández, Jose A; Chamoso-Santos, Pablo; Rodríguez-González, Sara; de Paz-Santana, Juan F; Merchan-Cifuentes, Miguel A; Corchado-Rodríguez, Juan M

    2014-01-01

    Introduction The fundus examination is a non-invasive evaluation of the microcirculation of the retina. The aim of the present study is to develop and validate (reliability and validity) the ALTAIR software platform (Automatic image analyser to assess retinal vessel calibre) in order to analyse its utility in different clinical environments. Methods and analysis A cross-sectional study in the first phase and a prospective observational study in the second with 4 years of follow-up. The study will be performed in a primary care centre and will include 386 participants. The main measurements will include carotid intima-media thickness, pulse wave velocity by Sphygmocor, cardio-ankle vascular index through the VASERA VS-1500, cardiac evaluation by a digital ECG and renal injury by microalbuminuria and glomerular filtration. The retinal vascular evaluation will be performed using a TOPCON TRCNW200 non-mydriatic retinal camera to obtain digital images of the retina, and the developed software (ALTAIR) will be used to automatically calculate the calibre of the retinal vessels, the vascularised area and the branching pattern. For software validation, the intraobserver and interobserver reliability, the concurrent validity of the vascular structure and function, as well as the association between the estimated retinal parameters and the evolution or onset of new lesions in the target organs or cardiovascular diseases will be examined. Ethics and dissemination The study has been approved by the clinical research ethics committee of the healthcare area of Salamanca. All study participants will sign an informed consent to agree to participate in the study in compliance with the Declaration of Helsinki and the WHO standards for observational studies. 
Validation of this tool will provide greater reliability to the analysis of retinal vessels by decreasing the intervention of the observer and will result in increased validity through the use of additional information, especially

  9. Fully automatic measuring system for assessing masticatory performance using β-carotene-containing gummy jelly.

    PubMed

    Nokubi, T; Yasui, S; Yoshimuta, Y; Kida, M; Kusunoki, C; Ono, T; Maeda, Y; Nokubi, F; Yokota, K; Yamamoto, T

    2013-02-01

Despite the importance of masticatory performance in health promotion, assessment of masticatory performance has not been widely conducted to date because the methods are labour intensive. The purpose of this study is to investigate the accuracy of a novel system for automatically measuring masticatory performance that uses β-carotene-containing gummy jelly. To investigate the influence of rinsing time on comminuted jelly pieces expectorated from the oral cavity, divided jelly pieces were treated with two types of dye solution and then rinsed for various durations. Changes in photodiode (light receiver) voltages from light emitted through a solution of dissolved β-carotene from jelly pieces under each condition were compared with those of unstained jelly. To investigate the influence of dissolving time, changes in light receiver voltage resulting from an increase in division number were compared between three dissolving times. For all forms of divided test jelly and rinsing times, no significant differences in light receiver voltage were observed between any of the stain groups and the control group. Voltages decreased in a similar manner for all forms of divided jelly as dissolving time increased. The highest coefficient of determination (R² = 0.979) between the obtained voltage and the increased surface area of each divided jelly was seen at the 10 s dissolving time. These results suggested that our fully automatic system can estimate the increased surface area of comminuted gummy jelly as a parameter of masticatory performance with high accuracy after rinsing and dissolving operations of 10 s each. PMID:22882741
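The reported R² quantifies how well light-receiver voltage tracks the comminuted jelly's increased surface area. A coefficient-of-determination helper, applied here to hypothetical voltage readings against a linear surface-area model:

```python
def r_squared(y, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_tot = sum((v - mean_y) ** 2 for v in y)
    ss_res = sum((v - p) ** 2 for v, p in zip(y, y_pred))
    return 1.0 - ss_res / ss_tot

# Hypothetical light-receiver voltages vs. a linear surface-area model
voltages = [4.8, 4.1, 3.5, 2.9, 2.2]
predicted = [4.75, 4.15, 3.45, 2.9, 2.25]
r2 = r_squared(voltages, predicted)
```

An R² near 1, as the study found at the 10 s dissolving time, means almost all of the voltage variation is explained by the change in surface area.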

  10. On the dependence of information display quality requirements upon human characteristics and pilot/automatics relations

    NASA Technical Reports Server (NTRS)

    Wilckens, V.

    1972-01-01

    Present information display concepts for pilot landing guidance are outlined considering manual control as well as substitution of man by fully competent automatics. Display improvements are achieved by compressing the distributed indicators into an accumulative display and thus reducing information scanning. Complete integration of quantitative indications, outer loop information, and real world display in a pictorial information channel geometry constitutes an interface with human ability to differentiate and integrate for optimal manual control of the aircraft.

  11. Groundwater quality data from the National Water-Quality Assessment Project, May 2012 through December 2013

    USGS Publications Warehouse

    Arnold, Terri L.; Desimone, Leslie A.; Bexfield, Laura M.; Lindsey, Bruce D.; Barlow, Jeannie R.; Kulongoski, Justin T.; Musgrove, Marylynn; Kingsbury, James A.; Belitz, Kenneth

    2016-01-01

    Groundwater-quality data were collected from 748 wells as part of the National Water-Quality Assessment Project of the U.S. Geological Survey National Water-Quality Program from May 2012 through December 2013. The data were collected from four types of well networks: principal aquifer study networks, which assess the quality of groundwater used for public water supply; land-use study networks, which assess land-use effects on shallow groundwater quality; major aquifer study networks, which assess the quality of groundwater used for domestic supply; and enhanced trends networks, which evaluate the time scales during which groundwater quality changes. Groundwater samples were analyzed for a large number of water-quality indicators and constituents, including major ions, nutrients, trace elements, volatile organic compounds, pesticides, and radionuclides. These groundwater quality data are tabulated in this report. Quality-control samples also were collected; data from blank and replicate quality-control samples are included in this report.

  12. Quality assessment of clinical computed tomography

    NASA Astrophysics Data System (ADS)

    Berndt, Dorothea; Luckow, Marlen; Lambrecht, J. Thomas; Beckmann, Felix; Müller, Bert

    2008-08-01

Three-dimensional images are vital for the diagnosis in dentistry and cranio-maxillofacial surgery. Artifacts caused by highly absorbing components such as metallic implants, however, limit the value of the tomograms. The dominant artifacts observed are blowout and streaks. By investigating the artifacts generated by metallic implants in a pig jaw, the data acquisition for dental patients can be optimized in a quantitative manner. A freshly explanted pig jaw including related soft tissues served as a model system. Images were recorded varying the accelerating voltage and the beam current. The comparison with multi-slice and micro computed tomography (CT) helps to validate the approach with the dental CT system (3D-Accuitomo, Morita, Japan). The data are rigidly registered to comparatively quantify their quality. The micro CT data provide a reasonable standard for quantitative data assessment of clinical CT.

  13. Set Up of an Automatic Water Quality Sampling System in Irrigation Agriculture

    PubMed Central

    Heinz, Emanuel; Kraft, Philipp; Buchen, Caroline; Frede, Hans-Georg; Aquino, Eugenio; Breuer, Lutz

    2014-01-01

We have developed a high-resolution automatic sampling system for continuous in situ measurements of stable water isotopic composition and nitrogen solutes along with hydrological information. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring Down Spectrometry System (WS-CRDS) for stable water isotope analysis (δ²H and δ¹⁸O), a reagentless hyperspectral UV photometer (ProPS) for monitoring nitrate content and various water level sensors for hydrometric information. The automatic sampling system consists of different sampling stations equipped with pumps, a switch cabinet for valve and pump control and a computer operating the system. The complete system is operated via internet-based control software, allowing supervision from nearly anywhere. The system is currently set up at the International Rice Research Institute (Los Baños, The Philippines) in a diversified rice growing system to continuously monitor water and nutrient fluxes. Here we present the system's technical set-up and provide initial proof-of-concept with results for the isotopic composition of different water sources and nitrate values from the 2012 dry season. PMID:24366178

  14. Set up of an automatic water quality sampling system in irrigation agriculture.

    PubMed

    Heinz, Emanuel; Kraft, Philipp; Buchen, Caroline; Frede, Hans-Georg; Aquino, Eugenio; Breuer, Lutz

    2013-01-01

We have developed a high-resolution automatic sampling system for continuous in situ measurements of stable water isotopic composition and nitrogen solutes along with hydrological information. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring Down Spectrometry System (WS-CRDS) for stable water isotope analysis (δ²H and δ¹⁸O), a reagentless hyperspectral UV photometer (ProPS) for monitoring nitrate content and various water level sensors for hydrometric information. The automatic sampling system consists of different sampling stations equipped with pumps, a switch cabinet for valve and pump control and a computer operating the system. The complete system is operated via internet-based control software, allowing supervision from nearly anywhere. The system is currently set up at the International Rice Research Institute (Los Baños, The Philippines) in a diversified rice growing system to continuously monitor water and nutrient fluxes. Here we present the system's technical set-up and provide initial proof-of-concept with results for the isotopic composition of different water sources and nitrate values from the 2012 dry season. PMID:24366178

  15. Using Automatic Item Generation to Meet the Increasing Item Demands of High-Stakes Educational and Occupational Assessment

    ERIC Educational Resources Information Center

    Arendasy, Martin E.; Sommer, Markus

    2012-01-01

    The use of new test administration technologies such as computerized adaptive testing in high-stakes educational and occupational assessments demands large item pools. Classic item construction processes and previous approaches to automatic item generation faced the problems of a considerable loss of items after the item calibration phase. In this…

  16. Examining the Importance of Assessing Rapid Automatized Naming (RAN) for the Identification of Children with Reading Difficulties

    ERIC Educational Resources Information Center

    Georgiou, George K.; Parrila, Rauno; Manolitsis, George; Kirby, John R.

    2011-01-01

    The purpose of this study was to assess the diagnostic value of rapid automatized naming (RAN) in the identification of poor readers in two alphabetic orthographies: English and Greek. Ninety-seven English-speaking Canadian (mean age = 66.70 months) and 70 Greek children (mean age = 67.60 months) were followed from Kindergarten until Grade 3. In…

  17. Assessing the Effects of Automatically Delivered Stimulation on the Use of Simple Exercise Tools by Students with Multiple Disabilities.

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Oliva, Doretta; Campodonico, Francesca; Groeneweg, Jop

    2003-01-01

    This study assessed the effects of automatically delivered stimulation on the activity level and mood of three students with multiple disabilities during their use of a stepper and a stationary bicycle. Stimuli from a pool of favorite stimulus events were delivered electronically while students were actively exercising. Findings indicated the…

  18. Comprehensive automatic assessment of retinal vascular abnormalities for computer-assisted retinopathy grading.

    PubMed

    Joshi, Vinayak; Agurto, Carla; VanNess, Richard; Nemeth, Sheila; Soliz, Peter; Barriga, Simon

    2014-01-01

One of the most important signs of systemic disease presenting on the retina is vascular abnormality, as in hypertensive retinopathy. Manual analysis of fundus images by human readers is qualitative and lacks accuracy, consistency and repeatability. Present semi-automatic methods for vascular evaluation are reported to increase accuracy and reduce reader variability, but require extensive reader interaction, thus limiting software-aided efficiency. Automation thus holds a twofold promise: first, to decrease variability while increasing accuracy, and second, to increase efficiency. In this paper we propose fully automated software, a second-reader system for comprehensive assessment of the retinal vasculature, which aids readers in the quantitative characterization of vessel abnormalities in fundus images. This system provides the reader with objective measures of vascular morphology, such as tortuosity and branching angles, and highlights areas with abnormalities, such as artery-venous nicking, copper and silver wiring, and retinal emboli, so that the reader can make a final screening decision. To test the efficacy of our system, we evaluated the change in performance of a newly certified retinal reader when grading a set of 40 color fundus images with and without the assistance of the software. The results demonstrated an improvement in the reader's performance with software assistance, in terms of accuracy of detection of vessel abnormalities, determination of retinopathy, and reading time. This system enables the reader to make computer-assisted vasculature assessments with high accuracy and consistency, at a reduced reading time. PMID:25571442
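One of the listed morphology measures, tortuosity, is commonly defined as centerline arc length divided by chord length. The sketch below uses that common definition, which may differ from the paper's exact measure:

```python
import numpy as np

def tortuosity(centerline):
    """Arc length over chord length of a vessel centerline:
    1.0 for a straight vessel, larger for a more tortuous one."""
    pts = np.asarray(centerline, float)
    segments = np.diff(pts, axis=0)
    arc = float(np.sum(np.linalg.norm(segments, axis=1)))
    chord = float(np.linalg.norm(pts[-1] - pts[0]))
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0)]
bent = [(0, 0), (1, 1), (2, 0)]       # detour through (1, 1)
```

An automated system would compute such an index along each traced vessel and flag values above a normative threshold for the reader.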

  19. Automatic coronary lumen segmentation with partial volume modeling improves lesions' hemodynamic significance assessment

    NASA Astrophysics Data System (ADS)

    Freiman, M.; Lamash, Y.; Gilboa, G.; Nickisch, H.; Prevrhal, S.; Schmitt, H.; Vembar, M.; Goshen, L.

    2016-03-01

The determination of hemodynamic significance of coronary artery lesions from cardiac computed tomography angiography (CCTA) based on blood flow simulations has the potential to improve CCTA's specificity, thus resulting in improved clinical decision making. Accurate coronary lumen segmentation required for flow simulation is challenging due to several factors. Specifically, the partial-volume effect (PVE) in small-diameter lumina may result in overestimation of the lumen diameter that can lead to an erroneous hemodynamic significance assessment. In this work, we present a coronary artery segmentation algorithm tailored specifically for flow simulations by accounting for the PVE. Our algorithm detects lumen regions that may be subject to the PVE by analyzing the intensity values along the coronary centerline and integrates this information into a machine-learning based graph min-cut segmentation framework to obtain accurate coronary lumen segmentations. We demonstrate the improvement in hemodynamic significance assessment achieved by accounting for the PVE in the automatic segmentation of 91 coronary artery lesions from 85 patients. We compare hemodynamic significance assessments by means of fractional flow reserve (FFR) resulting from simulations on 3D models generated by our segmentation algorithm with and without accounting for the PVE. By accounting for the PVE we improved the area under the ROC curve for detecting hemodynamically significant CAD by 29% (N=91, 0.85 vs. 0.66, p<0.05, DeLong's test) with invasive FFR threshold of 0.8 as the reference standard. Our algorithm has the potential to facilitate non-invasive hemodynamic significance assessment of coronary lesions.
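The headline result is an ROC AUC comparison against invasive FFR (threshold 0.8) as the reference standard. AUC has a simple rank-based (Mann-Whitney) form, sketched here on toy scores; the data are illustrative, not the study's:

```python
def roc_auc(scores, labels):
    """Rank-based AUC: the probability that a randomly chosen positive case
    (hemodynamically significant lesion) scores higher than a randomly
    chosen negative one, with ties counted as half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy lesion scores; labels would come from invasive FFR <= 0.8
perfect = roc_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

An AUC of 0.85 vs. 0.66, as reported, means the PVE-aware segmentation ranks significant lesions above non-significant ones considerably more often.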

  20. 42 CFR 493.1289 - Standard: Analytic systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Analytic systems quality assessment. 493... HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing Analytic Systems § 493.1289 Standard: Analytic systems quality assessment. (a)...

  1. 42 CFR 493.1249 - Standard: Preanalytic systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Preanalytic systems quality assessment... AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing Preanalytic Systems § 493.1249 Standard: Preanalytic systems quality assessment. (a)...

  2. 42 CFR 493.1299 - Standard: Postanalytic systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Postanalytic systems quality assessment... AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing Postanalytic Systems § 493.1299 Standard: Postanalytic systems quality assessment. (a)...

  3. Video quality assessment for web content mirroring

    NASA Astrophysics Data System (ADS)

    He, Ye; Fei, Kevin; Fernandez, Gustavo A.; Delp, Edward J.

    2014-03-01

Due to rising user expectations for the viewing experience, moving high-quality web video streaming content from the small screens of mobile devices to the larger TV screen has become popular. It is crucial to develop video quality metrics that measure the quality change across devices and network conditions. In this paper, we propose an automated scoring system to quantify user satisfaction. We compare the quality of local videos with that of the videos transmitted to a TV. Four video quality metrics, namely Image Quality, Rendering Quality, Freeze Time Ratio and Rate of Freeze Events, are used to measure video quality change during web content mirroring. To measure image quality and rendering quality, we compare the matched frames between the source video and the destination video using barcode tools. Freeze time ratio and rate of freeze events are measured after extracting video timestamps. Several user studies were conducted to evaluate the impact of each objective video quality metric on the subjective user watching experience.
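Freeze Time Ratio and Rate of Freeze Events can be derived directly from the extracted frame timestamps. In the sketch below, the gap threshold separating a freeze from a normal inter-frame interval is an assumed parameter, not a value from the paper:

```python
def freeze_metrics(timestamps, freeze_threshold=0.1):
    """From frame timestamps (seconds), compute:
    - Freeze Time Ratio: frozen time / total playback time
    - Rate of Freeze Events: freeze events per second.
    A gap longer than freeze_threshold counts as a freeze."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    total = timestamps[-1] - timestamps[0]
    frozen = sum(g for g in gaps if g > freeze_threshold)
    events = sum(1 for g in gaps if g > freeze_threshold)
    return frozen / total, events / total

# 25 fps playback with one 0.5 s stall between the 3rd and 4th frames
ftr, rfe = freeze_metrics([0.0, 0.04, 0.08, 0.58, 0.62])
```

Image Quality and Rendering Quality, by contrast, need the barcode-based frame matching described above, since they compare pixel content between matched source and destination frames.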

  4. Beef quality assessed at European research centres.

    PubMed

    Dransfield, E; Nute, G R; Roberts, T A; Boccard, R; Touraille, C; Buchter, L; Casteels, M; Cosentino, E; Hood, D E; Joseph, R L; Schon, I; Paardekooper, E J

    1984-01-01

Loin steaks and cubes of M. semimembranosus from eight (12 month old) Galloway steers and eight (16-18 month old) Charolais cross steers raised in England, and from which the meat was conditioned for 2 or 10 days, were assessed in research centres in Belgium, Denmark, England, France, the Federal Republic of Germany, Ireland, Italy and the Netherlands. Laboratory panels assessed meat by grilling the steaks and cooking the cubes in casseroles according to local custom, using scales developed locally and scales used frequently at other research centres. The meat was mostly of good quality but with sufficient variation to obtain meaningful comparisons. Tenderness and juiciness were assessed most consistently, and flavour least. Over the 32 meats, acceptability of steaks and casseroles was in general compounded from tenderness, juiciness and flavour. However, when the meat was tough, toughness dominated the overall judgement; when tender, flavour played an important rôle. Irish and English panels tended to place more weight on flavour, and Italian panels on tenderness and juiciness. Juiciness and tenderness were well correlated among all panels except in Italy and Germany. With flavour, however, Belgian, Irish, German and Dutch panels ranked the meats similarly and formed a group distinct from the others, which did not. The panels showed a similar grouping for judgements of acceptability. French and Belgian panels judged the steaks from the older Charolais cross steers to have more flavour and be more juicy than average, and tended to prefer them. Casseroles from younger steers were invariably preferred, although the French and Belgian panels judged aged meat from older animals equally acceptable. These regional biases were thought to derive mainly from differences in cooking, but variations in the experience and perception of assessors also contributed. PMID:22055992

  5. Assessing Negative Automatic Thoughts: Psychometric Properties of the Turkish Version of the Cognition Checklist

    PubMed Central

    Batmaz, Sedat; Ahmet Yuncu, Ozgur; Kocbiyik, Sibel

    2015-01-01

    Background: Beck’s theory of emotional disorder suggests that negative automatic thoughts (NATs) and the underlying schemata affect one’s way of interpreting situations and result in maladaptive coping strategies. Depending on their content and meaning, NATs are associated with specific emotions, and since they are usually quite brief, patients are often more aware of the emotion they feel. This relationship between cognition and emotion, therefore, is thought to form the background of the cognitive content specificity hypothesis. Researchers focusing on this hypothesis have suggested that instruments like the cognition checklist (CCL) might be an alternative to make a diagnostic distinction between depression and anxiety. Objectives: The aim of the present study was to assess the psychometric properties of the Turkish version of the CCL in a psychiatric outpatient sample. Patients and Methods: A total of 425 psychiatric outpatients 18 years of age and older were recruited. After a structured diagnostic interview, the participants completed the hospital anxiety depression scale (HADS), the automatic thoughts questionnaire (ATQ), and the CCL. An exploratory factor analysis was performed, followed by an oblique rotation. The internal consistency, test-retest reliability, and concurrent and discriminant validity analyses were undertaken. Results: The internal consistency of the CCL was excellent (Cronbach’s α = 0.95). The test-retest correlation coefficients were satisfactory (r = 0.80, P < 0.001 for CCL-D, and r = 0.79, P < 0.001 for CCL-A). The exploratory factor analysis revealed that a two-factor solution best fit the data. This bidimensional factor structure explained 51.27 % of the variance of the scale. The first factor consisted of items related to anxious cognitions, and the second factor of depressive cognitions. The CCL subscales significantly correlated with the ATQ (rs 0.44 for the CCL-D, and 0.32 for the CCL-A) as well as the other measures of

  6. QUALITY: A program to assess basis set quality

    NASA Astrophysics Data System (ADS)

    Sordo, J. A.

    1998-09-01

    A program to analyze in detail the quality of basis sets is presented. The information provided by the application of a wide variety of (atomic and/or molecular) quality criteria is processed by using a methodology that allows one to determine the most appropriate quality test to select a basis set to compute a given (atomic or molecular) property. Fuzzy set theory is used to choose the most adequate basis set to compute simultaneously a set of properties.

  7. The Impact of Quality Assessment in Universities: Portuguese Students' Perceptions

    ERIC Educational Resources Information Center

    Cardoso, Sonia; Santiago, Rui; Sarrico, Claudia S.

    2012-01-01

    Despite being one of the major reasons for the development of quality assessment, students seem relatively unaware of its potential impact. Since one of the main purposes of assessment is to provide students with information on the quality of universities, this lack of awareness brings in to question the effectiveness of assessment as a device for…

  8. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis

    PubMed Central

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text “The North Wind and the Sun” were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis. PMID:26136813

  9. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis.

    PubMed

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text "The North Wind and the Sun" were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis. PMID:26136813
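The modelling step described in both records above can be sketched in a few lines: Support Vector Regression maps acoustic/prosodic features onto perceptual ratings, and human-machine agreement is measured by correlation. The 58 samples and 7 features mirror the study's setup, but the data below are synthetic stand-ins, not the study's measurements:

```python
# Hedged sketch of SVR-based modelling of listener ratings.
# Feature values and "ratings" are randomly generated for illustration.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(58, 7))                     # e.g. 6 prosodic features + CFx
true_w = rng.normal(size=7)
y = X @ true_w + rng.normal(scale=0.3, size=58)  # simulated listener ratings

model = SVR(kernel="rbf", C=1.0).fit(X, y)       # fit regression model
pred = model.predict(X)
r = np.corrcoef(y, pred)[0, 1]                   # human-machine correlation
```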

  10. Automatic Roof Plane Detection and Analysis in Airborne Lidar Point Clouds for Solar Potential Assessment

    PubMed Central

    Jochem, Andreas; Höfle, Bernhard; Rutzinger, Martin; Pfeifer, Norbert

    2009-01-01

    A relative height threshold is defined to separate potential roof points from the point cloud, followed by a segmentation of these points into homogeneous areas fulfilling the defined constraints of roof planes. The normal vector of each laser point is an excellent feature to decompose the point cloud into segments describing planar patches. An object-based error assessment is performed to determine the accuracy of the presented classification. It results in 94.4% completeness and 88.4% correctness. Once all roof planes are detected in the 3D point cloud, solar potential analysis is performed for each point. Shadowing effects of nearby objects are taken into account by calculating the horizon of each point within the point cloud. Effects of cloud cover are also considered by using data from a nearby meteorological station. As a result the annual sum of the direct and diffuse radiation for each roof plane is derived. The presented method uses the full 3D information for both feature extraction and solar potential analysis, which offers a number of new applications in fields where natural processes are influenced by the incoming solar radiation (e.g., evapotranspiration, distribution of permafrost). The presented method detected fully automatically a subset of 809 out of 1,071 roof planes where the arithmetic mean of the annual incoming solar radiation is more than 700 kWh/m2. PMID:22346695
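The object-based completeness and correctness figures quoted above follow the usual definitions: the share of reference objects that were detected, and the share of detections that are genuine. A minimal sketch, assuming planes are matched by ID (a real assessment would match by spatial overlap):

```python
# Illustrative completeness/correctness computation over sets of
# object IDs. The IDs below are hypothetical, not the study's data.

def completeness_correctness(detected, reference):
    """Completeness = matched reference objects / all reference objects;
    correctness  = matched detections / all detections."""
    matched = detected & reference
    return len(matched) / len(reference), len(matched) / len(detected)

detected = {1, 2, 3, 4, 5, 6, 7, 8}   # hypothetical detected roof planes
reference = {1, 2, 3, 4, 5, 6, 9}     # hypothetical reference roof planes
comp, corr = completeness_correctness(detected, reference)
```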

  11. Improving Automatic English Writing Assessment Using Regression Trees and Error-Weighting

    NASA Astrophysics Data System (ADS)

    Lee, Kong-Joo; Kim, Jee-Eun

The proposed automated scoring system for English writing tests provides an assessment result, including a score and diagnostic feedback, to test-takers without human effort. The system analyzes an input sentence and detects errors related to spelling, syntax and content similarity. The scoring model adopts a statistical approach, a regression tree. A scoring model in general calculates a score based on the count and types of automatically detected errors. Accordingly, a system with higher accuracy in detecting errors scores a test more accurately. The accuracy of the system, however, cannot be fully guaranteed, for reasons such as parsing failure, incompleteness of knowledge bases, and the ambiguous nature of natural language. In this paper, we introduce an error-weighting technique similar to the term weighting widely used in information retrieval. The error-weighting technique is applied to judge the reliability of the errors detected by the system. The score calculated with the technique proves to be more accurate than the score without it.
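The analogy to term weighting can be sketched as an IDF-style scheme: error types that the detector flags very frequently are treated as less reliable and down-weighted. The function names and the weighting formula below are illustrative assumptions, not the paper's actual method:

```python
# Hypothetical error-weighting sketch, by analogy with IDF term
# weighting: rarer (presumably more reliable) error types get higher
# weight, so noisy detections count less against the score.
import math

def error_weights(error_doc_freq, n_sentences):
    """IDF-style weight per error type from its detection frequency."""
    return {err: math.log(n_sentences / (1 + df))
            for err, df in error_doc_freq.items()}

def weighted_penalty(detected_errors, weights):
    """Sum of weights for the errors detected in one response."""
    return sum(weights.get(err, 0.0) for err in detected_errors)

# "agreement" is flagged in 40 of 100 sentences, so it weighs less
weights = error_weights({"spelling": 5, "agreement": 40}, n_sentences=100)
penalty = weighted_penalty(["spelling", "agreement"], weights)
```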

  12. Food quality assessment by NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Whitworth, Martin B.; Millar, Samuel J.; Chau, Astor

    2010-04-01

    Near infrared reflectance (NIR) spectroscopy is well established in the food industry for rapid compositional analysis of bulk samples. NIR hyperspectral imaging provides new opportunities to measure the spatial distribution of components such as moisture and fat, and to identify and measure specific regions of composite samples. An NIR hyperspectral imaging system has been constructed for food research applications, incorporating a SWIR camera with a cooled 14 bit HgCdTe detector and N25E spectrograph (Specim Ltd, Finland). Samples are scanned in a pushbroom mode using a motorised stage. The system has a spectral resolution of 256 pixels covering a range of 970-2500 nm and a spatial resolution of 320 pixels covering a swathe adjustable from 8 to 300 mm. Images are acquired at a rate of up to 100 lines s-1, enabling samples to be scanned within a few seconds. Data are captured using SpectralCube software (Specim) and analysed using ENVI and IDL (ITT Visual Information Solutions). Several food applications are presented. The strength of individual absorbance bands enables the distribution of particular components to be assessed. Examples are shown for detection of added gluten in wheat flour and to study the effect of processing conditions on fat distribution in chips/French fries. More detailed quantitative calibrations have been developed to study evolution of the moisture distribution in baguettes during storage at different humidities, to assess freshness of fish using measurements of whole cod and fillets, and for prediction of beef quality by identification and separate measurement of lean and fat regions.

  13. Federal Workforce Quality: Measurement and Improvement. Report of the Advisory Committee on Federal Workforce Quality Assessment.

    ERIC Educational Resources Information Center

    Office of Personnel Management, Washington, DC.

    The Advisory Committee on Federal Workforce Quality Assessment was chartered to examine various work force quality assessment efforts in the federal government and provide advice on their adequacy and suggestions on their improvement or expansion. Objective data in recent research suggested that a universal decline in work force quality might not…

  14. Quality assessment for spectral domain optical coherence tomography (OCT) images

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Paranjape, Amit S.; Elmaanaoui, Badr; Dewelle, Jordan; Rylander, H. Grady, III; Markey, Mia K.; Milner, Thomas E.

    2009-02-01

    Retinal nerve fiber layer (RNFL) thickness, a measure of glaucoma progression, can be measured in images acquired by spectral domain optical coherence tomography (OCT). The accuracy of RNFL thickness estimation, however, is affected by the quality of the OCT images. In this paper, a new parameter, signal deviation (SD), which is based on the standard deviation of the intensities in OCT images, is introduced for objective assessment of OCT image quality. Two other objective assessment parameters, signal to noise ratio (SNR) and signal strength (SS), are also calculated for each OCT image. The results of the objective assessment are compared with subjective assessment. In the subjective assessment, one OCT expert graded the image quality according to a three-level scale (good, fair, and poor). The OCT B-scan images of the retina from six subjects are evaluated by both objective and subjective assessment. From the comparison, we demonstrate that the objective assessment successfully differentiates between the acceptable quality images (good and fair images) and poor quality OCT images as graded by OCT experts. We evaluate the performance of the objective assessment under different quality assessment parameters and demonstrate that SD is the best at distinguishing between fair and good quality images. The accuracy of RNFL thickness estimation is improved significantly after poor quality OCT images are rejected by automated objective assessment using the SD, SNR, and SS.
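The three objective parameters named above (SD, SNR, SS) can be illustrated on a B-scan stored as a 2-D intensity array. The formulas below are common textbook definitions used as assumptions; the paper's exact formulations are not reproduced here:

```python
# Illustrative quality parameters for a synthetic OCT B-scan.
# The background-strip noise estimate and the SS proxy are assumptions.
import numpy as np

def quality_parameters(img):
    img = np.asarray(img, dtype=float)
    sd = img.std()                           # signal deviation (SD)
    noise = img[:8, :].std() or 1e-12        # noise from an assumed background strip
    snr = 20 * np.log10(img.max() / noise)   # signal-to-noise ratio in dB
    ss = img.mean()                          # signal strength (SS) proxy
    return sd, snr, ss

rng = np.random.default_rng(1)
scan = rng.normal(100.0, 5.0, size=(256, 512))
scan[100:120, :] += 80.0                     # a bright retinal layer
sd, snr, ss = quality_parameters(scan)
```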

  15. Assessing soil quality in organic agriculture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil quality is directly linked to food production, food security, and environmental quality (i.e. water quality, global warming, and energy use in food production). Unfortunately, moderate to severe degeneration of soils (i.e., loss of soil biodiversity, poor soil tilth, and unbalanced elemental c...

  16. Set Up of an Automatic Water Quality Sampling System in Irrigation Agriculture

    NASA Astrophysics Data System (ADS)

    Heinz, Emanuel; Kraft, Philipp; Buchen, Caroline; Frede, Hans-Georg; Aquino, Eugenio; Breuer, Lutz

    2014-05-01

Climate change already has a large impact on the availability of water resources. Many regions in South-East Asia are expected to receive less water in the future, dramatically impacting the production of the most important staple food: rice (Oryza sativa L.). Rice is the primary food source for nearly half of the world's population, and is the only cereal that can grow under wetland conditions. Anaerobic (flooded) rice fields in particular require large amounts of water but also have higher yields than aerobically produced rice. In the past, different methods were developed to reduce water use in rice paddies, such as alternate wetting and drying or the use of mixed cropping systems with aerobic (non-flooded) rice and alternative crops such as maize. A more detailed understanding of water and nutrient cycling in rice-based cropping systems is needed to reduce water use, and requires the investigation of hydrological and biochemical processes as well as transport dynamics at the field scale. New developments in analytical devices permit monitoring of parameters at high temporal resolution and at acceptable cost, without much maintenance or analysis effort over longer periods. Here we present a new type of automatic sampling set-up that facilitates in situ analysis of hydrometric information, stable water isotopes and nitrate concentrations in spatially differentiated agricultural fields. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring-Down Spectrometry (WS-CRDS) system for stable water isotope analysis (δ2H and δ18O), a reagentless hyperspectral UV photometer for monitoring nitrate content and various water level sensors for hydrometric information. The whole system is maintained with specially developed software for remote control of the system via the internet. We

  17. Informatics: essential infrastructure for quality assessment and improvement in nursing.

    PubMed Central

    Henry, S B

    1995-01-01

    In recent decades there have been major advances in the creation and implementation of information technologies and in the development of measures of health care quality. The premise of this article is that informatics provides essential infrastructure for quality assessment and improvement in nursing. In this context, the term quality assessment and improvement comprises both short-term processes such as continuous quality improvement (CQI) and long-term outcomes management. This premise is supported by 1) presentation of a historical perspective on quality assessment and improvement; 2) delineation of the types of data required for quality assessment and improvement; and 3) description of the current and potential uses of information technology in the acquisition, storage, transformation, and presentation of quality data, information, and knowledge. PMID:7614118

  18. Stereoscopic image quality assessment using disparity-compensated view filtering

    NASA Astrophysics Data System (ADS)

    Song, Yang; Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju

    2016-03-01

Stereoscopic image quality assessment (IQA) plays a vital role in stereoscopic image/video processing systems. We propose a new quality assessment method for stereoscopic images that uses disparity-compensated view filtering (DCVF). First, because a stereoscopic image is composed of different frequency components, DCVF is designed to decompose it into high-pass and low-pass components. Then, the qualities of the different frequency components are acquired according to their phase congruency and coefficient distribution characteristics. Finally, support vector regression is utilized to establish a mapping model between the component qualities and subjective qualities, and stereoscopic image quality is calculated using this mapping model. Experiments on the LIVE 3-D IQA and NBU 3-D IQA databases demonstrate that the proposed method can evaluate stereoscopic image quality accurately. Compared with several state-of-the-art quality assessment methods, the proposed method is more consistent with human perception.

  19. In Search of Quality Criteria in Peer Assessment Practices

    ERIC Educational Resources Information Center

    Ploegh, Karin; Tillema, Harm H.; Segers, Mien S. R.

    2009-01-01

    With the increasing popularity of peer assessment as an assessment tool, questions may arise about its measurement quality. Among such questions, the extent peer assessment practices adhere to standards of measurement. It has been claimed that new forms of assessment, require new criteria to judge their validity and reliability, since they aim for…

  20. Recent advances in soil quality assessment in the United States

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil quality is a concept that is useful as an educational and assessment tool. A number of assessment tools have been developed including: the Soil Conditioning Index (SCI), the Soil Management Assessment Framework (SMAF), the AgroEcosystem Performance Assessment Tool (AEPAT), and the new Cornell “...

  1. Research Quality Assessment in Education: Impossible Science, Possible Art?

    ERIC Educational Resources Information Center

    Bridges, David

    2009-01-01

    For better or for worse, the assessment of research quality is one of the primary drivers of the behaviour of the academic community with all sorts of potential for distorting that behaviour. So, if you are going to assess research quality, how do you do it? This article explores some of the problems and possibilities, with particular reference to…

  2. Educational Quality Assessment: Manual for Interpreting School Reports.

    ERIC Educational Resources Information Center

    Pennsylvania State Dept. of Education, Harrisburg. Bureau of Educational Quality Assessment.

    The results of the Pennsylvania Educational Quality Assessment program, Phase II, are interpreted. The first section of the manual presents a statement of each of the Ten Goals of Quality Education which served as the basis of the assessment. Also included are the key items on the questionnaires administered to 5th and 11th grade students. The…

  3. Quality Assessment of Internationalised Studies: Theory and Practice

    ERIC Educational Resources Information Center

    Juknyte-Petreikiene, Inga

    2013-01-01

    The article reviews forms of higher education internationalisation at an institutional level. The relevance of theoretical background of internationalised study quality assessment is highlighted and definitions of internationalised studies quality are presented. Existing methods of assessment of higher education internationalisation are criticised…

  4. Different Academics' Characteristics, Different Perceptions on Quality Assessment?

    ERIC Educational Resources Information Center

    Cardoso, Sonia; Rosa, Maria Joao; Santos, Cristina S.

    2013-01-01

    Purpose: The purpose of this paper is to explore Portuguese academics' perceptions on higher education quality assessment objectives and purposes, in general, and on the recently implemented system for higher education quality assessment and accreditation, in particular. It aims to discuss the differences of those perceptions dependent on some…

  5. Higher Education Quality Assessment in China: An Impact Study

    ERIC Educational Resources Information Center

    Liu, Shuiyun

    2015-01-01

    This research analyses an external higher education quality assessment scheme in China, namely, the Quality Assessment of Undergraduate Education (QAUE) scheme. Case studies were conducted in three Chinese universities with different statuses. Analysis shows that the evaluated institutions responded to the external requirements of the QAUE…

  6. Development and Validation of Assessing Quality Teaching Rubrics

    ERIC Educational Resources Information Center

    Chen, Weiyun; Mason, Steve; Hammond-Bennett, Austin; Zlamout, Sandy

    2014-01-01

    Purpose: This study aimed at examining the psychometric properties of the Assessing Quality Teaching Rubric (AQTR) that was designed to assess in-service teachers' quality levels of teaching practices in daily lessons. Methods: 45 physical education lessons taught by nine physical education teachers to students in grades K-5 were videotaped. They…

  7. Academics' Perceptions on the Purposes of Quality Assessment

    ERIC Educational Resources Information Center

    Rosa, Maria J.; Sarrico, Claudia S.; Amaral, Alberto

    2012-01-01

    The accountability versus improvement debate is an old one. Although being traditionally considered dichotomous purposes of higher education quality assessment, some authors defend the need of balancing both in quality assessment systems. This article goes a step further and contends that not only they should be balanced but also that other…

  8. Service Quality and Customer Satisfaction: An Assessment and Future Directions.

    ERIC Educational Resources Information Center

    Hernon, Peter; Nitecki, Danuta A.; Altman, Ellen

    1999-01-01

    Reviews the literature of library and information science to examine issues related to service quality and customer satisfaction in academic libraries. Discusses assessment, the application of a business model to higher education, a multiple constituency approach, decision areas regarding service quality, resistance to service quality, and future…

  9. Quality Assurance of Assessment and Moderation Discourses Involving Sessional Staff

    ERIC Educational Resources Information Center

    Grainger, Peter; Adie, Lenore; Weir, Katie

    2016-01-01

    Quality assurance is a major agenda in tertiary education. The casualisation of academic work, especially in teaching, is also a quality assurance issue. Casual or sessional staff members teach and assess more than 50% of all university courses in Australia, and yet the research in relation to the role sessional staff play in quality assurance of…

  10. On Improving Higher Vocational College Education Quality Assessment

    NASA Astrophysics Data System (ADS)

    Wu, Xiang; Chen, Yan; Zhang, Jie; Wang, Yi

Teaching quality assessment is a judgment process that uses the theory and technology of educational evaluation to test whether the process and result of teaching have reached a certain quality level. Many vocational schools have established teaching quality assessment systems with their own characteristics as the basic means of self-examination and teaching behavior adjustment. Combining the characteristics and requirements of vocational education with an analysis of the problems existing in contemporary vocational schools, this paper optimizes the system from the perspective of the content, assessment criteria and feedback mechanisms of teaching quality assessment, completes the teaching quality information network, offers suggestions for feedback channels, and promotes the institutionalization and standardization of vocational schools, thereby contributing to the overall improvement of their quality.

  11. a Multi-Sensor Micro Uav Based Automatic Rapid Mapping System for Damage Assessment in Disaster Areas

    NASA Astrophysics Data System (ADS)

    Jeon, E.; Choi, K.; Lee, I.; Kim, H.

    2013-08-01

Damage assessment is an important step toward the restoration of areas severely affected by natural disasters or accidents. For more accurate and rapid assessment, one should utilize geospatial data such as ortho-images acquired from the damaged areas. Change detection based on the geospatial data before and after the damage can enable fast and automatic assessment with reasonable accuracy. Accordingly, there has been significant demand for a rapid mapping system that can provide the ortho-images of the damaged areas to the specialists and decision makers in disaster management agencies. In this study, we are developing a UAV-based rapid mapping system that can acquire multi-sensory data in the air and generate ortho-images from the data on the ground in a rapid and automatic way. The proposed system consists of two main segments: an aerial segment and a ground segment. The aerial segment acquires sensory data through autonomous flight over the specified target area. It consists of a micro UAV platform, a mirror-less camera, a GPS, a MEMS IMU, and a sensor integration and synchronization module. The ground segment receives and processes the multi-sensory data to produce ortho-images in a rapid and automatic way. It consists of a computer with appropriate software for flight planning, data reception, georeferencing, and ortho-image generation. At the midpoint of this on-going project, we introduce an overview of the project, describe the main components of each segment and provide intermediate results from preliminary test flights.

  12. Germination tests for assessing biochar quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Understanding the impact of biochar quality on soil productivity is crucial to the agronomic acceptance of biochar amendments. Our objective in this study was to develop a quick and reliable screening procedures to characterize the quality of biochar amendments. Biochars were evaluated by both seed ...

  13. SOIL QUALITY ASSESSMENT USING FUZZY MODELING

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Maintaining soil productivity is essential if agriculture production systems are to be sustainable, thus soil quality is an essential issue. However, there is a paucity of tools for measurement for the purpose of understanding changes in soil quality. Here the possibility of using fuzzy modeling t...

  14. Comparison of water-quality samples collected by siphon samplers and automatic samplers in Wisconsin

    USGS Publications Warehouse

    Graczyk, David J.; Robertson, Dale M.; Rose, William J.; Steur, Jeffrey J.

    2000-01-01

In small streams, flow and water-quality concentrations often change quickly in response to meteorological events. Hydrologists, field technicians, or locally hired stream observers involved in water-data collection are often unable to reach streams quickly enough to observe or measure these rapid changes. Therefore, in hydrologic studies designed to describe changes in water quality, a combination of manual and automated sampling methods has commonly been used: manual methods when flow is relatively stable and automated methods when flow is rapidly changing. Automated sampling, which makes use of equipment programmed to collect samples in response to changes in stage and flow of a stream, has been shown to be an effective method of sampling to describe rapid changes in water quality (Graczyk and others, 1993). Because of the high cost of automated sampling, however, especially for studies examining a large number of sites, alternative methods have been considered for collecting samples during rapidly changing stream conditions. One such method employs the siphon sampler (fig. 1), also referred to as the "single-stage sampler." Siphon samplers are inexpensive to build (about $25-$50 per sampler), operate, and maintain, so they are cost effective to use at a large number of sites. Their ability to collect samples representing the average quality of water passing through the entire cross section of a stream, however, has not been fully demonstrated for many types of stream sites.

  15. Comparison of High and Low Density Airborne LIDAR Data for Forest Road Quality Assessment

    NASA Astrophysics Data System (ADS)

    Kiss, K.; Malinen, J.; Tokola, T.

    2016-06-01

    Good quality forest roads are important for forest management. Airborne laser scanning data can help automate road quality detection, thus avoiding field visits. Two datasets of different pulse density have been used to assess road quality: high-density airborne laser scanning data from Kiihtelysvaara and low-density data from Tuusniemi, Finland. The field inventory mainly focused on the surface wear condition, structural condition, flatness, roadside vegetation and drying of the road. Observations were divided into poor, satisfactory and good categories based on the current Finnish quality standards used for forest roads. Digital Elevation Models were derived from the laser point cloud, and indices were calculated to determine road quality. The calculated indices assessed the topographic differences on the road surface and road sides. The topographic position index works well in flat terrain only, while the standardized elevation index described the road surface better when elevation differences are larger. Both indices require at least a 1 metre resolution. High-density data is necessary for analysis of the road surface, and the indices relate mostly to surface wear and flatness. Classification was more precise with the high-density data (31-92%) than with the low-density data (25-40%). However, ditch detection and classification can be carried out using the sparse dataset as well (with a success rate of 69%). The use of airborne laser scanning data can provide quality information on forest roads.
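
    The topographic position index mentioned above compares each DEM cell with the mean of its neighbourhood, so bumps and ruts stand out as positive or negative values. A minimal sketch on a plain nested-list DEM (the paper's window size and resolution are not given in the abstract, so the radius here is an illustrative assumption):

```python
def tpi(dem, r=1):
    """Topographic Position Index: cell elevation minus the mean elevation of
    its square neighbourhood of radius r (edge cells use a clipped window).
    Positive values indicate local highs, negative values local lows."""
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            window = [dem[a][b]
                      for a in range(max(0, i - r), min(rows, i + r + 1))
                      for b in range(max(0, j - r), min(cols, j + r + 1))
                      if (a, b) != (i, j)]
            out[i][j] = dem[i][j] - sum(window) / len(window)
    return out
```

    On a flat grid every index is zero; a one-cell bump scores positive, which is why the index is informative mainly in flat terrain, as the abstract notes.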

  16. Assessing the Quality of MT Systems for Hindi to English Translation

    NASA Astrophysics Data System (ADS)

    Kalyani, Aditi; Kumud, Hemant; Pal Singh, Shashi; Kumar, Ajai

    2014-03-01

    Evaluation plays a vital role in checking the quality of MT output. It is done either manually or automatically. Manual evaluation is very time consuming and subjective, so automatic metrics are used most of the time. This paper evaluates the translation quality of different MT engines for Hindi-English (Hindi data is provided as input and English is obtained as output) using various automatic metrics such as BLEU and METEOR. A comparison of the automatic evaluation results with human rankings is also given.
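
    As context for metrics like BLEU, a single-reference, sentence-level BLEU can be sketched as a geometric mean of smoothed n-gram precisions times a brevity penalty. This is an illustrative simplification (add-one smoothing, one reference), not the exact configuration used in the paper; real evaluations use established tools such as sacrebleu:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: smoothed modified n-gram precisions
    (n = 1..max_n) combined by geometric mean, times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_ng, ref_ng = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c, ref_ng[g]) for g, c in cand_ng.items())
        total = max(sum(cand_ng.values()), 1)
        log_prec += math.log((overlap + 1) / (total + 1))  # add-one smoothing
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))  # brevity penalty
    return bp * math.exp(log_prec / max_n)
```

    An identical candidate and reference score 1.0, while a fully disjoint pair scores low but non-zero because of the smoothing.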

  17. Research iris serial images quality assessment method based on HVS

    NASA Astrophysics Data System (ADS)

    Li, Zhi-hui; Zhang, Chang-hai; Ming, Xing; Zhao, Yong-hua

    2006-01-01

    Iris recognition can be widely used in security and customs, and it provides better security than other human-feature recognition methods such as fingerprint and face recognition. Iris image quality is crucial to recognition performance, so reliable image quality assessments are necessary for evaluating iris images. However, there is no uniform criterion for image quality assessment. Image quality can be evaluated objectively or subjectively; in practice, subjective evaluation is laborious and not effective for iris recognition, so objective evaluation should be used. Based on the multi-scale and selectivity characteristics of the human visual system (HVS) model, this paper presents a new iris image quality assessment method: a region of interest (ROI) is found, wavelet-transform zero-crossings are used to find multi-scale edges, and a multi-scale fusion measure is used to assess iris image quality. In experiments, both objective and subjective evaluation methods were used to assess iris images. The results show that the method is effective for iris image quality assessment.
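
    The paper's multi-scale wavelet zero-crossing measure is not reproducible from the abstract, but the underlying idea of scoring image quality by edge strength can be illustrated with a crude gradient-magnitude sharpness score; this is a hypothetical stand-in, not the authors' method:

```python
def sharpness(img):
    """Mean gradient magnitude of a greyscale image given as a nested list of
    intensities. Well-focused, high-contrast images (e.g. a crisp iris
    texture) score higher than blurred or flat ones."""
    rows, cols = len(img), len(img[0])
    total, n = 0.0, 0
    for i in range(rows - 1):
        for j in range(cols - 1):
            gx = img[i][j + 1] - img[i][j]   # horizontal finite difference
            gy = img[i + 1][j] - img[i][j]   # vertical finite difference
            total += (gx * gx + gy * gy) ** 0.5
            n += 1
    return total / n if n else 0.0
```

    A flat image scores 0.0; the real method refines this idea by fusing edge responses across wavelet scales instead of using a single-scale gradient.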

  18. Visual air quality assessment: Denver case study

    NASA Astrophysics Data System (ADS)

    Mumpower, Jeryl; Middleton, Paulette; Dennis, Robin L.; Stewart, Thomas R.; Veirs, Val

    Studies of visual air quality in the Denver metropolitan region during summer 1979 and winter 1979-1980 are described and results reported. The major objective of the studies was to investigate relationships among four types of variables important to urban visual air quality: (1) individuals' judgments of overall visual air quality; (2) perceptual cues used in making judgments of visual air quality; (3) measurable physical characteristics of the visual environment; and (4) concentrations of visibility-reducing pollutants and their precursors. During August 1979 and mid-December 1979 to January 1980, simultaneous measurements of observational and environmental data were made daily at various locations throughout the metropolitan area. Observational data included ratings of overall air quality and related perceptual cues (e.g., distance, clarity, color, border) by multiple observers. Environmental data included routine hourly pollutant and meteorological measurements from several fixed locations within the city, as well as aerosol light scattering and absorption measures from one location. Statistical analyses indicated that (1) multiple perceptual cues are required to explain variation in judgments of overall visual air quality and (2) routine measurements of the physical environment appear to be inadequate predictors of either judgments of overall visual air quality or related perceptual cues.

  19. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    PubMed

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

    The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. PMID:27313114
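
    The abstract does not specify the correction algorithm, but a standard way to remove additive row- and column-wise systematic effects from a plate of HTS readings is a Tukey median polish (the basis of the widely used B-score). The sketch below is an assumption about the general technique, not the paper's pipeline:

```python
import statistics

def median_polish(plate, n_iter=10):
    """Iteratively subtract row medians and then column medians from a plate
    of readings (nested lists), leaving residuals free of additive row and
    column biases - a simplified Tukey median polish."""
    resid = [row[:] for row in plate]          # work on a copy
    for _ in range(n_iter):
        for row in resid:                      # remove row effects
            m = statistics.median(row)
            for j in range(len(row)):
                row[j] -= m
        for j in range(len(resid[0])):         # remove column effects
            m = statistics.median(r[j] for r in resid)
            for r in resid:
                r[j] -= m
    return resid
```

    A plate that is purely the sum of a row effect and a column effect polishes down to all-zero residuals; genuine hits survive as large residuals.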

  20. SU-D-BRD-07: Automatic Patient Data Audit and Plan Quality Check to Support ARIA and Eclipse

    SciTech Connect

    Li, X; Li, H; Wu, Y; Mutic, S; Yang, D

    2014-06-01

    Purpose: To ensure patient safety and treatment quality in RT departments that use Varian ARIA and Eclipse, we developed a computer software system and interface functions that allow previously developed electronic chart checking (EcCk) methodologies to support these Varian systems. Methods: ARIA and Eclipse store most patient information in an MSSQL database. We studied the contents of the hundreds of database tables and identified the data elements used for patient treatment management and treatment planning. Interface functions were developed in both C# and MATLAB to support data access from ARIA and Eclipse servers using SQL queries. These functions and additional data-processing functions allowed the existing rules and logic from EcCk to support ARIA and Eclipse. Dose and structure information are important for plan quality checks; however, they are not stored in the MSSQL database but as files in Varian private formats, and cannot be processed by external programs. We therefore implemented a service program, which uses the DB Daemon and File Daemon services on the ARIA server to automatically and seamlessly retrieve dose and structure data as DICOM files. This service was designed to 1) consistently monitor the data access requests from EcCk programs, 2) translate the requests for ARIA daemon services to obtain dose and structure DICOM files, and 3) monitor the process and return the obtained DICOM files back to EcCk programs for plan quality check purposes. Results: EcCk, which was previously designed to support only MOSAIQ TMS and Pinnacle TPS, can now support Varian ARIA and Eclipse. The new EcCk software has been tested and worked well in physics new-start plan checks and in IMRT plan integrity and plan quality checks. Conclusion: Methods and computer programs have been implemented to allow EcCk to support Varian ARIA and Eclipse systems. This project was supported by a research grant from Varian Medical Systems.

  1. WATER QUALITY ASSESSMENT OF AMERICAN FALLS RESERVOIR

    EPA Science Inventory

    A water quality model was developed to support a TMDL for phosphorus related to phytoplankton growth in the reservoir. This report documents the conceptual model, available data, model evaluation, and simulation results.

  2. Dosimetry and image quality assessment in a direct radiography system

    PubMed Central

    Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Paixão, Lucas; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2014-01-01

    Objective To evaluate the mean glandular dose with a solid state detector and the image quality in a direct radiography system, utilizing phantoms. Materials and Methods Irradiations were performed with automatic exposure control and polymethyl methacrylate slabs with different thicknesses to calculate glandular dose values. The image quality was evaluated by means of the structures visualized on the images of the phantoms. Results Considering the uncertainty of the measurements, the mean glandular dose results are in agreement with the values provided by the equipment and with internationally adopted reference levels. Results obtained from images of the phantoms were in agreement with the reference values. Conclusion The present study contributes to verify the equipment conformity as regards dose values and image quality. PMID:25741119

  3. Automatic assessment of volume asymmetries applied to hip abductor muscles in patients with hip arthroplasty

    NASA Astrophysics Data System (ADS)

    Klemt, Christian; Modat, Marc; Pichat, Jonas; Cardoso, M. J.; Henckel, Joahnn; Hart, Alister; Ourselin, Sebastien

    2015-03-01

    Metal-on-metal (MoM) hip arthroplasties have been utilised over the last 15 years to restore hip function for 1.5 million patients worldwide. Although widely used, this hip arthroplasty releases metal wear debris which leads to muscle atrophy. The degree of muscle wastage differs across patients, ranging from mild to severe. The long-term outcomes for patients with MoM hip arthroplasty are reduced for increasing degrees of muscle atrophy, highlighting the need to automatically segment pathological muscles. The automated segmentation of pathological soft tissues is challenging as these lack distinct boundaries and differ morphologically across subjects. As a result, no method reported in the literature has been successfully applied to automatically segment pathological muscles. We propose the first automated framework to delineate severely atrophied muscles by applying a novel automated segmentation propagation framework to patients with MoM hip arthroplasty. The proposed algorithm was used to automatically quantify muscle wastage in these patients.

  4. Teacher Quality and Quality Teaching: Examining the Relationship of a Teacher Assessment to Practice

    ERIC Educational Resources Information Center

    Hill, Heather C.; Umland, Kristin; Litke, Erica; Kapitula, Laura R.

    2012-01-01

    Multiple-choice assessments are frequently used for gauging teacher quality. However, research seldom examines whether results from such assessments generalize to practice. To illuminate this issue, we compare teacher performance on a mathematics assessment, during mathematics instruction, and by student performance on a state assessment. Poor…

  5. Mapping coal quality parameters for economic assessments

    SciTech Connect

    Hohn, M.E.; Smith, C.J.; Ashton, K.C.; McColloch, G.H. Jr.

    1988-08-01

    This study recommends mapping procedures for a data base of coal quality parameters. The West Virginia Geological and Economic Survey has developed a data base that includes about 10,000 analyses of coal samples representing most seams in West Virginia. Coverage is irregular and widely spaced; minimal sample spacing is generally greater than 1 mi. Geologists use this data base to answer public and industry requests for maps that show areas meeting coal quality specifications.

  6. Space shuttle flying qualities and criteria assessment

    NASA Technical Reports Server (NTRS)

    Myers, T. T.; Johnston, D. E.; Mcruer, Duane T.

    1987-01-01

    Work accomplished under a series of study tasks for the Flying Qualities and Flight Control Systems Design Criteria Experiment (OFQ) of the Shuttle Orbiter Experiments Program (OEX) is summarized. The tasks involved review of applicability of existing flying quality and flight control system specification and criteria for the Shuttle; identification of potentially crucial flying quality deficiencies; dynamic modeling of the Shuttle Orbiter pilot/vehicle system in the terminal flight phases; devising a nonintrusive experimental program for extraction and identification of vehicle dynamics, pilot control strategy, and approach and landing performance metrics, and preparation of an OEX approach to produce a data archive and optimize use of the data to develop flying qualities for future space shuttle craft in general. Analytic modeling of the Orbiter's unconventional closed-loop dynamics in landing, modeling pilot control strategies, verification of vehicle dynamics and pilot control strategy from flight data, review of various existent or proposed aircraft flying quality parameters and criteria in comparison with the unique dynamic characteristics and control aspects of the Shuttle in landing; and finally a summary of conclusions and recommendations for developing flying quality criteria and design guides for future Shuttle craft.

  7. Key Elements for Judging the Quality of a Risk Assessment

    PubMed Central

    Fenner-Crisp, Penelope A.; Dellarco, Vicki L.

    2016-01-01

    Background: Many reports have been published that contain recommendations for improving the quality, transparency, and usefulness of decision making for risk assessments prepared by agencies of the U.S. federal government. A substantial measure of consensus has emerged regarding the characteristics that high-quality assessments should possess. Objective: The goal was to summarize the key characteristics of a high-quality assessment as identified in the consensus-building process and to integrate them into a guide for use by decision makers, risk assessors, peer reviewers and other interested stakeholders to determine if an assessment meets the criteria for high quality. Discussion: Most of the features cited in the guide are applicable to any type of assessment, whether it encompasses one, two, or all four phases of the risk-assessment paradigm; whether it is qualitative or quantitative; and whether it is screening level or highly sophisticated and complex. Other features are tailored to specific elements of an assessment. Just as agencies at all levels of government are responsible for determining the effectiveness of their programs, so too should they determine the effectiveness of their assessments used in support of their regulatory decisions. Furthermore, if a nongovernmental entity wishes to have its assessments considered in the governmental regulatory decision-making process, then these assessments should be judged in the same rigorous manner and be held to similar standards. Conclusions: The key characteristics of a high-quality assessment can be summarized and integrated into a guide for judging whether an assessment possesses the desired features of high quality, transparency, and usefulness. Citation: Fenner-Crisp PA, Dellarco VL. 2016. Key elements for judging the quality of a risk assessment. Environ Health Perspect 124:1127–1135; http://dx.doi.org/10.1289/ehp.1510483 PMID:26862984

  8. Physical and Chemical Water-Quality Data from Automatic Profiling Systems, Boulder Basin, Lake Mead, Arizona and Nevada, Water Years 2001-04

    USGS Publications Warehouse

    Rowland, Ryan C.; Westenburg, Craig L.; Veley, Ronald J.; Nylund, Walter E.

    2006-01-01

    Water-quality profile data were collected in Las Vegas Bay and near Sentinel Island in Lake Mead, Arizona and Nevada, from October 2000 to September 2004. The majority of the profiles were completed with automatic variable-buoyancy systems equipped with multiparameter water-quality sondes. Profile data near Sentinel Island were collected in August 2004 with an automatic variable-depth-winch system also equipped with a multiparameter water-quality sonde. Physical and chemical water properties collected and recorded by the profiling systems, including depth, water temperature, specific conductance, pH, dissolved-oxygen concentration, and turbidity are listed in tables and selected water-quality profile data are shown in graphs.

  9. A Photogrammetric Approach for Automatic Traffic Assessment Using a Conventional CCTV Camera

    NASA Astrophysics Data System (ADS)

    Zarrinpanjeh, N.; Dadrassjavan, F.; Fattahi, H.

    2015-12-01

    One of the most practical tools for urban traffic monitoring is CCTV imaging, which is widely used for traffic map generation and updating through human surveillance. But due to the expansion of the urban road network and the huge number of CCTV cameras, visual inspection and updating of traffic is often ineffective and time consuming, and therefore does not provide robust real-time updates. In this paper a method for vehicle detection, counting, and speed estimation is proposed to give a more automated solution for traffic assessment. Vehicles are counted and traffic speed is estimated by removing violating objects, detecting vehicles via morphological filtering, and classifying the moving objects in the scene. The proposed method is developed and tested using two datasets and evaluation values are computed. The results show that the success rate of the algorithm decreases by about 12% as the illumination quality of the imagery decreases.
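
    After motion segmentation and morphological filtering, counting vehicles reduces to counting connected foreground regions in a binary mask. A minimal sketch using a 4-connectivity flood fill (the real pipeline's segmentation and filtering steps are omitted; the `min_size` filter is an illustrative stand-in for rejecting small non-vehicle blobs):

```python
def count_blobs(mask, min_size=1):
    """Count connected foreground regions (4-connectivity) of at least
    min_size pixels in a binary mask given as nested lists of 0/1."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i][j] and not seen[i][j]:
                stack, size = [(i, j)], 0      # iterative flood fill
                seen[i][j] = True
                while stack:
                    a, b = stack.pop()
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < rows and 0 <= nb < cols \
                                and mask[na][nb] and not seen[na][nb]:
                            seen[na][nb] = True
                            stack.append((na, nb))
                if size >= min_size:
                    count += 1
    return count
```

    Speed estimation would then track each blob's centroid across frames and convert pixel displacement to metres via the photogrammetric calibration.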

  10. Application of case classification in healthcare quality assessment in China.

    PubMed

    Xu, Ping; Li, Meina; Zhang, Lulu; Sun, Qingwen; Lv, Shinwei; Lian, Bin; Wei, Min; Kan, Zhang

    2012-01-01

    The purpose of this study was to build a healthcare quality assessment system with disease category as the basic unit of assessment based on the principles of case classification, and to assess the quality of care in a large hospital in Shanghai. Using the Delphi method, four quality indicators were selected. The data of 124,125 patients discharged from a large general hospital in Shanghai, from October 1, 2004 to September 30, 2007, were used to establish quality indicators estimates for each disease. The data of 51,760 discharged patients from October 1, 2007 to September 30, 2008 were used as the testing sample, and the standard scores of each quality indicator for each clinical department were calculated. Then the total score of various clinical departments in the hospital was calculated based on the differences between the practical scores and the standard. Based on quality assessment scores, we found that the quality of healthcare in departments of thyroid and mammary gland surgery, obstetrics and gynaecology, stomatology, dermatology, and paediatrics was better than in other departments. Implementation of the case classification for healthcare quality assessment permitted the comparison of quality among different healthcare departments. PMID:22700559
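
    The "standard scores" used above put indicators measured on different scales onto a common footing before they are summed into a departmental total. The study's exact scoring formula is not given in the abstract; a plain z-score standardization is one common choice, sketched here as an assumption:

```python
import statistics

def standard_scores(values):
    """Convert raw indicator values to standard (z) scores: subtract the mean
    and divide by the population standard deviation, so indicators on
    different scales become comparable and summable."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values) or 1.0  # guard against zero variance
    return [(v - mu) / sd for v in values]
```

    Standardized scores sum to zero across departments, so each department's total directly reflects how far it sits above or below the hospital-wide norm.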

  11. An Approach to Automatic Detection and Hazard Risk Assessment of Large Protruding Rocks in Densely Forested Hilly Region

    NASA Astrophysics Data System (ADS)

    Chhatkuli, S.; Kawamura, K.; Manno, K.; Satoh, T.; Tachibana, K.

    2016-06-01

    Rock-fall along highways or railways presents one of the major threats to transportation and human safety. So far, the only feasible way to detect the locations of such protruding rocks located in the densely forested hilly region is by physically visiting the site and assessing the situation. Highways or railways are stretched to hundreds of kilometres; hence, this traditional approach of determining rock-fall risk zones is not practical to assess the safety throughout the highways or railways. In this research, we have utilized a state-of-the-art airborne LiDAR technology and derived a workflow to automatically detect protruding rocks in densely forested hilly regions and analysed the level of hazard risks they pose. Moreover, we also performed a 3D dynamic simulation of rock-fall to envisage the event. We validated that our proposed technique could automatically detect most of the large protruding rocks in the densely forested hilly region. Automatic extraction of protruding rocks and proper risk zoning could be used to identify the most crucial place that needs the proper protection measures. Hence, the proposed technique would provide an invaluable support for the management and planning of highways and railways safety, especially in the forested hilly region.

  12. Introducing mapping standards in the quality assessment of buildings extracted from very high resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Freire, S.; Santos, T.; Navarro, A.; Soares, F.; Silva, J. D.; Afonso, N.; Fonseca, A.; Tenedório, J.

    2014-04-01

    Many municipal activities require updated large-scale maps that include both topographic and thematic information. For this purpose, the efficient use of very high spatial resolution (VHR) satellite imagery suggests the development of approaches that enable a timely discrimination, counting and delineation of urban elements according to legal technical specifications and quality standards. Therefore, the nature of this data source and expanding range of applications calls for objective methods and quantitative metrics to assess the quality of the extracted information which go beyond traditional thematic accuracy alone. The present work concerns the development and testing of a new approach for using technical mapping standards in the quality assessment of buildings automatically extracted from VHR satellite imagery. Feature extraction software was employed to map buildings present in a pansharpened QuickBird image of Lisbon. Quality assessment was exhaustive and involved comparisons of extracted features against a reference data set, introducing cartographic constraints from scales 1:1000, 1:5000, and 1:10,000. The spatial data quality elements subject to evaluation were: thematic (attribute) accuracy, completeness, and geometric quality assessed based on planimetric deviation from the reference map. Tests were developed and metrics analyzed considering thresholds and standards for the large mapping scales most frequently used by municipalities. Results show that values for completeness varied with mapping scales and were only slightly superior for scale 1:10,000. Concerning the geometric quality, a large percentage of extracted features met the strict topographic standards of planimetric deviation for scale 1:10,000, while no buildings were compliant with the specification for scale 1:1000.
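
    The planimetric-deviation test described above can be sketched as an RMSE of deviations between extracted and reference outlines, compared against a tolerance derived from the mapping scale. The 0.5 mm-at-map-scale tolerance below is an illustrative assumption, not the specification the study actually applied:

```python
import math

def planimetric_rmse(deviations_m):
    """Root-mean-square of planimetric deviations (metres) between extracted
    building outlines and the reference map."""
    return math.sqrt(sum(d * d for d in deviations_m) / len(deviations_m))

def meets_scale(rmse_m, scale_denominator, tol_mm_at_scale=0.5):
    """Illustrative compliance check: a tolerance of tol_mm_at_scale mm on
    the printed map converts to ground metres at the given scale, e.g.
    0.5 mm at 1:10,000 allows 5 m of planimetric deviation."""
    return rmse_m <= tol_mm_at_scale / 1000.0 * scale_denominator
```

    This makes the paper's finding concrete: an RMSE of a few metres can pass the 1:10,000 tolerance while failing the far stricter 1:1000 one.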

  13. Illinois Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Illinois' Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family…

  14. Iowa Child Care Quality Rating System: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Iowa's Child Care Quality Rating System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile is divided into the following categories: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family Child Care Programs;…

  15. Missouri Quality Rating System: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Missouri's Quality Rating System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…

  16. Virginia Star Quality Initiative: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Virginia's Star Quality Initiative prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators…

  17. Oregon Child Care Quality Indicators Program: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Oregon's Child Care Quality Indicators Program prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4)…

  18. Mississippi Quality Step System: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Mississippi's Quality Step System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Application…

  19. Palm Beach Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Palm Beach's Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…

  20. New Hampshire Quality Rating System: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of New Hampshire's Quality Rating System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4)…

  1. Ohio Step Up to Quality: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Ohio's Step Up to Quality prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family…

  2. Indiana Paths to Quality: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Indiana's Paths to Quality prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…

  3. Miami-Dade Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Miami-Dade's Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…

  4. Maine Quality for ME: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Maine's Quality for ME prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family…

  5. Doctors or technicians: assessing quality of medical education

    PubMed Central

    Hasan, Tayyab

    2010-01-01

    Medical education institutions usually adapt industrial quality management models that measure the quality of the process of a program but not the quality of the product. The purpose of this paper is to analyze the impact of industrial quality management models on medical education and students, and to highlight the importance of introducing a proper educational quality management model. Industrial quality management models can measure the training component in terms of competencies, but they lack the educational component measurement. These models use performance indicators to assess their process improvement efforts. Researchers suggest that the performance indicators used in educational institutions may only measure their fiscal efficiency without measuring the quality of the educational experience of the students. In most of the institutions, where industrial models are used for quality assurance, students are considered as customers and are provided with the maximum services and facilities possible. Institutions are required to fulfill a list of recommendations from the quality control agencies in order to enhance student satisfaction and to guarantee standard services. Quality of medical education should be assessed by measuring the impact of the educational program and quality improvement procedures in terms of knowledge base development, behavioral change, and patient care. Industrial quality models may focus on academic support services and processes, but educational quality models should be introduced in parallel to focus on educational standards and products. PMID:23745059

  6. Factors Influencing Assessment Quality in Higher Vocational Education

    ERIC Educational Resources Information Center

    Baartman, Liesbeth; Gulikers, Judith; Dijkstra, Asha

    2013-01-01

    The development of assessments that are fit to assess professional competence in higher vocational education requires a reconsideration of assessment methods, quality criteria and (self)evaluation. This article examines the self-evaluations of nine courses of a large higher vocational education institute. Per course, 4-11 teachers and 3-10…

  7. Dried fruits quality assessment by hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Serranti, Silvia; Gargiulo, Aldo; Bonifazi, Giuseppe

    2012-05-01

    Dried fruit products have different market values according to their quality. Such quality is usually quantified in terms of freshness of the products, as well as the presence of contaminants (pieces of shell, husk, and small stones), defects, mould and decay. The combination of these parameters, in terms of relative presence, represents a fundamental set of attributes conditioning the human-sense-detectable attributes of dried fruits (visual appearance, organoleptic properties, etc.) and their overall quality as marketable products. Sorting-selection strategies exist, but they sometimes fail when a higher degree of detection is required, especially when discriminating between dried fruits of relatively small dimensions and when aiming to perform an "early detection" of the pathogen agents responsible for future mould and decay development. Surface characteristics of dried fruits can be investigated by hyperspectral imaging (HSI). In this paper, specific "ad hoc" applications proposing quality detection logics based on an HSI approach are described, compared and critically evaluated. Reflectance spectra of selected dried fruits (hazelnuts) of different quality, characterized by the presence of different contaminants and defects, were acquired by a laboratory device equipped with two HSI systems working in two different spectral ranges: visible-near infrared (400-1000 nm) and near infrared (1000-1700 nm). The spectra were processed and the results evaluated adopting both a simple and fast wavelength band ratio approach and a more sophisticated classification logic based on principal component analysis (PCA).
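    The simple wavelength band ratio approach mentioned in this abstract can be sketched as follows. This is an illustrative stand-in: the band indices, threshold, and synthetic reflectance cube are assumptions, not values from the paper.

```python
import numpy as np

def band_ratio_mask(cube, band_a, band_b, threshold):
    """cube: (rows, cols, bands) reflectance array.
    Returns a boolean mask where the two-band ratio exceeds the threshold."""
    ratio = cube[:, :, band_a] / (cube[:, :, band_b] + 1e-9)
    return ratio > threshold

# Tiny synthetic 2x2 scene with 3 spectral bands (illustrative values)
cube = np.array([[[0.8, 0.4, 0.2], [0.3, 0.6, 0.2]],
                 [[0.5, 0.5, 0.2], [0.9, 0.3, 0.2]]])
mask = band_ratio_mask(cube, band_a=0, band_b=1, threshold=1.0)
print(mask)
```

    A ratio of two well-chosen bands can separate sound kernels from contaminated ones very cheaply; the PCA-based logic the abstract compares against would instead classify pixels in a trained score space.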

  8. Transition Assessment: Wise Practices for Quality Lives.

    ERIC Educational Resources Information Center

    Sax, Caren L.; Thoma, Colleen A.

    The 10 papers in this book attempt to provide some creative approaches to assessment of individuals with disabilities as they transition from the school experience to the adult world. The papers are: (1) "For Whom the Test Is Scored: Assessments, the School Experience, and More" (Douglas Fisher and Caren L. Sax); (2) "Person-Centered Planning:…

  9. Image quality assessment using Takagi-Sugeno-Kang fuzzy model

    NASA Astrophysics Data System (ADS)

    Đorđević, Dragana; Kukolj, Dragan; Schelkens, Peter

    2015-03-01

    The main aim of the paper is to present a non-linear image quality assessment model based on a fuzzy logic estimator, namely the Takagi-Sugeno-Kang fuzzy model. This image quality assessment model uses a clustered space of input objective metrics. The main advantages of the introduced quality model are the simplicity and understandability of its fuzzy rules. A third-order polynomial model was chosen as the reference model. The parameters of the Takagi-Sugeno-Kang fuzzy model are optimized according to the criterion of mapping the selected set of input objective quality measures to the Mean Opinion Score (MOS) scale.
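    A Takagi-Sugeno-Kang estimator of the kind described can be sketched with a single input metric and two rules. The membership parameters and linear consequents below are illustrative assumptions; the paper optimizes them against subjective MOS data over a clustered space of several objective metrics.

```python
import math

def gauss(x, c, s):
    """Gaussian membership function with center c and width s."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def tsk_predict(x, rules):
    """rules: list of (center, sigma, a, b); each consequent is a*x + b.
    Output is the firing-strength-weighted average of the consequents."""
    weights = [gauss(x, c, s) for c, s, _, _ in rules]
    outputs = [a * x + b for _, _, a, b in rules]
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)

rules = [(20.0, 5.0, 0.05, 1.0),   # "low metric value"  -> low MOS
         (40.0, 5.0, 0.05, 2.5)]   # "high metric value" -> high MOS
print(round(tsk_predict(30.0, rules), 3))
```

    Because each rule's consequent is a simple linear function, the fitted rules stay human-readable, which is the simplicity/understandability advantage the abstract claims.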

  10. AN ASSESSMENT OF AUTOMATIC SEWER FLOW SAMPLERS (EPA/600/2-75/065)

    EPA Science Inventory

    A brief review of the characteristics of storm and combined sewer flows is given followed by a general discussion of the purposes for and requirements of a sampling program. The desirable characteristics of automatic sampling equipment are set forth and problem areas are outlined...

  11. Image quality assessment by preprocessing and full reference model combination

    NASA Astrophysics Data System (ADS)

    Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.

    2009-01-01

    This paper focuses on full-reference image quality assessment and presents different computational strategies aimed to improve the robustness and accuracy of some well known and widely used state of the art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model could allow a better fit of the psycho-visual data of the LIVE Image Quality assessment Database Release 2. We show that the proposed quality assessment metric better correlates with the experimental data.

  12. E-Services quality assessment framework for collaborative networks

    NASA Astrophysics Data System (ADS)

    Stegaru, Georgiana; Danila, Cristian; Sacala, Ioan Stefan; Moisescu, Mihnea; Mihai Stanescu, Aurelian

    2015-08-01

    In a globalised networked economy, collaborative networks (CNs) are formed to take advantage of new business opportunities. Collaboration involves shared resources and capabilities, such as e-Services that can be dynamically composed to automate CN participants' business processes. Quality is essential for the success of business process automation. Current approaches mostly focus on quality of service (QoS)-based service selection and ranking algorithms, overlooking the process of service composition, which requires interoperable, adaptable and secure e-Services to ensure seamless collaboration, data confidentiality and integrity. Lack of assessment of these quality attributes can result in e-Service composition failure. The quality of e-Service composition relies on the quality of each e-Service and on the quality of the composition process. Therefore, there is a need for a framework that addresses quality from both views: product and process. We propose a quality of e-Service composition (QoESC) framework for quality assessment of e-Service composition for CNs, which comprises a quality model for e-Service evaluation and guidelines for the quality of the e-Service composition process. We implemented a prototype considering a simplified telemedicine use case which involves a CN in the e-Healthcare domain. To validate the proposed quality-driven framework, we analysed service composition reliability with and without using the proposed framework.

  13. Objective image quality assessment based on support vector regression.

    PubMed

    Narwaria, Manish; Lin, Weisi

    2010-03-01

    Objective image quality estimation is useful in many visual processing systems, and is difficult to perform in line with the human perception. The challenge lies in formulating effective features and fusing them into a single number to predict the quality score. In this brief, we propose a new approach to address the problem, with the use of singular vectors out of singular value decomposition (SVD) as features for quantifying major structural information in images and then support vector regression (SVR) for automatic prediction of image quality. The feature selection with singular vectors is novel and general for gauging structural changes in images as a good representative of visual quality variations. The use of SVR exploits the advantages of machine learning with the ability to learn complex data patterns for an effective and generalized mapping of features into a desired score, in contrast with the oft-utilized feature pooling process in the existing image quality estimators; this is to overcome the difficulty of model parameter determination for such a system to emulate the related, complex human visual system (HVS) characteristics. Experiments conducted with three independent databases confirm the effectiveness of the proposed system in predicting image quality with better alignment with the HVS's perception than the relevant existing work. The tests with untrained distortions and databases further demonstrate the robustness of the system and the importance of the feature selection. PMID:20100674
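    The feature step described here (singular vectors as quantifiers of structural change) can be sketched as below. Comparing only the leading singular vectors of the reference and distorted images is an illustrative simplification of the paper's feature set; the resulting features would then be fed to a trained support vector regressor to predict the quality score.

```python
import numpy as np

def structural_change(ref, dist):
    """Returns 0 for identical structure, larger values for more change.
    Compares the leading left/right singular vectors of the two images."""
    u_r, _, vt_r = np.linalg.svd(ref)
    u_d, _, vt_d = np.linalg.svd(dist)
    left = 1.0 - abs(float(u_r[:, 0] @ u_d[:, 0]))
    right = 1.0 - abs(float(vt_r[0] @ vt_d[0]))
    return 0.5 * (left + right)

ref = np.outer(np.arange(1.0, 5.0), np.ones(4))   # smooth rank-1 "image"
same = structural_change(ref, ref.copy())
perturbed = structural_change(ref, ref + np.eye(4))
print(same, perturbed)
```

    Singular vectors capture the dominant spatial structure of an image, so distortions that alter structure (blur, blocking) move them measurably while leaving a pure luminance shift almost untouched.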

  14. SAMPLING DESIGN FOR ASSESSING RECREATIONAL WATER QUALITY

    EPA Science Inventory

    Current U.S. EPA guidelines for monitoring recreatoinal water quality refer to the geometric mean density of indicator organisms, enterococci and E. coli in marine and fresh water, respectively, from at least five samples collected over a four-week period. In order to expand thi...
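    The summary statistic referred to here, the geometric mean of indicator-organism densities over at least five samples, is straightforward to compute. The sample densities below are illustrative, not EPA data.

```python
import math

def geometric_mean(densities):
    """Geometric mean of positive densities (e.g. CFU per 100 mL)."""
    logs = [math.log(d) for d in densities]
    return math.exp(sum(logs) / len(logs))

# Five weekly samples of an indicator organism (illustrative values)
samples = [10.0, 100.0, 1000.0, 100.0, 10.0]
print(geometric_mean(samples))
```

    The geometric mean is preferred over the arithmetic mean for bacterial densities because counts are roughly log-normally distributed, so a single high sample does not dominate the summary.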

  15. Assessment of Curriculum Quality through Alumni Research

    ERIC Educational Resources Information Center

    Saunders-Smits, Gillian; de Graaff, Erik

    2012-01-01

    In today's output defined society, alumni are the output of higher education. This article shows how alumni research can be used as an important indicator of curriculum quality. This relatively unexplored area of engineering education research in Europe is highlighted using a case study carried out in the Netherlands, the outcomes of which have…

  16. ASSESSING WATER QUALITY: AN ENERGETICS PERPECTIVE

    EPA Science Inventory

    Integrated measures of food web dynamics could serve as important supplemental indicators of water quality that are well related with ecological integrity and environmental well-being. When the concern is a well-characterized pollutant (posing an established risk to human health...

  17. Assessing Students' Spiritual and Religious Qualities

    ERIC Educational Resources Information Center

    Astin, Alexander W.; Astin, Helen S.; Lindholm, Jennifer A.

    2011-01-01

    This paper describes a comprehensive set of 12 new measures for studying undergraduate students' spiritual and religious development. The three measures of spirituality, four measures of "spiritually related" qualities, and five measures of religiousness demonstrate satisfactory reliability, robustness, and both concurrent and predictive validity.…

  18. Automatic milking systems in the Protected Designation of Origin Montasio cheese production chain: effects on milk and cheese quality.

    PubMed

    Innocente, N; Biasutti, M

    2013-02-01

    Montasio cheese is a typical Italian semi-hard, semi-cooked cheese produced in northeastern Italy from unpasteurized (raw or thermised) cow milk. The Protected Designation of Origin label regulations for Montasio cheese require that local milk be used from twice-daily milking. The number of farms milking with automatic milking systems (AMS) has increased rapidly in the last few years in the Montasio production area. The objective of this study was to evaluate the effects of a variation in milking frequency, associated with the adoption of an automatic milking system, on milk quality and on the specific characteristics of Montasio cheese. Fourteen farms were chosen, all located in the Montasio production area, with an average herd size of 60 (Simmental, Holstein-Friesian, and Brown Swiss breeds). In 7 experimental farms, the cows were milked 3 times per day with an AMS, whereas in the other 7 control farms, cows were milked twice daily in conventional milking parlors (CMP). The study showed that the main components, the hygienic quality, and the cheese-making features of milk were not affected by the milking system adopted. In fact, the control and experimental milks did not reveal a statistically significant difference in fat, protein, and lactose contents; in the casein index; or in the HPLC profiles of casein and whey protein fractions. Milk from farms that used an AMS always showed somatic cell counts and total bacterial counts below the legal limits imposed by European Union regulations for raw milk. Finally, bulk milk clotting characteristics (clotting time, curd firmness, and time to curd firmness of 20mm) did not differ between milk from AMS and milk from CMP. Montasio cheese was made from milk collected from the 2 groups of farms milking either with AMS or with CMP. Three different cheese-making trials were performed during the year at different times. As expected, considering the results of the milk analysis, the moisture, fat, and protein contents of the…

  19. A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment

    NASA Technical Reports Server (NTRS)

    Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott

    1995-01-01

    The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaption of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Imbedded grid singular points are permitted, but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of and alignment with the solution. A mesh quality assessment criterion is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criterion is used to evaluate the grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criterion shows promise as a means of evaluating resolution.

  20. Assessing diversity and quality in primary care through the multimethod assessment process (MAP).

    PubMed

    Kairys, Jo Ann; Orzano, John; Gregory, Patrice; Stroebel, Christine; DiCicco-Bloom, Barbara; Roemheld-Hamm, Beatrix; Kobylarz, Fred A; Scott, John G; Coppola, Lisa; Crabtree, Benjamin F

    2002-01-01

    The U.S. health care system serves a diverse population, often resulting in significant disparities in delivery and quality of care. Nevertheless, most quality improvement efforts fail to systematically assess diversity and associated disparities. This article describes application of the multimethod assessment process (MAP) for understanding disparities in relation to diversity, cultural competence, and quality improvement in clinical practice. MAP is an innovative quality improvement methodology that integrates quantitative and qualitative techniques and produces a system level understanding of organizations to guide quality improvement interventions. A demonstration project in a primary care practice illustrates the utility of MAP for assessing diversity. PMID:12938252

  1. Automatic Generation of High Quality DSM Based on IRS-P5 Cartosat-1 Stereo Data

    NASA Astrophysics Data System (ADS)

    d'Angelo, Pablo; Uttenthaler, Andreas; Carl, Sebastian; Barner, Frithjof; Reinartz, Peter

    2010-12-01

    IRS-P5 Cartosat-1 high resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on IRS-P5 Cartosat-1 imagery is presented, with an emphasis on automated processing and product quality. The proposed system processes IRS-P5 level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The described method uses an RPC correction based on DSM alignment instead of using reference images with a lower lateral accuracy; this results in improved geolocation of the DSMs and orthoimages. Following RPC correction, highly detailed DSMs with 5 m grid spacing are derived using Semiglobal Matching. The proposed method is part of an operational Cartosat-1 processor for the generation of a high resolution DSM. Evaluation of 18 scenes against independent ground truth measurements indicates a mean lateral error (CE90) of 6.7 meters and a mean vertical accuracy (LE90) of 5.1 meters.

  2. Online aptitude automatic surface quality inspection system for hot rolled strips steel

    NASA Astrophysics Data System (ADS)

    Lin, Jin; Xie, Zhi-jiang; Wang, Xue; Sun, Nan-Nan

    2005-12-01

    Surface defects are a main factor in evaluating the quality of hot rolled steel strips. An improved image recognition algorithm is used to extract the features of defects on the strip surface, and a defect recognition method based on machine vision and artificial neural networks is established to identify them. On this basis, a surface inspection system with advanced image processing algorithms for hot rolled strips is developed. Two different lighting configurations are prepared, and line-scan CCD cameras acquire images of the strip surface on-line. The system analyzes and classifies defects such as iron oxide scale, scratches and stamp marks on both the upper and lower surfaces of the strip on-line, with misclassification and false-alarm rates not exceeding 5%. The system has been applied at several large domestic steel enterprises. Experiments show that the approach is feasible and effective.

  3. 42 CFR 460.140 - Additional quality assessment activities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.140 Additional...

  4. 42 CFR 460.140 - Additional quality assessment activities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.140 Additional...

  5. 42 CFR 460.140 - Additional quality assessment activities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.140 Additional...

  6. UPPER SNAKE RIVER BASIN WATER QUALITY ASSESSMENT, 1976

    EPA Science Inventory

    This package contains information for the Upper Snake River Basin, Idaho (170402, 17040104). The report contains a water quality assessment approach which will assist EPA planners, land agencies, and state and local agencies in identifying probably nonpoint sources and determini...

  7. ASSESSING BIOACCUMULATION FOR DERIVING NATIONAL HUMAN HEALTH WATER QUALITY CRITERIA

    EPA Science Inventory

    The United States Environmental Protection Agency is revising its methodology for deriving national ambient water quality criteria (AWQC) to protect human health. A component of this guidance involves assessing the potential for chemical bioaccumulation in commonly consumed fish ...

  8. Data sources for environmental assessment: determining availability, quality and utility

    EPA Science Inventory

    Objectives: An environmental quality index (EQI) for all counties in the United States is being developed to explore the relationship between environmental insults and human health. The EQI will be particularly useful to assess how environmental disamenities contribute to health...

  9. US Department of Energy Quality Assessment Program data evaluation report

    SciTech Connect

    Jaquish, R.E.; Kinnison, R.R.

    1984-04-01

    The results of radiochemical analysis performed on the Quality Assessment Program (QAP) test samples are presented. This report reviews the results submitted by 26 participating laboratories for 49 different radionuclide-media combinations. 5 tables. (ACR)

  10. Assessing Quality across Health Care Subsystems in Mexico

    PubMed Central

    Puig, Andrea; Pagán, José A.; Wong, Rebeca

    2012-01-01

    Recent healthcare reform efforts in Mexico have focused on the need to improve the efficiency and equity of a fragmented healthcare system. In light of these reform initiatives, there is a need to assess whether healthcare subsystems are effective at providing high-quality healthcare to all Mexicans. Nationally representative household survey data from the 2006 Encuesta Nacional de Salud y Nutrición (National Health and Nutrition Survey) were used to assess perceived healthcare quality across different subsystems. Using a sample of 7234 survey respondents, we found evidence of substantial heterogeneity in healthcare quality assessments across healthcare subsystems favoring private providers over social security institutions. These differences across subsystems remained even after adjusting for socioeconomic, demographic, and health factors. Our analysis suggests that improvements in efficiency and equity can be achieved by assessing the factors that contribute to heterogeneity in quality across subsystems. PMID:19305224

  11. Using big data for quality assessment in oncology.

    PubMed

    Broughman, James R; Chen, Ronald C

    2016-05-01

    There is increasing attention in the US healthcare system on the delivery of high-quality care, an issue central to oncology. In the report 'Crossing the Quality Chasm', the Institute of Medicine identified six aims for improving healthcare quality: safe, effective, patient-centered, timely, efficient and equitable. This article describes how current big data resources can be used to assess these six dimensions, and provides examples of published studies in oncology. Strengths and limitations of current big data resources for the evaluation of quality of care are also discussed. Finally, this article outlines a vision where big data can be used not only to retrospectively assess the quality of oncologic care, but help physicians deliver high-quality care in real time. PMID:27090300

  12. Method and apparatus for assessing weld quality

    DOEpatents

    Smartt, Herschel B.; Kenney, Kevin L.; Johnson, John A.; Carlson, Nancy M.; Clark, Denis E.; Taylor, Paul L.; Reutzel, Edward W.

    2001-01-01

    Apparatus for determining a quality of a weld produced by a welding device according to the present invention includes a sensor operatively associated with the welding device. The sensor is responsive to at least one welding process parameter during a welding process and produces a welding process parameter signal that relates to the at least one welding process parameter. A computer connected to the sensor is responsive to the welding process parameter signal produced by the sensor. A user interface operatively associated with the computer allows a user to select a desired welding process. The computer processes the welding process parameter signal produced by the sensor in accordance with one of a constant voltage algorithm, a short duration weld algorithm or a pulsed current analysis module depending on the desired welding process selected by the user. The computer produces output data indicative of the quality of the weld.

  13. Assessment of Groundwater Quality by Chemometrics.

    PubMed

    Papaioannou, Agelos; Rigas, George; Kella, Sotiria; Lokkas, Filotheos; Dinouli, Dimitra; Papakonstantinou, Argiris; Spiliotis, Xenofon; Plageras, Panagiotis

    2016-07-01

    Chemometric methods were used to analyze large data sets of groundwater quality from 18 wells supplying the central drinking water system of the city of Larissa (Greece) during the period 2001 to 2007 (8064 observations), in order to determine temporal and spatial variations in groundwater quality and to identify pollution sources. Cluster analysis grouped each year into three temporal periods: January-April (first), May-August (second) and September-December (third). Furthermore, spatial cluster analysis was conducted for each period and for all samples, and grouped the 28 monitoring units HJI (where HJI denotes the observations of monitoring site H, year J and period I) into three groups (A, B and C). Discriminant analysis used only 16 of the 24 parameters to correctly assign 97.3% of the cases. In addition, factor analysis identified 7, 9 and 8 latent factors for groups A, B and C, respectively. PMID:27329059
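    The temporal clustering step can be illustrated with a minimal 1-D k-means over synthetic monthly indicator values. The study itself clustered full multi-parameter groundwater profiles, so the data, the single indicator, and the initial centers here are illustrative assumptions.

```python
def kmeans_1d(values, centers, iters=20):
    """Plain 1-D k-means: returns final centers and a label per value."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda k: abs(v - centers[k]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    labels = [min(range(len(centers)), key=lambda k: abs(v - centers[k]))
              for v in values]
    return centers, labels

# One synthetic indicator value per month (Jan..Dec)
monthly = [2.1, 2.0, 2.2, 2.3, 5.0, 5.2, 5.1, 4.9, 3.4, 3.5, 3.3, 3.6]
centers, labels = kmeans_1d(monthly, centers=[2.0, 4.0, 5.5])
print(labels)
```

    With these values the months fall into three contiguous blocks, mirroring the three temporal periods the cluster analysis recovered.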

  14. [Research on Assessment Methods of Spectrum Data Quality of Core Scan].

    PubMed

    Xiu, Lian-cun; Zheng, Zhi-zhong; Yin, Liang; Chen, Chun-xia; Yu, Zheng-kui; Huang, Bin; Zhang, Qiu-ning; Xiu, Xiao-xu; Gao, Yang

    2015-08-01

    Core scan is an instrument, developed in recent years, for acquiring core spectra and images. With this equipment, cores can be digitized and automatic core cataloguing can be achieved, which provides a basis for geological research, mineral deposit study and peripheral deposit prospecting. Meanwhile, an online database of cores can be established through core digitalization, solving the cost problem of core preservation and allowing resources to be shared. The quality of core measurements directly affects mineral identification and the reliability of parameter inversion results, so it is very important to rigorously assess the instrument's data quality before offering core spectrum testing services. In connection with the independently developed CSD350A core scan, and on the basis of basic spectroscopy theory, spectrum analysis methods and core spectrum analysis requirements, key issues such as data quality assessment methods, evaluation criteria and target parameters have been discussed in depth, and a comprehensive assessment of the independently developed core scan has been conducted, which indicates the reliability and validity of the instrument's spectrum measurements. Experimental tests show that the proposed test parameters and items can fully characterize the instrument's reflectance spectrum, wavelength accuracy, repeatability and signal-to-noise ratio measurements. Thus the quality of core scan data can be evaluated correctly, providing a data quality foundation for commercial services. Given the current lack of relevant assessment criteria, the assessment method proposed in this study has great value in core spectrum measurement work. PMID:26672324

  15. Automatic method of analysis of OCT images in the assessment of the tooth enamel surface after orthodontic treatment with fixed braces

    PubMed Central

    2014-01-01

    Introduction: Fixed orthodontic appliances, despite years of research and development, still raise a lot of controversy because of their potentially destructive influence on enamel. Therefore, it is necessary to quantitatively assess the condition and therein the thickness of tooth enamel in order to select the appropriate orthodontic bonding and debonding methodology, as well as to assess the quality of enamel after treatment and the clean-up procedure, in order to choose the most advantageous course of treatment. One of the assessment methods is optical coherence tomography, where the measurement of enamel thickness and the 3D reconstruction of image sequences can be performed fully automatically. Material and method: OCT images of 180 teeth were obtained from the Topcon 3D OCT-2000 camera. The images were obtained in vitro by performing sequentially 7 stages of treatment on all the teeth: before any interference into enamel, polishing with orthodontic paste, etching and application of a bonding system, orthodontic bracket bonding, orthodontic bracket removal, cleaning off adhesive residue. A dedicated method for the analysis and processing of images involving median filtering, mathematical morphology, binarization, polynomial approximation and the active contour method has been proposed. Results: The obtained results enable automatic measurement of tooth enamel thickness in 5 seconds using the Core i5 CPU M460 @ 2.5GHz 4GB RAM. For one patient, the proposed method of analysis confirms enamel thickness loss of 80 μm (from 730 ± 165 μm to 650 ± 129 μm) after polishing with paste, enamel thickness loss of 435 μm (from 730 ± 165 μm to 295 ± 55 μm) after etching and bonding resin application, growth of a layer having a thickness of 265 μm (from 295 ± 55 μm to 560 ± 98 μm after etching) which corresponds to the adhesive system. After removing an orthodontic bracket, the adhesive residue was 105 μm and after cleaning it off, the enamel thickness was…
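    After binarization, the thickness measurement described above reduces to counting enamel pixels along each A-scan column and scaling by the axial pixel size. The mask and pixel size below are illustrative assumptions, not values from the paper's processing chain.

```python
def thickness_um(mask, pixel_um):
    """mask: list of columns, each a list of 0/1 pixels (1 = enamel).
    Returns the enamel thickness per column in micrometers."""
    return [sum(col) * pixel_um for col in mask]

# Three synthetic A-scan columns, 5 axial pixels each (1 = enamel)
mask = [[0, 1, 1, 1, 0],
        [0, 1, 1, 0, 0],
        [1, 1, 1, 1, 0]]
print(thickness_um(mask, pixel_um=10.0))
```

    In practice the mask would come from the median-filtering, morphology, and active-contour steps listed in the abstract; the counting step itself is this simple.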

  16. QRS detection based ECG quality assessment.

    PubMed

    Hayn, Dieter; Jammerbund, Bernhard; Schreier, Günter

    2012-09-01

    Although immediate feedback concerning ECG signal quality during recording is useful, up to now not much literature describing quality measures is available. We have implemented and evaluated four ECG quality measures. Empty lead criterion (A), spike detection criterion (B) and lead crossing point criterion (C) were calculated from basic signal properties. Measure D quantified the robustness of QRS detection when applied to the signal. An advanced Matlab-based algorithm combining all four measures and a simplified algorithm for Android platforms, excluding measure D, were developed. Both algorithms were evaluated by taking part in the Computing in Cardiology Challenge 2011. Each measure's accuracy and computing time was evaluated separately. During the challenge, the advanced algorithm correctly classified 93.3% of the ECGs in the training-set and 91.6 % in the test-set. Scores for the simplified algorithm were 0.834 in event 2 and 0.873 in event 3. Computing time for measure D was almost five times higher than for other measures. Required accuracy levels depend on the application and are related to computing time. While our simplified algorithm may be accurate for real-time feedback during ECG self-recordings, QRS detection based measures can further increase the performance if sufficient computing power is available. PMID:22902864
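    Two of the basic signal-property measures described (the empty lead criterion A and the spike detection criterion B) can be sketched as below. The thresholds are illustrative assumptions, not the paper's values.

```python
def empty_lead(samples, flat_tol=0.01):
    """Criterion A sketch: True if the lead is essentially a flat line."""
    return max(samples) - min(samples) < flat_tol

def spike_count(samples, jump_mv=2.0):
    """Criterion B sketch: number of implausibly large sample-to-sample jumps."""
    return sum(1 for a, b in zip(samples, samples[1:]) if abs(b - a) > jump_mv)

flat = [0.0] * 100                                 # disconnected electrode
noisy = [0.0, 0.1, 5.0, 0.1, 0.0, -4.0, 0.0]       # two artifact spikes
print(empty_lead(flat), empty_lead(noisy), spike_count(noisy))
```

    Such criteria are cheap enough for real-time feedback on a phone, which is why the simplified Android algorithm keeps them but drops the QRS-detection-based measure D.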

  17. Quality evaluation of extra high quality images based on key assessment word

    NASA Astrophysics Data System (ADS)

    Kameda, Masashi; Hayashi, Hidehiko; Akamatsu, Shigeru; Miyahara, Makoto M.

    2001-06-01

    An all-encompassing goal of our research is to develop an extra high quality imaging system which is able to convey a high-level artistic impression faithfully. We have defined such a high-level artistic impression as a high order sensation, and it is supposed that the high order sensation is expressed by a combination of psychological factors that can be described by plural assessment words. In order to identify the quality factors that are important for the reproduction of the high order sensation, we have focused on the image quality evaluation of extra high quality images using assessment words that take the high order sensation into account. In this paper, we have obtained the hierarchical structure between the collected assessment words and the principles of European painting based on the conveyance model of the high order sensation, and we have determined a key assessment word, 'plasticity', which is able to evaluate the reproduction of the high order sensation more accurately. The results of subjective assessment experiments using a prototype of the developed extra high quality imaging system have shown that the key assessment word 'plasticity' is the most appropriate assessment word for evaluating the image quality of extra high quality images quasi-quantitatively.

  18. River Pollution: Part II. Biological Methods for Assessing Water Quality.

    ERIC Educational Resources Information Center

    Openshaw, Peter

    1984-01-01

    Discusses methods used in the biological assessment of river quality and such indicators of clean and polluted waters as the Trent Biotic Index, Chandler Score System, and species diversity indexes. Includes a summary of a river classification scheme based on quality criteria related to water use. (JN)

  19. Quality of Religious Education in Croatia Assessed from Teachers' Perspective

    ERIC Educational Resources Information Center

    Baric, Denis; Burušic, Josip

    2015-01-01

    The main aim of the present study was to examine the quality of religious education in Croatian primary schools when assessed from teachers' perspective. Religious education teachers (N = 226) rated the impact of certain factors on the existing quality of religious education in primary schools and expressed their expectations about the future…

  20. Goals of Peer Assessment and Their Associated Quality Concepts

    ERIC Educational Resources Information Center

    Gielen, Sarah; Dochy, Filip; Onghena, Patrick; Struyven, Katrien; Smeets, Stijn

    2011-01-01

    The output of peer assessment in higher education has been investigated increasingly in recent decades. However, this output is evaluated against a variety of quality criteria, resulting in a cluttered picture. This article analyses the different conceptualisations of quality that appear in the literature. Discussions about the most appropriate…

  1. Quality of Education, Comparability, and Assessment Choice in Developing Countries

    ERIC Educational Resources Information Center

    Wagner, Daniel A.

    2010-01-01

    Over the past decade, international development agencies have begun to emphasize the improvement of the quality (rather than simply quantity) of education in developing countries. This new focus has been paralleled by a significant increase in the use of educational assessments as a way to measure gains and losses in quality. As this interest in…

  2. School Information: Phase III of Quality Assessment Program. Appendix B.

    ERIC Educational Resources Information Center

    Burson, William W.

    This questionnaire, used in the Educational Quality Assessment Program in Pennsylvania, was designed to be filled out by school administrators. It requests information about staff size, enrollment size, library books available, hours of paraprofessionals, and quality of housing in school district. It also includes a checklist to show the extent of…

  3. Guidance on Data Quality Assessment for Life Cycle Inventory Data

    EPA Science Inventory

    Data quality within Life Cycle Assessment (LCA) is a significant issue for the future support and development of LCA as a decision support tool and its wider adoption within industry. In response to current data quality standards such as the ISO 14000 series, various entities wit...

  4. Assessment of density in enriched colony cages: Egg quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Enriched colony cage production systems are becoming more prevalent in the US. A study was undertaken to determine the impact of housing density on hen health, well-being, egg production and quality. Six densities were examined with 8 housing replicates per density. Egg quality was assessed at hen a...

  5. Assessment of Severe Apnoea through Voice Analysis, Automatic Speech, and Speaker Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Fernández Pozo, Rubén; Blanco Murillo, Jose Luis; Hernández Gómez, Luis; López Gonzalo, Eduardo; Alcázar Ramírez, José; Toledano, Doroteo T.

    2009-12-01

    This study is part of an ongoing collaborative effort between the medical and the signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment. Effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and nonnasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We have achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry.
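    The GMM pattern recognition step described above can be sketched as two class-conditional mixtures compared by log-likelihood. This is a minimal illustration, not the authors' implementation: the synthetic 12-dimensional feature vectors, group means, and component counts below are all invented stand-ins for the paper's speech spectra.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Synthetic feature vectors standing in for real speech spectra;
    # the two groups differ only in their distribution.
    healthy = rng.normal(0.0, 1.0, size=(200, 12))
    apnoea = rng.normal(0.8, 1.2, size=(200, 12))

    # One class-conditional GMM per speaker group.
    gmm_healthy = GaussianMixture(n_components=4, random_state=0).fit(healthy)
    gmm_apnoea = GaussianMixture(n_components=4, random_state=0).fit(apnoea)

    def classify(features):
        """Pick the class whose GMM gives the higher mean log-likelihood."""
        if gmm_apnoea.score(features) > gmm_healthy.score(features):
            return "apnoea"
        return "healthy"

    print(classify(rng.normal(0.8, 1.2, size=(50, 12))))
    ```

    In a real system the features would be spectral parameters extracted from recorded speech, and the decision could also be thresholded on the log-likelihood ratio rather than a hard comparison.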

  6. Technical assessment for quality control of resins

    NASA Technical Reports Server (NTRS)

    Gosnell, R. B.

    1977-01-01

    Survey visits to companies involved in the manufacture and use of graphite-epoxy prepregs were conducted to assess the factors which may contribute to variability in the mechanical properties of graphite-epoxy composites. In particular, the purpose was to assess the contributions of the epoxy resins to variability. Companies represented three segments of the composites industry - aircraft manufacturers, prepreg manufacturers, and epoxy resin manufacturers. Several important sources of performance variability were identified from among the complete spectrum of potential sources which ranged from raw materials to composite test data interpretation.

  7. Random Forest for automatic assessment of heart failure severity in a telemonitoring scenario.

    PubMed

    Guidi, G; Pettenati, M C; Miniati, R; Iadanza, E

    2013-01-01

    In this study, we describe an automatic classifier of patients with Heart Failure designed for a telemonitoring scenario, improving the results obtained in our previous works. Our previous studies showed that the technique that better processes the heart failure typical telemonitoring-parameters is the Classification Tree. We therefore decided to analyze the data with its direct evolution that is the Random Forest algorithm. The results show an improvement both in accuracy and in limiting critical errors. PMID:24110416
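    A Random Forest classifier of the kind described can be sketched as follows. The features, value ranges, and severity rule below are invented stand-ins for the paper's telemonitoring parameters, not the authors' actual data or model.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)

    # Synthetic telemonitoring-style records: weight (kg), heart rate (bpm),
    # systolic BP (mmHg), SpO2 (%); values and rule below are illustrative.
    n = 400
    X = np.column_stack([
        rng.normal(80, 10, n),    # weight
        rng.normal(75, 12, n),    # heart rate
        rng.normal(125, 15, n),   # systolic blood pressure
        rng.normal(96, 2, n),     # SpO2
    ])
    # Toy severity rule: elevated heart rate plus low SpO2.
    y = ((X[:, 1] > 78) & (X[:, 3] < 96)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
    ```

    The forest is the "direct evolution" of a single classification tree in the sense the abstract describes: many trees trained on bootstrap samples, with predictions obtained by voting.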

  8. A preliminary study into performing routine tube output and automatic exposure control quality assurance using radiology information system data.

    PubMed

    Charnock, P; Jones, R; Fazakerley, J; Wilde, R; Dunn, A F

    2011-09-01

    Data are currently being collected from hospital radiology information systems in the North West of the UK for the purposes of both clinical audit and patient dose audit. Could these data also be used to satisfy quality assurance (QA) requirements according to UK guidance? From 2008 to 2009, 731 653 records were submitted from 8 hospitals in North West England. For automatic exposure control (AEC) QA, the protocol from Institute of Physics and Engineering in Medicine (IPEM) report 91 recommends that milliampere-seconds be monitored for repeatability and reproducibility using a suitable phantom at 70-81 kV. Abdomen AP and chest PA examinations were analysed to find the most common kilovoltage used; these records were then used to plot average monthly milliampere-seconds over time. IPEM report 91 also recommends that a range of commonly used clinical settings be used to check output reproducibility and repeatability. For each tube, the dose area product values were plotted over time for the two most common exposure factor sets. Results show that it is possible to perform checks of AEC systems in this way; however, more work is required before tube output performance can be monitored. Procedurally, the management system requires work, and the benefits to the workflow would need to be demonstrated. PMID:21795255
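    The monitoring procedure described (restrict to the 70-81 kV band, take the most common kV per examination type, then chart the monthly average mAs) can be sketched with pandas. The column names and values below are invented stand-ins for a real RIS export, not the actual schema used in the study.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)

    # Synthetic RIS-style export; one row per exposure.
    n = 1000
    df = pd.DataFrame({
        "exam": rng.choice(["Abdomen AP", "Chest PA"], n),
        "kv": rng.choice([70, 75, 81, 90, 110], n),
        "mas": rng.normal(20, 2, n),
        "date": pd.to_datetime("2008-01-01")
                + pd.to_timedelta(rng.integers(0, 365, n), unit="D"),
    })

    # Restrict to the 70-81 kV band, then find the most common kV per exam.
    band = df[df["kv"].between(70, 81)]
    modal = (band.groupby("exam")["kv"]
                 .agg(lambda s: s.mode().iloc[0])
                 .rename("modal_kv")
                 .reset_index())

    # Monthly mean mAs at each exam's modal kV -- the trend to chart for
    # AEC repeatability/reproducibility checks.
    sel = band.merge(modal, on="exam")
    sel = sel[sel["kv"] == sel["modal_kv"]]
    trend = sel.groupby(["exam", sel["date"].dt.to_period("M")])["mas"].mean()
    print(trend.head())
    ```

    A drifting AEC would show up as a trend or step change in this monthly series rather than random scatter about a stable mean.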

  9. Assumptions Commonly Underlying Government Quality Assessment Practices

    ERIC Educational Resources Information Center

    Schmidtlein, Frank A.

    2004-01-01

    The current interest in governmental assessment and accountability practices appears to result from: (1) an emerging view of higher education as an "industry"; (2) concerns about efficient resource allocation; (3) a lack of trust between government and institutional officials; (4) a desire to reduce uncertainty in government/higher education…

  10. Developing Quality Physical Education through Student Assessments

    ERIC Educational Resources Information Center

    Fisette, Jennifer L.; Placek, Judith H.; Avery, Marybell; Dyson, Ben; Fox, Connie; Franck, Marian; Graber, Kim; Rink, Judith; Zhu, Weimo

    2009-01-01

    The National Association of Sport and Physical Education (NASPE) is committed to providing teachers with the support and guiding principles for implementing valid assessments. Its goal is for physical educators to utilize PE Metrics to measure student learning based on the national standards. The first PE Metrics text provides teachers with…

  11. Quality assessment of plant transpiration water

    NASA Technical Reports Server (NTRS)

    Macler, Bruce A.; Janik, Daniel S.; Benson, Brian L.

    1990-01-01

    It has been proposed to use plants as elements of biologically-based life support systems for long-term space missions. Three roles have been brought forth for plants in this application: recycling of water, regeneration of air and production of food. This report discusses recycling of water and presents data from investigations of plant transpiration water quality. Aqueous nutrient solution was applied to several plant species and transpired water collected. The findings indicated that this water typically contained 0.3-6 ppm of total organic carbon, which meets hygiene water standards for NASA's space applications. It suggests that this method could be developed to achieve potable water standards.

  12. Towards real-time image quality assessment

    NASA Astrophysics Data System (ADS)

    Geary, Bobby; Grecos, Christos

    2011-03-01

    We introduce a real-time implementation and evaluation of a new, fast, accurate full-reference image quality metric. The popular general-purpose image quality metric known as the Structural Similarity Index Metric (SSIM) has been shown to be effective, efficient, and useful, finding many practical and theoretical applications. Recently the authors proposed an enhanced version of the SSIM algorithm known as the Rotated Gaussian Discrimination Metric (RGDM). This approach uses a Gaussian-like discrimination function to evaluate local contrast and luminance. RGDM was inspired by an exploration of local statistical parameter variations in relation to variation of Mean Opinion Score (MOS) for a range of particular distortion types. In this paper we outline the salient features of the derivation of RGDM and show how analysis of the local statistics of each distortion type necessitates variation in the discrimination function width. Results on the LIVE image database show tight banding of the RGDM metric value when plotted against mean opinion score, indicating the usefulness of this metric. We then explore a number of strategies for algorithmic speed-up, including the application of integral images for patch-based computation, cost reduction in the evaluation of the discrimination function, and general loop unrolling. We also employ fast Single Instruction Multiple Data (SIMD) intrinsics and explore data-parallel decomposition on a multi-core Intel processor.
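    The integral-image speed-up mentioned above replaces per-patch summation with four table lookups, so every patch statistic costs O(1) regardless of patch size. A minimal sketch for k-by-k patch means (illustrative only, not the authors' RGDM code):

    ```python
    import numpy as np

    def integral_image(img):
        """Summed-area table with a zero row and column prepended."""
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
        ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
        return ii

    def patch_means(img, k):
        """Mean of every k-by-k patch; four lookups per patch, not k*k adds."""
        ii = integral_image(np.asarray(img, dtype=np.float64))
        s = ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]
        return s / (k * k)

    img = np.arange(25, dtype=np.float64).reshape(5, 5)
    print(patch_means(img, 3))  # matches the brute-force 3x3 patch means
    ```

    The same table (plus one over the squared image) gives patch variances, which is what makes SSIM-style local luminance/contrast terms cheap to compute in real time.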

  13. A new assessment method for image fusion quality

    NASA Astrophysics Data System (ADS)

    Li, Liu; Jiang, Wanying; Li, Jing; Yuchi, Ming; Ding, Mingyue; Zhang, Xuming

    2013-03-01

    Image fusion quality assessment plays a critically important role in the field of medical imaging. To evaluate image fusion quality effectively, a lot of assessment methods have been proposed. Examples include mutual information (MI), root mean square error (RMSE), and universal image quality index (UIQI). These image fusion assessment methods could not reflect the human visual inspection effectively. To address this problem, we have proposed a novel image fusion assessment method which combines the nonsubsampled contourlet transform (NSCT) with the regional mutual information in this paper. In this proposed method, the source medical images are firstly decomposed into different levels by the NSCT. Then the maximum NSCT coefficients of the decomposed directional images at each level are obtained to compute the regional mutual information (RMI). Finally, multi-channel RMI is computed by the weighted sum of the obtained RMI values at the various levels of NSCT. The advantage of the proposed method lies in the fact that the NSCT can represent image information using multidirections and multi-scales and therefore it conforms to the multi-channel characteristic of human visual system, leading to its outstanding image assessment performance. The experimental results using CT and MRI images demonstrate that the proposed assessment method outperforms such assessment methods as MI and UIQI based measure in evaluating image fusion quality and it can provide consistent results with human visual assessment.
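    The mutual-information building block at the heart of the method above can be illustrated with a plain histogram-based estimate. This is a minimal sketch: the paper's full pipeline adds NSCT decomposition, regional computation, and per-level weighting, all omitted here, and the images and bin count are made up.

    ```python
    import numpy as np

    def mutual_information(a, b, bins=32):
        """Histogram estimate of the mutual information between two images."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                      # skip log(0) terms
        return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

    rng = np.random.default_rng(0)
    src = rng.random((64, 64))
    fused_good = 0.9 * src + 0.1 * rng.random((64, 64))  # retains source info
    fused_bad = rng.random((64, 64))                     # unrelated image

    print(mutual_information(src, fused_good))  # high
    print(mutual_information(src, fused_bad))   # near zero
    ```

    A fused image that preserves more information from its sources scores higher; the paper computes this regionally on NSCT subbands and sums the values with per-level weights.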

  14. School Indoor Air Quality Assessment and Program Implementation.

    ERIC Educational Resources Information Center

    Prill, R.; Blake, D.; Hales, D.

    This paper describes the effectiveness of a three-step indoor air quality (IAQ) program implemented by 156 schools in the states of Washington and Idaho during the 2000-2001 school year. An experienced IAQ/building science specialist conducted walk-through assessments at each school. These assessments documented deficiencies and served as an…

  15. Floristic Quality Assessment Across the Nation: Status, Opportunities, and Challenges

    EPA Science Inventory

    Floristic Quality Assessment (FQA) will be considered in the USEPA National Wetland Condition Assessment (NWCA). FQA is a powerful tool to describe wetland ecological condition, and is based on Coefficients of Conservatism (CC) of individual native plant species. CCs rank sensiti...

  16. No-reference visual quality assessment for image inpainting

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Frantc, V. A.; Marchuk, V. I.; Sherstobitov, A. I.; Egiazarian, K.

    2015-03-01

    Inpainting has received a lot of attention in recent years, and quality assessment is an important task in evaluating different image reconstruction approaches. When recovering large areas of missing pixels, many inpainting methods blur sharp transitions and image contours, and they often fail to recover curved boundary edges. Quantitative metrics for inpainting results do not currently exist, so researchers rely on human comparisons to evaluate their methodologies and techniques. Most objective quality assessment methods require a reference image, which is usually unavailable in inpainting applications, leaving subjective assessment by human observers, a difficult and time-consuming procedure. This paper focuses on a machine learning approach to no-reference visual quality assessment for image inpainting based on properties of human vision. Our method builds on the observation that Local Binary Patterns describe the local structural information of an image well. We use a support vector regression, trained on images assessed by humans, to predict the perceived quality of inpainted images. We demonstrate how our predicted quality value correlates with qualitative opinion in a human observer study. Results are shown on a human-scored dataset for different inpainting methods.
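    The LBP-plus-regression idea can be sketched end to end. Everything below is an illustrative assumption rather than the authors' implementation: blur strength stands in for the human quality scores, the 8-neighbour LBP is a simplified variant, and the image sizes and sample counts are invented.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    def box3(img):
        """3x3 box blur with edge padding."""
        p = np.pad(img, 1, mode="edge")
        return sum(p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

    def lbp_histogram(img):
        """Normalised histogram of basic 8-neighbour local binary patterns."""
        c = img[1:-1, 1:-1]
        codes = np.zeros_like(c, dtype=np.uint8)
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
        for bit, (dy, dx) in enumerate(shifts):
            nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
            codes |= (nb >= c).astype(np.uint8) << bit
        hist = np.bincount(codes.ravel(), minlength=256)
        return hist / hist.sum()

    rng = np.random.default_rng(0)

    # Toy training set: blur strength k stands in for the human quality score
    # (the paper instead trains on human-assessed inpainted images).
    X, y = [], []
    for _ in range(60):
        img = rng.random((32, 32))
        k = rng.uniform(0, 1)                # 0 = sharp, 1 = heavily smoothed
        X.append(lbp_histogram((1 - k) * img + k * box3(img)))
        y.append(1 - k)                      # higher score = better "quality"

    model = SVR().fit(X, y)
    print(model.predict([lbp_histogram(rng.random((32, 32)))]))
    ```

    The regression maps a texture-statistics vector to a perceptual score without needing a reference image, which is the defining property of a no-reference metric.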

  17. Quality Assessment of TPB-Based Questionnaires: A Systematic Review

    PubMed Central

    Oluka, Obiageli Crystal; Nie, Shaofa; Sun, Yi

    2014-01-01

    Objective This review is aimed at assessing the quality of questionnaires and their development process based on the theory of planned behavior (TPB) change model. Methods A systematic literature search for studies with the primary aim of TPB-based questionnaire development was conducted in relevant databases between 2002 and 2012 using selected search terms. Ten of 1,034 screened abstracts met the inclusion criteria and were assessed for methodological quality using two different appraisal tools: one for the overall methodological quality of each study and the other developed for the appraisal of the questionnaire content and development process. Both appraisal tools consisted of items regarding the likelihood of bias in each study and were eventually combined to give the overall quality score for each included study. Results 8 of the 10 included studies showed low risk of bias in the overall quality assessment of each study, while 9 of the studies were of high quality based on the quality appraisal of questionnaire content and development process. Conclusion Quality appraisal of the questionnaires in the 10 reviewed studies was successfully conducted, highlighting the top problem areas (including: sample size estimation; inclusion of direct and indirect measures; and inclusion of questions on demographics) in the development of TPB-based questionnaires and the need for researchers to provide a more detailed account of their development process. PMID:24722323

  18. Assessing and communicating data quality in policy-relevant research

    NASA Astrophysics Data System (ADS)

    Costanza, Robert; Funtowicz, Silvio O.; Ravetz, Jerome R.

    1992-01-01

    The quality of scientific information in policy-relevant fields of research is difficult to assess, and quality control in these important areas is correspondingly difficult to maintain. Frequently there are insufficient high-quality measurements for the presentation of the statistical uncertainty in the numerical estimates that are crucial to policy decisions. We propose and develop a grading system for numerical estimates that can deal with the full range of data quality—from statistically valid estimates to informed guesses. By analyzing the underlying quality of numerical estimates, summarized as spread and grade, we are able to provide simple rules whereby input data can be coded for quality, and these codings carried through arithmetical calculations for assessing the quality of model results. For this we use the NUSAP (numeral, unit, spread, assessment, pedigree) notational system. It allows the more quantitative and the more qualitative aspects of data uncertainty to be managed separately. By way of example, we apply the system to an ecosystem valuation study that used several different models and data of widely varying quality to arrive at a single estimate of the economic value of wetlands. The NUSAP approach illustrates the major sources of uncertainty in this study and can guide new research aimed at the improvement of the quality of outputs and the efficiency of the procedures.

  19. Quantitative statistical methods for image quality assessment.

    PubMed

    Dutta, Joyita; Ahn, Sangtae; Li, Quanzheng

    2013-01-01

    Quantitative measures of image quality and reliability are critical for both qualitative interpretation and quantitative analysis of medical images. While, in theory, it is possible to analyze reconstructed images by means of Monte Carlo simulations using a large number of noise realizations, the associated computational burden makes this approach impractical. Additionally, this approach is less meaningful in clinical scenarios, where multiple noise realizations are generally unavailable. The practical alternative is to compute closed-form analytical expressions for image quality measures. The objective of this paper is to review statistical analysis techniques that enable us to compute two key metrics: resolution (determined from the local impulse response) and covariance. The underlying methods include fixed-point approaches, which compute these metrics at a fixed point (the unique and stable solution) independent of the iterative algorithm employed, and iteration-based approaches, which yield results that are dependent on the algorithm, initialization, and number of iterations. We also explore extensions of some of these methods to a range of special contexts, including dynamic and motion-compensated image reconstruction. While most of the discussed techniques were developed for emission tomography, the general methods are extensible to other imaging modalities as well. In addition to enabling image characterization, these analysis techniques allow us to control and enhance imaging system performance. We review practical applications where performance improvement is achieved by applying these ideas to the contexts of both hardware (optimizing scanner design) and image reconstruction (designing regularization functions that produce uniform resolution or maximize task-specific figures of merit). PMID:24312148

  1. Assessing quality management in an R and D environment

    SciTech Connect

    Thompson, B.D.

    1998-02-01

    Los Alamos National Laboratory (LANL) is a premier research and development institution operated by the University of California for the US Department of Energy. Since 1991, LANL has pursued a heightened commitment to developing world-class quality in management and operations. In 1994 LANL adopted the Malcolm Baldrige National Quality Award criteria as a framework for all activities and initiated more formalized customer focus and quality management. Five measurement systems drive the current integration of quality efforts: an annual Baldrige-based assessment, a customer focus program, customer-driven performance measurement, an employee performance management system and annual employee surveys, and integrated planning processes with associated goals and measures.

  2. Quality assessment of protein model-structures based on structural and functional similarities

    PubMed Central

    2012-01-01

    Background Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. The gap between the number of protein structures deposited in the World Wide Protein Data Bank and the number of sequenced proteins is constantly broadening. Computational modeling is deemed to be one of the ways to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model a structure from a sequence or from partial structural information, e.g. contact maps. Consequently, biologists can automatically generate a putative 3D structure model of any protein. The main issue then becomes the evaluation of model quality, which is one of the most important challenges of structural biology. Results GOBA - Gene Ontology-Based Assessment is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using the standardized and hierarchical description of proteins provided by Gene Ontology, combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single-model quality assessment method; the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. Conclusions The validation shows that the method is able to discriminate between good and bad model-structures. The best of the tested GOBA scores achieved 0.74 and 0.8 as a mean Pearson correlation to the observed quality of models in our CASP8 and CASP9-based validation sets. GOBA also obtained the best

  3. Assessing the Quality of Clinical Teachers

    PubMed Central

    Bolhuis, Sanneke; Grol, Richard; Laan, Roland; Wensing, Michel

    2010-01-01

    Background Learning in a clinical environment differs from formal educational settings and provides specific challenges for clinicians who are teachers. Instruments that reflect these challenges are needed to identify the strengths and weaknesses of clinical teachers. Objective To systematically review the content, validity, and aims of questionnaires used to assess clinical teachers. Data Sources MEDLINE, EMBASE, PsycINFO and ERIC from 1976 up to March 2010. Review Methods The searches revealed 54 papers on 32 instruments. Data from these papers were documented by independent researchers, using a structured format that included content of the instrument, validation methods, aims of the instrument, and its setting. Results Aspects covered by the instruments predominantly concerned the use of teaching strategies (included in 30 instruments), supporter role (29), role modeling (27), and feedback (26). Providing opportunities for clinical learning activities was included in 13 instruments. Most studies referred to literature on good clinical teaching, although they failed to provide a clear description of what constitutes a good clinical teacher. Instrument length varied from 1 to 58 items. Except for two instruments, all had to be completed by clerks/residents. Instruments served to provide formative feedback ( instruments) but were also used for resource allocation, promotion, and annual performance review (14 instruments). All but two studies reported on internal consistency and/or reliability; other aspects of validity were examined less frequently. Conclusions No instrument covered all relevant aspects of clinical teaching comprehensively. Validation of the instruments was often limited to assessment of internal consistency and reliability. Available instruments for assessing clinical teachers should be used carefully, especially for consequential decisions. There is a need for more valid comprehensive instruments. Electronic supplementary material The online

  4. Acoustic Analysis of Inhaler Sounds From Community-Dwelling Asthmatic Patients for Automatic Assessment of Adherence

    PubMed Central

    D'arcy, Shona; Costello, Richard W.

    2014-01-01

    Inhalers are devices which deliver medication to the airways in the treatment of chronic respiratory diseases. When used correctly, inhalers relieve and improve patients' symptoms. However, adherence to inhaler medication has been demonstrated to be poor, leading to reduced clinical outcomes, wasted medication, and higher healthcare costs. There is a clinical need for a system that can accurately monitor inhaler adherence, as currently no method exists to evaluate how patients use their inhalers between clinic visits. This paper presents a method of automatically evaluating inhaler adherence through acoustic analysis of inhaler sounds. An acoustic monitoring device was employed to record the sounds patients produce while using a Diskus dry powder inhaler, in addition to the time and date patients use the inhaler. An algorithm was designed and developed to automatically detect inhaler events from the audio signals and provide feedback regarding patient adherence. The algorithm was evaluated on 407 audio files obtained from 12 community-dwelling asthmatic patients. Results of the automatic classification were compared against two expert human raters. For patients for whom the human raters' Cohen's kappa agreement score was >0.81, results indicated that the algorithm's accuracy was 83% in determining the correct inhaler technique score compared with the raters. This paper has several clinical implications, as it demonstrates the feasibility of using acoustics to objectively monitor patient inhaler adherence and provide real-time personalized medical care for a chronic respiratory illness. PMID:27170883
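    Cohen's kappa, used above to screen for rater agreement, can be computed directly from two label sequences: observed agreement corrected for the agreement expected by chance. A minimal sketch with invented toy labels:

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: observed agreement corrected for chance agreement."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        ca, cb = Counter(rater_a), Counter(rater_b)
        expected = sum(ca[label] * cb[label] for label in ca) / (n * n)
        return (observed - expected) / (1 - expected)

    # Two raters scoring inhaler technique on ten audio files (toy labels).
    a = ["ok", "ok", "bad", "ok", "bad", "ok", "ok", "bad", "ok", "ok"]
    b = ["ok", "ok", "bad", "ok", "ok", "ok", "ok", "bad", "ok", "ok"]
    print(round(cohens_kappa(a, b), 3))  # 0.737
    ```

    By the usual convention, values above roughly 0.81 indicate "almost perfect" agreement, which is why the study uses that cut-off when comparing the algorithm against the human raters.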

  5. Automatic Blocked Roads Assessment after Earthquake Using High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Rastiveis, H.; Hosseini-Zirdoo, E.; Eslamizade, F.

    2015-12-01

    In 2010, an earthquake struck the city of Port-au-Prince, Haiti, without warning and killed over 300,000 people. According to historical data, such an earthquake had not previously occurred in the area. The unpredictability of earthquakes necessitates comprehensive mitigation efforts to minimize deaths and injuries. Roads blocked by debris from destroyed buildings may increase the difficulty of rescue activities, so a damage map that specifies blocked and unblocked roads can be extremely helpful for a rescue team. In this paper, a novel method is presented for producing such a destruction map from a pre-event vector map and high-resolution WorldView-2 satellite images acquired after the earthquake. First, in a pre-processing step, image quality improvement and co-registration of image and map are performed. Then, after extraction of texture descriptors from the post-quake image and SVM classification, different terrain types are detected in the image. Finally, based on the classification results, specifically objects belonging to the "debris" class, damage analysis is performed to estimate the damage percentage; both the area and the shape of objects in the "debris" class are taken into account. This process is applied to every road in the road layer. In this research, a pre-event digital vector map and a post-event high-resolution WorldView-2 satellite image of Port-au-Prince, Haiti's capital, were used to evaluate the proposed method. The algorithm was executed on a 1200 × 800 m² subset of the data set containing 60 roads, and all the roads were labelled correctly. Visual examination confirmed the suitability of this method for damage assessment of urban road networks after an earthquake.
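    The classify-then-measure-debris step can be sketched with block texture features and an SVM. The per-block mean/variance features, synthetic tiles, and class layout below are invented stand-ins for the paper's texture descriptors and satellite imagery.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)

    def block_features(img, bs=8):
        """Per-block mean and variance, a crude texture descriptor."""
        feats = []
        for i in range(0, img.shape[0], bs):
            for j in range(0, img.shape[1], bs):
                blk = img[i:i + bs, j:j + bs]
                feats.append([blk.mean(), blk.var()])
        return np.array(feats)

    # Synthetic training tiles: smooth "road" versus high-variance "debris".
    road = rng.normal(0.5, 0.02, (64, 64))
    debris = rng.normal(0.5, 0.25, (64, 64))
    X = np.vstack([block_features(road), block_features(debris)])
    y = np.array([0] * 64 + [1] * 64)      # 0 = road, 1 = debris
    clf = SVC().fit(X, y)

    # Blocked fraction for a road strip that is half covered by rubble.
    strip = np.hstack([rng.normal(0.5, 0.02, (64, 32)),
                       rng.normal(0.5, 0.25, (64, 32))])
    pred = clf.predict(block_features(strip))
    print(f"blocked fraction: {pred.mean():.2f}")
    ```

    The fraction of a road's blocks classified as debris plays the role of the damage percentage; the paper additionally weighs the shape of the debris objects, which this sketch omits.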

  6. No Reference Video-Quality-Assessment Model for Monitoring Video Quality of IPTV Services

    NASA Astrophysics Data System (ADS)

    Yamagishi, Kazuhisa; Okamoto, Jun; Hayashi, Takanori; Takahashi, Akira

    Service providers should monitor the quality of experience of a communication service in real time to confirm its status. To this end, we previously proposed a packet-layer model for monitoring the average video quality of typical Internet protocol television content using parameters derived from transmitted packet headers. However, it is difficult to monitor the video quality per user from the average video quality, because video quality depends on the video content. To accurately monitor the video quality per user, a model is needed that estimates the video quality per video content rather than the average video quality. Therefore, to take into account the impact of video content on video quality, we propose a model that calculates the difference between the video quality of the estimation-target video and the average video quality estimated by a packet-layer model. We first conducted extensive subjective quality assessments for different codecs and video sequences. We then modelled their characteristics based on parameters related to compression and packet loss. Finally, we verified the performance of the proposed model by applying it to unknown data sets different from the training data sets used for developing the model.

  7. Constructing Assessment Model of Primary and Secondary Educational Quality with Talent Quality as the Core Standard

    ERIC Educational Resources Information Center

    Chen, Benyou

    2014-01-01

    Quality is the core of education, and it is important to the standardization of primary and secondary education in urban (U) and rural (R) areas. The ultimate goal of the integration of urban and rural education is the pursuit of quality urban and rural education. Based on an analysis of the related policy basis and the existing assessment models…

  8. Service Quality Assessment Scale (SQAS): An Instrument for Evaluating Service Quality of Health-Fitness Clubs

    ERIC Educational Resources Information Center

    Lam, Eddie T. C.; Zhang, James J.; Jensen, Barbara E.

    2005-01-01

    This study was designed to develop the Service Quality Assessment Scale to evaluate the service quality of health-fitness clubs. Through a review of literature, field observations, interviews, modified application of the Delphi technique, and a pilot study, a preliminary scale with 46 items was formulated. The preliminary scale was administered to…

  9. Assessing Quality in Graduate Programs: An Internal Quality Indicator. AIR Forum 1981 Paper.

    ERIC Educational Resources Information Center

    DiBiasio, Daniel A.; And Others

    Four approaches to measuring quality in graduate education are reviewed, and the approach used at the graduate school at Ohio State University is assessed. Four approaches found in the literature are: measuring quality by reputation, by scholarly productivity, by correlating reputation and scholarly productivity, and by multiple measures. Ohio…

  10. Poster — Thur Eve — 70: Automatic lung bronchial and vessel bifurcations detection algorithm for deformable image registration assessment

    SciTech Connect

    Labine, Alexandre; Carrier, Jean-François; Bedwani, Stéphane; Chav, Ramnada; De Guise, Jacques

    2014-08-15

    Purpose: To investigate an automatic bronchial and vessel bifurcation detection algorithm for deformable image registration (DIR) assessment to improve lung cancer radiation treatment. Methods: 4DCT datasets were acquired and exported to the Varian treatment planning system (TPS) EclipseTM for contouring. The lungs TPS contour was used as the prior shape for a segmentation algorithm based on hierarchical surface deformation that identifies the deformed lung volumes of the 10 breathing phases. A Hounsfield unit (HU) threshold filter was applied within the segmented lung volumes to identify blood vessels and airways. Segmented blood vessels and airways were skeletonised using a hierarchical curve-skeleton algorithm based on a generalized potential field approach. A graph representation of the computed skeleton was generated to assign one of three labels to each node: termination, continuation, or branching. Results: 320 ± 51 bifurcations were detected in the right lung of a patient for the 10 breathing phases. The bifurcations were visually analyzed. 92 ± 10 bifurcations were found in the upper half of the lung and 228 ± 45 bifurcations were found in the lower half of the lung. Discrepancies between the ten vessel trees were mainly ascribed to large deformations and to regions where the HU varies. Conclusions: We established an automatic method for DIR assessment using the morphological information of the patient anatomy. This approach allows a description of the lung's internal structure movement, which is needed to validate the DIR deformation fields for accurate 4D cancer treatment planning.
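The node-labeling step described above reduces to classifying each skeleton-graph node by its degree; a minimal sketch using an adjacency-list representation (the data structure and label strings are assumptions):

```python
def label_skeleton_nodes(adjacency):
    """Label each node of a curve-skeleton graph by its degree:
    degree <= 1 -> termination, degree 2 -> continuation, degree >= 3 -> branching."""
    labels = {}
    for node, neighbours in adjacency.items():
        degree = len(neighbours)
        if degree <= 1:
            labels[node] = "termination"
        elif degree == 2:
            labels[node] = "continuation"
        else:
            labels[node] = "branching"
    return labels

# Toy airway: a path A-B-C with two branches C-D and C-E leaving node C.
skeleton = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D", "E"],
            "D": ["C"], "E": ["C"]}
labels = label_skeleton_nodes(skeleton)
bifurcations = [n for n, lab in labels.items() if lab == "branching"]  # ["C"]
```

Counting the "branching" nodes per lung region gives figures of the kind reported in the Results section.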

  11. Assessment of mesh simplification algorithm quality

    NASA Astrophysics Data System (ADS)

    Roy, Michael; Nicolier, Frederic; Foufou, S.; Truchetet, Frederic; Koschan, Andreas; Abidi, Mongi A.

    2002-03-01

    Traditionally, medical geneticists have employed visual inspection (anthroposcopy) to clinically evaluate dysmorphology. In the last 20 years, there has been an increasing trend towards quantitative assessment to render diagnosis of anomalies more objective and reliable. These methods have focused on direct anthropometry, using a combination of classical physical anthropology tools and new instruments tailor-made to describe craniofacial morphometry. These methods are painstaking and require that the patient remain still for extended periods of time. Most recently, semiautomated techniques (e.g., structured light scanning) have been developed to capture the geometry of the face in a matter of seconds. In this paper, we establish that direct anthropometry and structured light scanning yield reliable measurements, with remarkably high levels of inter-rater and intra-rater reliability, as well as validity (contrasting the two methods).

  12. Methods for assessing the quality of runoff from Minnesota peatlands

    SciTech Connect

    Clausen, J.C.

    1981-01-01

    The quality of runoff from large, undisturbed peatlands in Minnesota is characterized, and sampling results from a number of bogs (referred to as a multiple watershed approach) were used to assess the effects of peat mining on the quality of bog runoff. Runoff from 45 natural peatlands and one mined bog was sampled five times in 1979-80 and analyzed for 34 water quality characteristics. Peatland watersheds were classified as bog, transition, or fen, based upon both water quality and watershed characteristics. Alternative classification methods were based on frequency distributions, cluster analysis, discriminant analysis, and principal component analysis results. A multiple watershed approach was used as a basis for drawing inferences regarding the quality of runoff from a representative sample of natural bogs and a mined bog. The multiple watershed technique applied provides an alternative to long-term paired watershed experiments in evaluating the effects of land use activities on the quality of runoff from peatlands in Minnesota.

  13. Assessing the quality of a student-generated question repository

    NASA Astrophysics Data System (ADS)

    Bates, Simon P.; Galloway, Ross K.; Riise, Jonathan; Homer, Danny

    2014-12-01

    We present results from a study that categorizes and assesses the quality of questions and explanations authored by students in question repositories produced as part of the summative assessment in introductory physics courses over two academic sessions. Mapping question quality onto the levels in the cognitive domain of Bloom's taxonomy, we find that students produce questions of high quality. More than three-quarters of questions fall into categories beyond simple recall, in contrast to similar studies of student-authored content in different subject domains. Similarly, the quality of student-authored explanations for questions was also high, with approximately 60% of all explanations classified as being of high or outstanding quality. Overall, 75% of questions met combined quality criteria, which we hypothesize is due in part to the in-class scaffolding activities that we provided for students ahead of requiring them to author questions. This work presents the first systematic investigation into the quality of student produced assessment material in an introductory physics context, and thus complements and extends related studies in other disciplines.

  14. Cloud-Based Smart Health Monitoring System for Automatic Cardiovascular and Fall Risk Assessment in Hypertensive Patients.

    PubMed

    Melillo, P; Orrico, A; Scala, P; Crispino, F; Pecchia, L

    2015-10-01

    The aim of this paper is to describe the design and the preliminary validation of a platform developed to collect and automatically analyze biomedical signals for risk assessment of vascular events and falls in hypertensive patients. This m-health platform, based on cloud computing, was designed to be flexible, extensible, and transparent, and to provide proactive remote monitoring via data-mining functionalities. A retrospective study was conducted to train and test the platform. The developed system was able to predict a future vascular event within the next 12 months with an accuracy rate of 84% and to identify fallers with an accuracy rate of 72%. In an ongoing prospective trial, almost all the recruited patients received the system favorably, with a limited rate of non-adherence causing data losses (<20%). The developed platform supported clinical decisions by processing tele-monitored data and providing quick and accurate risk assessment of vascular events and falls. PMID:26276015

  15. Development of a 3D optical scanning-based automatic quality assurance system for proton range compensators

    SciTech Connect

    Kim, MinKyu; Ju, Sang Gyu E-mail: doho.choi@samsung.com; Chung, Kwangzoo; Hong, Chae-Seon; Kim, Jinsung; Ahn, Sung Hwan; Jung, Sang Hoon; Han, Youngyih; Chung, Yoonsun; Cho, Sungkoo; Choi, Doo Ho E-mail: doho.choi@samsung.com; Kim, Jungkuk; Shin, Dongho

    2015-02-15

    Purpose: A new automatic quality assurance (AutoRCQA) system using a three-dimensional scanner (3DS) with system automation was developed to improve the accuracy and efficiency of the quality assurance (QA) procedure for proton range compensators (RCs). The system performance was evaluated for clinical implementation. Methods: The AutoRCQA system consists of a three-dimensional measurement system (3DMS) based on 3DS and in-house developed verification software (3DVS). To verify the geometrical accuracy, the planned RC data (PRC), calculated with the treatment planning system (TPS), were reconstructed and coregistered with the measured RC data (MRC) based on the beam isocenter. The PRC and MRC inner surfaces were compared with composite analysis (CA) using 3DVS, using the CA pass rate for quantitative analysis. To evaluate the detection accuracy of the system, the authors designed a fake PRC by artificially adding small cubic islands with side lengths of 1.5, 2.5, and 3.5 mm on the inner surface of the PRC and performed CA with the depth difference and distance-to-agreement tolerances of [1 mm, 1 mm], [2 mm, 2 mm], and [3 mm, 3 mm]. In addition, the authors performed clinical tests using seven RCs [computerized milling machine (CMM)-RCs] manufactured by CMM, which were designed for treating various disease sites. The systematic offsets of the seven CMM-RCs were evaluated through the automatic registration function of AutoRCQA. For comparison with conventional technique, the authors measured the thickness at three points in each of the seven CMM-RCs using a manual depth measurement device and calculated thickness difference based on the TPS data (TPS-manual measurement). These results were compared with data obtained from 3DVS. The geometrical accuracy of each CMM-RC inner surface was investigated using the TPS data by performing CA with the same criteria. The authors also measured the net processing time, including the scan and analysis time. Results: The Auto
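The composite analysis (CA) used above combines a depth-difference (DD) criterion with a distance-to-agreement (DTA) criterion: a point passes if either is satisfied within tolerance. A simplified 1-D sketch over surface profiles (the grid, brute-force DTA search, and pass rule are illustrative assumptions, not the 3DVS implementation):

```python
import numpy as np

def composite_pass_rate(planned, measured, xs, dd_tol, dta_tol):
    """1-D composite analysis: a measured point passes if its depth
    difference is within dd_tol, OR a planned point with matching depth
    (within dd_tol) lies within dta_tol laterally.

    planned, measured : surface depth profiles sampled at positions xs (mm)
    Returns the CA pass rate in percent.
    """
    passed = 0
    for i, x in enumerate(xs):
        # Depth-difference criterion at the same lateral position.
        if abs(measured[i] - planned[i]) <= dd_tol:
            passed += 1
            continue
        # Distance-to-agreement: search planned points within dta_tol.
        near = np.abs(xs - x) <= dta_tol
        if np.any(np.abs(planned[near] - measured[i]) <= dd_tol):
            passed += 1
    return 100.0 * passed / len(xs)

xs = np.arange(0.0, 10.0, 0.5)   # lateral positions in mm
planned = np.sin(xs)             # planned compensator inner-surface depths (mm)
measured = planned.copy()
measured[4] += 0.4               # small milling error at one point
rate = composite_pass_rate(planned, measured, xs, dd_tol=1.0, dta_tol=1.0)
```

With a [1 mm, 1 mm] tolerance the single 0.4 mm error passes and the rate is 100%; tightening the depth tolerance below 0.4 mm makes that point fail both criteria.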

  16. Analysis of results obtained using the automatic chemical control of the quality of the water heat carrier in the drum boiler of the Ivanovo CHP-3 power plant

    NASA Astrophysics Data System (ADS)

    Larin, A. B.; Kolegov, A. V.

    2012-10-01

    Results of industrial tests of the new method used for the automatic chemical control of the quality of boiler water of the drum-type power boiler (Pd = 13.8 MPa) are described. The possibility of using an H-cationite column for measuring the electric conductivity of an H-cationized sample of boiler water over a long period of time is shown.

  17. Human Variome Project Quality Assessment Criteria for Variation Databases.

    PubMed

    Vihinen, Mauno; Hancock, John M; Maglott, Donna R; Landrum, Melissa J; Schaafsma, Gerard C P; Taschner, Peter

    2016-06-01

    Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease(s). These databases are widely considered as the most reliable information source for a particular gene/protein/disease, but it should also be made clear they may have widely varying contents, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The HVP quality evaluation criteria that resulted are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases, BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance. PMID:26919176

  18. Quality, management, and the interplay of self-assessment, process assessments, and performance-based observations

    NASA Astrophysics Data System (ADS)

    Willett, D. J.

    1993-04-01

    In this document, the author presents his observations on the topic of quality assurance (QA). Traditionally the focus of quality management has been on QA organizations, manuals, procedures, audits, and assessments; quality was measured by the degree of conformance to specifications or standards. Today quality is defined as satisfying user needs and is measured by user satisfaction. The author proposes that quality is the responsibility of line organizations and staff and not the responsibility of the QA group. This work outlines an effective Conduct of Operations program. The author concludes his observations with a discussion of how quality is analogous to leadership.

  19. The Automatic Assessment of Free Text Answers Using a Modified BLEU Algorithm

    ERIC Educational Resources Information Center

    Noorbehbahani, F.; Kardan, A. A.

    2011-01-01

    e-Learning plays an undoubtedly important role in today's education and assessment is one of the most essential parts of any instruction-based learning process. Assessment is a common way to evaluate a student's knowledge regarding the concepts related to learning objectives. In this paper, a new method for assessing the free text answers of…
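Although the abstract is truncated above, the title names a modified BLEU algorithm; the core of any BLEU-style scorer is clipped n-gram precision against a reference answer, combined with a brevity penalty. A minimal sketch restricted to unigrams and bigrams (the paper's own modifications are not reproduced):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Clipped n-gram precision with brevity penalty, uniform weights."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each n-gram count by its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(clipped / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages overly short answers.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean

score = bleu("the heart pumps blood through the body",
             "the heart pumps blood around the body")
```

A free-text grader would compare a student answer against one or more model answers and map the score to marks.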

  20. [The importance of assessing the "quality of life" in surgical interventions for critical lower limb ischaemia].

    PubMed

    Papp, László

    2004-02-01

    'Patency' and 'limb salvage' are not automatically valid parameters when the functional outcome of treatment for critical limb ischaemia is assessed. In a small number of patients the functional result is not favourable despite the anatomical patency and limb salvage. The considerable investment of human/financial resources in the treatment of these patients is retrospectively questionable in such cases. Quality of Life questionnaires give valuable information on the functional outcome of any means of treatment for critical ischaemia. The problem with the generic tools in one particular sub-group of patients is the reliability and validity of the tests. The first disease-specific test in critical limb ischaemia is the King's College Vascular Quality of Life (VascuQoL) Questionnaire. Its use is recommended in patients with critical lower limb ischaemia. It is very useful for scientific reporting and is able to show retrospectively that particular group of patients in whom the technical success of the treatment did not result in improvement in quality of life. In general practice the use of the questionnaire can decrease the factor of subjectivity in the assessment of the current status of a patient with newly diagnosed or previously treated critical ischaemia. PMID:15270520

  1. Quality Assessment and Physicochemical Characteristics of Bran Enriched Chapattis

    PubMed Central

    Dar, B. N.; Sharma, Savita; Singh, Baljit; Kaur, Gurkirat

    2014-01-01

    Cereal brans, singly and in combination, were blended at varying levels (5 and 10%) for the development of Chapattis. The bran-enriched Chapattis were assessed for quality and physicochemical characteristics. On the basis of the quality assessment, the 10% enrichment level was the best for Chapatti. Moisture content, water activity, and free fatty acids remained stable during the study period. The quality assessment and physicochemical characterization revealed that dough handling and puffing of bran-enriched Chapattis prepared at the 5 and 10% levels of bran supplementation did not vary significantly. All types of bran-enriched Chapattis except rice-bran-enriched Chapattis showed nonsticky behavior during dough handling. Bran-enriched Chapattis exhibited full puffing character during preparation. The sensory attributes showed that both 5 and 10% bran-supplemented Chapattis were acceptable. PMID:26904644

  2. Procedure for assessing visual quality for landscape planning and management

    NASA Astrophysics Data System (ADS)

    Gimblett, H. Randal; Fitzgibbon, John E.; Bechard, Kevin P.; Wightman, J. A.; Itami, Robert M.

    1987-07-01

    Incorporation of aesthetic considerations in the process of landscape planning and development has frequently met with poor results due to its lack of theoretical basis, public involvement, and failure to deal with spatial implications. This problem has been especially evident when dealing with large areas, for example, the Adirondacks, Scenic Highways, and National Forests and Parks. This study made use of public participation to evaluate scenic quality in a portion of the Niagara Escarpment in Southern Ontario, Canada. The results of this study were analyzed using the visual management model proposed by Brown and Itami (1982) as a means of assessing and evaluating scenic quality. The map analysis package formulated by Tomlin (1980) was then applied to this assessment for the purpose of spatial mapping of visual impact. The results of this study illustrate that it is possible to assess visual quality for landscape/management, preservation, and protection using a theoretical basis, public participation, and a systematic spatial mapping process.

  3. Assessment of permeation quality of concrete through mercury intrusion porosimetry

    SciTech Connect

    Kumar, Rakesh; Bhattacharjee, B

    2004-02-01

    Permeation quality of laboratory cast concrete beams was determined through initial surface absorption test (ISAT). The pore system characteristics of the same concrete beam specimens were determined through mercury intrusion porosimetry (MIP). Data so obtained on the measured initial surface absorption rate of water by concrete and characteristics of pore system of concrete estimated from porosimetry results were used to develop correlations between them. Through these correlations, potential of MIP in assessing the durability quality of concrete in actual structure is demonstrated.

  4. Space Shuttle flying qualities and flight control system assessment study

    NASA Technical Reports Server (NTRS)

    Myers, T. T.; Johnston, D. E.; Mcruer, D.

    1982-01-01

    The suitability of existing and proposed flying quality and flight control system criteria for application to the space shuttle orbiter during atmospheric flight phases was assessed. An orbiter experiment for flying qualities and flight control system design criteria is discussed. Orbiter longitudinal and lateral-directional flying characteristics, flight control system lag and time delay considerations, and flight control manipulator characteristics are included. Data obtained from conventional aircraft may be inappropriate for application to the shuttle orbiter.

  5. Assessment of foodservice quality and identification of improvement strategies using hospital foodservice quality model

    PubMed Central

    Kim, Kyungjoo; Kim, Minyoung

    2010-01-01

    The purposes of this study were to assess hospital foodservice quality and to identify causes of quality problems and improvement strategies. Based on the review of literature, hospital foodservice quality was defined and the Hospital Foodservice Quality model was presented. The study was conducted in two steps. In Step 1, nutritional standards specified on diet manuals and nutrients of planned menus, served meals, and consumed meals for regular, diabetic, and low-sodium diets were assessed in three general hospitals. Quality problems were found in all three hospitals since patients consumed less than their nutritional requirements. Considering the effects of four gaps in the Hospital Foodservice Quality model, Gaps 3 and 4 were selected as critical control points (CCPs) for hospital foodservice quality management. In Step 2, the causes of the gaps and improvement strategies at CCPs were labeled as "quality hazards" and "corrective actions", respectively and were identified using a case study. At Gap 3, inaccurate forecasting and a lack of control during production were identified as quality hazards and corrective actions proposed were establishing an accurate forecasting system, improving standardized recipes, emphasizing the use of standardized recipes, and conducting employee training. At Gap 4, quality hazards were menus of low preferences, inconsistency of menu quality, a lack of menu variety, improper food temperatures, and patients' lack of understanding of their nutritional requirements. To reduce Gap 4, the dietary departments should conduct patient surveys on menu preferences on a regular basis, develop new menus, especially for therapeutic diets, maintain food temperatures during distribution, provide more choices, conduct meal rounds, and provide nutrition education and counseling. The Hospital Foodservice Quality Model was a useful tool for identifying causes of the foodservice quality problems and improvement strategies from a holistic point of view.

  6. Machine learning approach for objective inpainting quality assessment

    NASA Astrophysics Data System (ADS)

    Frantc, V. A.; Voronin, V. V.; Marchuk, V. I.; Sherstobitov, A. I.; Agaian, S.; Egiazarian, K.

    2014-05-01

    This paper focuses on a machine learning approach to objective inpainting quality assessment. Inpainting has received a lot of attention in recent years, and quality assessment is an important task in evaluating different image reconstruction approaches. Quantitative metrics for successful image inpainting currently do not exist; researchers instead rely upon qualitative human comparisons to evaluate their methodologies and techniques. We present an approach to objective inpainting quality assessment based on natural image statistics and machine learning techniques. Our method is based on the observation that when images are properly normalized or transferred to a transform domain, local descriptors can be modeled by parametric distributions whose shapes differ between non-inpainted and inpainted images. This approach yields a feature vector strongly correlated with subjective image perception by the human visual system. Next, we use support vector regression, trained on human-assessed images, to predict the perceived quality of inpainted images. We demonstrate that our predicted quality value reliably correlates with qualitative opinion in a human observer study.
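One common way to realize the idea of parametric distributions of normalized local descriptors is to compute mean-subtracted, contrast-normalized (MSCN) coefficients and summarize their distribution shape, feeding the result to a learned regressor such as the SVR mentioned above. The sketch below covers only the feature step and is an illustrative assumption, not the authors' exact pipeline:

```python
import numpy as np

def mscn(image, window=3, eps=1.0):
    """Mean-subtracted contrast-normalized coefficients via a local box window."""
    img = image.astype(float)
    pad = window // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    mu = np.zeros_like(img)
    var = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            mu[i, j] = patch.mean()
            var[i, j] = patch.var()
    return (img - mu) / (np.sqrt(var) + eps)

def shape_features(coeffs):
    """Distribution-shape features: variance and sample kurtosis."""
    c = coeffs.ravel() - coeffs.mean()
    var = c.var()
    kurt = (c ** 4).mean() / (var ** 2 + 1e-12)
    return np.array([var, kurt])

rng = np.random.default_rng(0)
img = rng.normal(128.0, 20.0, size=(16, 16))
features = shape_features(mscn(img))  # would feed an SVR in the full pipeline
```

Inpainted regions tend to shift such shape statistics relative to untouched image content, which is what the learned regressor exploits.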

  7. Recreational stream assessment using Malaysia water quality index

    NASA Astrophysics Data System (ADS)

    Ibrahim, Hanisah; Kutty, Ahmad Abas

    2013-11-01

    River water quality assessment is crucial in order to quantify and monitor spatial and temporal variation. Malaysia uses the WQI and NWQS indices to evaluate river water quality; however, studies of recreational river water quality are still scarce. A study was conducted to determine the water quality of selected recreational rivers and to determine the impact of recreation on these streams. Three recreational streams, namely Sungai Benus, Sungai Cemperuh, and Sungai Luruh in Janda Baik, Pahang, were selected. Five sampling stations were chosen from each river at 200-400 m intervals. Six water quality parameters, BOD5, COD, TSS, pH, ammoniacal nitrogen, and dissolved oxygen, were measured. Sampling and analysis were conducted following standard methods prepared by USEPA. These parameters were used to calculate the water quality sub-indices and finally an indicative WQI value using the Malaysia water quality index formula. Results indicate that all recreational streams have excellent water quality, with WQI values ranging from 89 to 94. Most water quality parameters were homogeneous between sampling sites and between streams. A one-way ANOVA test indicated no significant difference between sub-index values (p > 0.05, α = 0.05). Only BOD and COD exhibited slight variation between stations, which may be due to organic domestic waste from visitors. The study demonstrated that visitor impact on the recreational streams is minimal and that the streams are suitable for direct-contact recreation.
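The Malaysia water quality index referenced above is, in the widely published DOE formulation, a weighted sum of six sub-indices, one per parameter measured in the study; the weights below follow that common formulation, but the official piecewise sub-index curves are omitted, so the function assumes sub-index values (0-100) are already computed:

```python
def malaysia_wqi(si_do, si_bod, si_cod, si_an, si_ss, si_ph):
    """Weighted DOE water quality index from six sub-indices (each 0-100):
    dissolved oxygen, BOD, COD, ammoniacal nitrogen, suspended solids, pH."""
    return (0.22 * si_do + 0.19 * si_bod + 0.16 * si_cod
            + 0.15 * si_an + 0.16 * si_ss + 0.12 * si_ph)

# A stream where every sub-index is excellent:
wqi = malaysia_wqi(95, 92, 90, 94, 93, 96)
clean = wqi > 81  # 81-100 is often quoted as the "clean" class boundary
```

The weights sum to 1.0, so a stream with all sub-indices at 100 scores exactly 100; the WQI range of 89-94 reported in the abstract falls in the "clean" class under this common classification.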

  8. Semi-automatic detection of Gd-DTPA-saline filled capsules for colonic transit time assessment in MRI

    NASA Astrophysics Data System (ADS)

    Harrer, Christian; Kirchhoff, Sonja; Keil, Andreas; Kirchhoff, Chlodwig; Mussack, Thomas; Lienemann, Andreas; Reiser, Maximilian; Navab, Nassir

    2008-03-01

    Functional gastrointestinal disorders result in a significant number of consultations in primary care facilities. Chronic constipation and diarrhea are regarded as two of the most common diseases, affecting between 2% and 27% of the population in western countries [1-3]. Defecatory disorders are most commonly due to dysfunction of the pelvic floor or the anal sphincter. Although an exact differentiation of these pathologies is essential for adequate therapy, diagnosis is still only based on a clinical evaluation [1]. Regarding quantification of constipation, only the ingestion of radio-opaque markers or radioactive isotopes and the consecutive assessment of colonic transit time using X-ray or scintigraphy, respectively, has been feasible in clinical settings [4-8]. However, these approaches have several drawbacks, such as involving rather inconvenient, time-consuming examinations and exposing the patient to ionizing radiation. Therefore, conventional assessment of colonic transit time has not been widely used. Most recently, a new technique for the assessment of colonic transit time using MRI and MR-contrast-media-filled capsules has been introduced [9]. However, due to the numerous examination dates per patient and the corresponding datasets with many images, the evaluation of the image data is relatively time-consuming. The aim of our study was to develop a computer tool to facilitate the detection of the capsules in MRI datasets and thus to shorten the evaluation time. We present a semi-automatic tool which provides an intensity- , size- [10], and shape-based [11,12] detection of ingested Gd-DTPA-saline filled capsules. After an automatic pre-classification, radiologists may easily correct the results using the application-specific user interface, thereby decreasing the evaluation time significantly.
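The intensity- and size-based part of such a detector can be sketched as thresholding followed by connected-component labeling and an area filter; the threshold, size bounds, and toy image below are illustrative assumptions (the shape criterion is omitted):

```python
from collections import deque

def detect_bright_blobs(image, threshold, min_size, max_size):
    """Find 4-connected components of above-threshold pixels whose area lies
    in [min_size, max_size]; returns a list of pixel-coordinate lists."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for i in range(h):
        for j in range(w):
            if seen[i][j] or image[i][j] < threshold:
                continue
            # Breadth-first flood fill of one connected component.
            comp, queue = [], deque([(i, j)])
            seen[i][j] = True
            while queue:
                y, x = queue.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and image[ny][nx] >= threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if min_size <= len(comp) <= max_size:
                blobs.append(comp)
    return blobs

# Toy slice: one capsule-sized bright blob and one single bright noise pixel.
img = [[0, 0, 0, 0, 0],
       [0, 9, 9, 0, 0],
       [0, 9, 9, 0, 8],
       [0, 0, 0, 0, 0]]
capsules = detect_bright_blobs(img, threshold=5, min_size=3, max_size=10)
```

The size filter rejects isolated bright noise; candidate blobs would then be screened by shape and shown to the radiologist for correction.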

  9. Assessment of phytoplankton diversity as an indicator of water quality

    SciTech Connect

    Yergeau, S.E.; Lang, A.; Teeters, R.

    1997-08-01

    For the measurement of water quality in freshwater systems, there are established indices using macroinvertebrate larvae. There is no such comparable measure for marine and estuarine environments. A phytoplankton diversity index (PDI), whose basic form was conceived by Dr. Ruth Gyure of Save the Sound, Inc., is being investigated as a possible candidate to rectify this situation. Phytoplankton were chosen as the indicators of water quality since algae have short generation times and respond quickly to changing water quality conditions. The methodologies involved in this initial assessment of the PDI are incorporated into the Adopt-a-Harbor water quality monitoring program and its associated laboratory. The virtues of the procedures are that they are simple and quick to use, suitable for trained volunteers to carry out, easily reproducible, and amenable to quality assurance checks.
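The abstract does not give the PDI's exact form; indices of this kind are typically built on the Shannon diversity index H' = -sum(p_i * ln p_i) over taxon abundances, often paired with Pielou's evenness. Treating the PDI this way is an assumption for illustration:

```python
import math

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over taxon abundance counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def evenness(counts):
    """Pielou's evenness J' = H' / ln(S), where S = number of taxa present."""
    s = sum(1 for c in counts if c > 0)
    return shannon_diversity(counts) / math.log(s) if s > 1 else 0.0

# Four phytoplankton taxa counted in one water sample:
counts = [25, 25, 25, 25]        # perfectly even community
h = shannon_diversity(counts)    # ln(4), about 1.386
j = evenness(counts)             # about 1.0
```

A community dominated by one bloom-forming taxon would score a much lower H', which is exactly the sensitivity to changing water quality that makes phytoplankton attractive indicators.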

  10. Assessment of voice quality: Current state-of-the-art.

    PubMed

    Barsties, Ben; De Bodt, Marc

    2015-06-01

    Voice quality is not clearly defined, but it can be regarded as a multidimensional perceived construct. There are therefore broadly two approaches to measuring voice quality: (1) subjective measurements that score a client's voice, reflecting the rater's judgment of the voice, and (2) objective measurements that apply a specific algorithm to quantify certain aspects of a correlate of vocal production. This paper collects and discusses a number of critical issues in the current state of the art of voice quality assessment (auditory-perceptual judgment, objective acoustic analysis, and aerodynamic measurement) in clinical practice and research that may be helpful for clinicians and researchers. PMID:25440411

  11. New image quality assessment method using wavelet leader pyramids

    NASA Astrophysics Data System (ADS)

    Chen, Xiaolin; Yang, Xiaokang; Zheng, Shibao; Lin, Weiyao; Zhang, Rui; Zhai, Guangtao

    2011-06-01

    In this paper, we propose a wavelet leader pyramid-based visual information fidelity method for image quality assessment. Motivated by the observations that the human visual system (HVS) is more sensitive to edge and contour regions and that human visual sensitivity varies with spatial frequency, we first introduce two-dimensional wavelet leader pyramids to robustly extract multiscale edge information. Based on the wavelet leader pyramids, we further propose a visual information fidelity metric that evaluates image quality by quantifying the information loss between the original and the distorted images. Experimental results show that our method outperforms many state-of-the-art image quality metrics.

  12. Myocardial Iron Loading Assessment by Automatic Left Ventricle Segmentation with Morphological Operations and Geodesic Active Contour on T2* images

    PubMed Central

    Luo, Yun-gang; Ko, Jacky KL; Shi, Lin; Guan, Yuefeng; Li, Linong; Qin, Jing; Heng, Pheng-Ann; Chu, Winnie CW; Wang, Defeng

    2015-01-01

    Myocardial iron loading in thalassemia patients can be identified using T2* magnetic resonance imaging (MRI). To quantitatively assess cardiac iron loading, we propose an effective algorithm to segment aligned free induction decay sequential myocardium images based on morphological operations and geodesic active contours (GAC). Patients with thalassemia major were recruited (10 male and 16 female) to undergo a thoracic MRI scan in the short-axis view. Free induction decay images were registered for T2* mapping. The GAC was utilized to segment aligned MR images with a robust initialization. Segmented myocardium regions were divided into sectors for a region-based quantification of cardiac iron loading. Our proposed automatic segmentation approach achieved a true positive rate of 84.6% and a false positive rate of 53.8%. The area difference between manual and automatic segmentation was 25.5% after 1000 iterations. Results from the T2* analysis indicated that regions with T2* values lower than 20 ms suffered from heavy iron loading in thalassemia major patients. The proposed method benefited from the abundant edge information of the free induction decay sequential MRI. Experimental results demonstrated that the proposed method is feasible for myocardium segmentation and clinically applicable to measuring myocardial iron loading. PMID:26215336
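T2* mapping of the registered free induction decay images amounts to fitting a mono-exponential decay S(TE) = S0 * exp(-TE / T2*) per pixel across echo times; the 20 ms heavy-loading cutoff follows the abstract, while the log-linear fitting route and the echo times are illustrative assumptions:

```python
import numpy as np

def fit_t2star(echo_times_ms, signals):
    """Estimate T2* (ms) by a log-linear least-squares fit of
    S(TE) = S0 * exp(-TE / T2*), i.e. a line fit to log(S) vs TE."""
    te = np.asarray(echo_times_ms, dtype=float)
    log_s = np.log(np.asarray(signals, dtype=float))
    slope, _intercept = np.polyfit(te, log_s, 1)
    return -1.0 / slope

# Synthetic pixel with true T2* = 15 ms sampled at 8 echo times (ms):
te = np.arange(2.0, 18.0, 2.0)
signal = 100.0 * np.exp(-te / 15.0)
t2star = fit_t2star(te, signal)
heavy_iron_loading = t2star < 20.0   # threshold quoted in the abstract
```

Applying this fit inside each segmented myocardial sector yields the region-based iron-loading quantification the paper describes.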

  13. Myocardial Iron Loading Assessment by Automatic Left Ventricle Segmentation with Morphological Operations and Geodesic Active Contour on T2* images

    NASA Astrophysics Data System (ADS)

    Luo, Yun-Gang; Ko, Jacky KL; Shi, Lin; Guan, Yuefeng; Li, Linong; Qin, Jing; Heng, Pheng-Ann; Chu, Winnie CW; Wang, Defeng

    2015-07-01

    Myocardial iron loading in thalassemia patients can be identified using T2* magnetic resonance imaging (MRI). To quantitatively assess cardiac iron loading, we proposed an algorithm to segment myocardium in aligned free induction decay image sequences, based on morphological operations and a geodesic active contour (GAC). Patients with thalassemia major (10 male and 16 female) were recruited to undergo a thoracic MRI scan in the short-axis view. Free induction decay images were registered for T2* mapping. The GAC was used to segment the aligned MR images from a robust initialization. Segmented myocardium regions were divided into sectors for a region-based quantification of cardiac iron loading. Our automatic segmentation approach achieved a true positive rate of 84.6% and a false positive rate of 53.8%. The area difference between manual and automatic segmentation was 25.5% after 1000 iterations. T2* analysis indicated that regions with T2* values lower than 20 ms suffered from heavy iron loading in thalassemia major patients. The proposed method benefits from the abundant edge information of the free induction decay sequential MRI. Experimental results demonstrate that the method is feasible for myocardium segmentation and clinically applicable to measuring myocardial iron loading.

  14. Image enhancement and quality measures for dietary assessment using mobile devices

    NASA Astrophysics Data System (ADS)

    Xu, Chang; Zhu, Fengqing; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2012-03-01

    Measuring accurate dietary intake is considered to be an open research problem in the nutrition and health fields. We are developing a system, known as the mobile device food record (mdFR), to automatically identify and quantify foods and beverages consumed, based on analyzing meal images captured with a mobile device. The mdFR makes use of a fiducial marker and other contextual information to calibrate the imaging system so that accurate amounts of food can be estimated from the scene. Food identification is a difficult problem since foods can vary dramatically in appearance. Such variations may arise not only from non-rigid deformations and intra-class variability in shape, texture, color and other visual properties, but also from changes in illumination and viewpoint. To address the color consistency problem, this paper describes illumination quality assessment methods implemented on a mobile device and three post-capture color correction methods.
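
    The paper's three color correction methods are not detailed in the abstract. As an illustration of a classic post-capture correction in this family, a gray-world white balance can be sketched as follows (the function name and the assumption of an RGB image in [0, 1] are mine):

```python
import numpy as np

def gray_world_correct(img):
    """Gray-world white balance: scale each channel so its mean matches
    the overall mean intensity. img is an HxWx3 float array in [0, 1]."""
    img = np.asarray(img, float)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # per-channel averages
    gains = channel_means.mean() / channel_means      # equalising gains
    return np.clip(img * gains, 0.0, 1.0)
```

    The gray-world assumption (the scene averages to gray) is crude but cheap enough to run on a mobile device, which is the setting the paper targets.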

  15. Assessment of visual landscape quality using IKONOS imagery.

    PubMed

    Ozkan, Ulas Yunus

    2014-07-01

    The assessment of visual landscape quality is of importance to the management of urban woodlands. Satellite remote sensing may be used for this purpose as a substitute for traditional survey techniques that are both labour-intensive and time-consuming. This study examines the association between the perceived visual landscape quality of urban woodlands and texture measures extracted from IKONOS satellite data, which feature 4-m spatial resolution and four spectral bands. The study was conducted in the woodlands of Istanbul (the most important element of the urban mosaic) lying along both shores of the Bosporus Strait. The visual quality assessment applied in this study is based on the perceptual approach and was performed via a survey of expressed preferences. For this purpose, representative photographs of real scenery were used to elicit observers' preferences. A slide show comprising 33 images was presented to a group of 153 volunteers (all undergraduate students), who were asked to rate the visual quality of each on a 10-point scale (1 for very low visual quality, 10 for very high). Average visual quality scores were calculated for each landscape. Texture measures were acquired using two methods: pixel-based and object-based. Pixel-based texture measures were extracted from the first principal component (PC1) image. Object-based texture measures were extracted using the original four bands. The association between image texture measures and perceived visual landscape quality was tested via Pearson's correlation coefficient. The analysis found a strong linear association between image texture measures and visual quality. The highest correlation coefficient was calculated between the standard deviation of gray levels (SDGL), one of the pixel-based texture measures, and visual quality (r = 0.82, P < 0.05). The results showed that perceived visual quality of urban woodland landscapes can be estimated by using texture measures extracted from satellite data.
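
    Both quantities the study correlates, the SDGL texture measure and Pearson's r, are straightforward to compute. A small numpy sketch (the helper names are mine, not from the paper):

```python
import numpy as np

def sdgl(image):
    """Standard deviation of gray levels: the pixel-based texture measure
    that correlated best with perceived visual quality in the study."""
    return float(np.std(np.asarray(image, float)))

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum()))
```

    In the study's setting, `x` would hold the SDGL value of each of the 33 scenes and `y` the corresponding mean preference score.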

  16. How to assess the quality of your analytical method?

    PubMed

    Topic, Elizabeta; Nikolac, Nora; Panteghini, Mauro; Theodorsson, Elvar; Salvagno, Gian Luca; Miler, Marijana; Simundic, Ana-Maria; Infusino, Ilenia; Nordin, Gunnar; Westgard, Sten

    2015-10-01

    Laboratory medicine is amongst the fastest growing fields in medicine, crucial in diagnosis, support of prevention and in the monitoring of disease for individual patients and for the evaluation of treatment for populations of patients. Therefore, high quality and safety in laboratory testing have a prominent role in high-quality healthcare. Applied knowledge and competencies of professionals in laboratory medicine increase the clinical value of laboratory results by decreasing laboratory errors, increasing appropriate utilization of tests, and increasing cost effectiveness. This collective paper provides insights into how to validate laboratory assays and assess the quality of methods. It is a synopsis of the lectures at the 15th European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Continuing Postgraduate Course in Clinical Chemistry and Laboratory Medicine entitled "How to assess the quality of your method?" (Zagreb, Croatia, 24-25 October 2015). The leading topics discussed include who should do what, and when, in the validation/verification of methods; verification of imprecision and bias; verification of reference intervals; verification of qualitative test procedures; verification of blood collection systems; comparability of results among methods and analytical systems; limit of detection, limit of quantification and limit of decision; how to assess measurement uncertainty; the optimal use of Internal Quality Control and External Quality Assessment data; Six Sigma metrics; performance specifications; and biological variation. This article, which continues the annual tradition of collective papers from the EFLM continuing postgraduate courses in clinical chemistry and laboratory medicine, aims to provide further contributions by discussing the quality of laboratory methods and measurements and, at the same time, to offer continuing professional development to the attendees. PMID:26408611
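
    One of the listed topics, the Six Sigma metric, has a standard closed form: sigma = (TEa − |bias|) / CV, with allowable total error, bias and coefficient of variation all expressed in percent. A one-function sketch:

```python
def sigma_metric(tea_percent, bias_percent, cv_percent):
    """Six Sigma metric for an analytical method:
    sigma = (TEa - |bias|) / CV, all terms in percent.
    Values of 6 or more indicate world-class analytical performance."""
    return (tea_percent - abs(bias_percent)) / cv_percent
```

    For example, a method with a 10% allowable total error, 2% bias and 2% imprecision runs at 4 sigma.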

  17. Technology-Enhanced Assessment of Math Fact Automaticity: Patterns of Performance for Low- and Typically Achieving Students

    ERIC Educational Resources Information Center

    Stickney, Eric M.; Sharp, Lindsay B.; Kenyon, Amanda S.

    2012-01-01

    Because math fact automaticity has been identified as a key barrier for students struggling with mathematics, we examined how initial math achievement levels influenced the path to automaticity (e.g., variation in number of attempts, speed of retrieval, and skill maintenance over time) and the relation between attainment of automaticity and gains…

  18. Analysis of quality raw data of second generation sequencers with Quality Assessment Software

    PubMed Central

    2011-01-01

    Background: Second generation technologies have advantages over Sanger sequencing; however, they have introduced new challenges for the genome construction process, especially because of the small size of the reads, despite the high degree of coverage. Independent of the program chosen for the construction process, DNA sequences are superimposed, based on identity, to extend the reads, generating contigs; mismatches indicate a lack of homology and are not included. This process improves our confidence in the sequences that are generated. Findings: We developed Quality Assessment Software, with which one can review graphs showing the distribution of quality values from the sequencing reads. This software allows users to adopt more stringent quality standards for sequence data, based on quality-graph analysis and estimated coverage after applying the quality filter, providing acceptable sequence coverage for genome construction from short reads. Conclusions: Quality filtering is a fundamental step in the process of constructing genomes, as it reduces the frequency of incorrect alignments caused by measuring errors, which can occur during the construction process due to the size of the reads, provoking misassemblies. Application of quality filters to sequence data, using the software Quality Assessment, along with graphing analyses, provided greater precision in the definition of cutoff parameters, which increased the accuracy of genome construction. PMID:21501521
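
    A minimal sketch of the kind of read-quality filter the paper describes: decode per-base Phred scores from Sanger-offset ASCII and drop reads whose mean quality falls below a cutoff (the record layout and the Q20 cutoff are illustrative, not taken from the paper):

```python
def mean_phred(quality_string, offset=33):
    """Mean Phred score of one read, decoded from ASCII (Sanger offset 33)."""
    return sum(ord(c) - offset for c in quality_string) / len(quality_string)

def filter_reads(records, min_mean_q=20):
    """Keep (header, sequence, quality) records whose mean quality
    passes the cutoff; the rest are discarded before assembly."""
    return [r for r in records if mean_phred(r[2]) >= min_mean_q]
```

    In a FASTQ file the quality string `IIII` decodes to Q40 per base, while `####` decodes to Q2, so only the former survives a Q20 filter.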

  19. Automated content and quality assessment of full-motion-video for the generation of meta data

    NASA Astrophysics Data System (ADS)

    Harguess, Josh

    2015-05-01

    Virtually all of the video data (and full-motion video (FMV)) currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems, unmanned aerial vehicles (UAVs), and similar sources is often of low quality or otherwise corrupted, so that it is not worth storing or analyzing. To make progress on automatic video analysis, the first question to answer is whether the content of the video is worth analyzing at all. We present work in progress addressing three types of scenes typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produces video that is generally not usable by an analyst or an automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV and to identify and generate meta-data (or tags) for video segments which exhibit the unwanted scenarios described above. Results are shown on representative real-world video data.
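
    As a toy stand-in for the motion-detection step (the paper uses optical flow; here plain frame differencing, with a hypothetical threshold), flagging low-motion segments might look like:

```python
import numpy as np

def flag_static_segments(frames, motion_thresh=2.0):
    """Flag frames with little scene motion using the mean absolute
    frame-to-frame difference, a cheap stand-in for optical flow.
    frames: sequence of equally sized 2-D grayscale arrays."""
    frames = [np.asarray(f, float) for f in frames]
    flags = [False]  # first frame has no predecessor; keep it
    for prev, cur in zip(frames, frames[1:]):
        flags.append(bool(np.abs(cur - prev).mean() < motion_thresh))
    return flags
```

    Frames flagged `True` would be candidates for removal or for tagging in the generated meta-data.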

  20. Automatic assessment of a two-phase structure in the duplex stainless-steel SAF 2205

    SciTech Connect

    Komenda, J.; Sandstroem, R.

    1993-10-01

    Automatic image analysis was used to study the effect of deformation on the size and distribution of the austenite and ferrite phases in the duplex stainless steel SAF 2205 (22Cr-5Ni-3Mo-0.15N). The main parameters used were the chord size to characterize the ferrite phase and Feret's diameter for the austenite phase. As the deformation increased, ferrite bands became more elongated and thinner, contributing to a pronounced banding. The amount of banding can be quantified by using the ratio between the slopes of the chord size distributions in the longitudinal and short transverse directions. A proposed model of the influence of deformation on the two-phase structure describes the process of austenite elongation and the subdivision (crushing) of austenite islands. The effect of deformation on the yield and tensile strength was expressed using a Hall-Petch type relationship in which the grain size was represented by the average width of the ferrite bands. The observed anisotropy in strength properties is believed to be due to texture hardening. Because elongation at a given strength level is the same in both the longitudinal and transverse directions, the banding itself does not influence the ductility. Nor can the strength anisotropy be due to banding, because the strength is greater in the longitudinal than in the transverse direction.
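
    The Hall-Petch type relationship mentioned above takes the usual form sigma = sigma_0 + k / sqrt(d), with the average ferrite band width playing the role of the grain size d. Schematically, with hypothetical constants:

```python
import math

def hall_petch(sigma0, k, band_width_um):
    """Hall-Petch type relation: strength rises as the ferrite band width
    (standing in for grain size) shrinks.
    sigma0 (MPa) and k (MPa*um^0.5) are material constants; the values
    used in any call here are illustrative, not fitted to SAF 2205."""
    return sigma0 + k / math.sqrt(band_width_um)
```

    Halving the band width raises the strength contribution of the second term by a factor of sqrt(2), which is why thinner, more elongated ferrite bands correlate with higher strength.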

  1. Assessing the Utility of Automatic Cancer Registry Notifications Data Extraction from Free-Text Pathology Reports

    PubMed Central

    Nguyen, Anthony N.; Moore, Julie; O’Dwyer, John; Philpot, Shoni

    2015-01-01

    Cancer Registries record cancer data by reading and interpreting pathology cancer specimen reports. For some Registries this can be a manual process, which is labour and time intensive and subject to errors. A system for automatic extraction of cancer data from HL7 electronic free-text pathology reports has been proposed to improve the workflow efficiency of the Cancer Registry. The system is currently processing an incoming trickle feed of HL7 electronic pathology reports from across the state of Queensland in Australia to produce an electronic cancer notification. Natural language processing and symbolic reasoning using SNOMED CT were adopted in the system; Queensland Cancer Registry business rules were also incorporated. A set of 220 unseen pathology reports selected from patients with a range of cancers was used to evaluate the performance of the system. The system achieved overall recall of 0.78, precision of 0.83 and F-measure of 0.80 over seven categories, namely, basis of diagnosis (3 classes), primary site (66 classes), laterality (5 classes), histological type (94 classes), histological grade (7 classes), metastasis site (19 classes) and metastatic status (2 classes). These results are encouraging given the large cross-section of cancers. The system allows for the provision of clinical coding support as well as indicative statistics on the current state of cancer, which is not otherwise available. PMID:26958232
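
    The single overall recall/precision/F-measure reported across seven categories suggests aggregating per-class counts, e.g. by micro-averaging. A sketch of that aggregation (the micro-averaging scheme is my assumption; the abstract does not state how the categories were combined):

```python
def micro_prf(per_class_counts):
    """Micro-averaged precision, recall and F-measure from per-category
    (true_positive, false_positive, false_negative) count triples."""
    tp = sum(c[0] for c in per_class_counts)
    fp = sum(c[1] for c in per_class_counts)
    fn = sum(c[2] for c in per_class_counts)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```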

  2. Assessing the Utility of Automatic Cancer Registry Notifications Data Extraction from Free-Text Pathology Reports.

    PubMed

    Nguyen, Anthony N; Moore, Julie; O'Dwyer, John; Philpot, Shoni

    2015-01-01

    Cancer Registries record cancer data by reading and interpreting pathology cancer specimen reports. For some Registries this can be a manual process, which is labour and time intensive and subject to errors. A system for automatic extraction of cancer data from HL7 electronic free-text pathology reports has been proposed to improve the workflow efficiency of the Cancer Registry. The system is currently processing an incoming trickle feed of HL7 electronic pathology reports from across the state of Queensland in Australia to produce an electronic cancer notification. Natural language processing and symbolic reasoning using SNOMED CT were adopted in the system; Queensland Cancer Registry business rules were also incorporated. A set of 220 unseen pathology reports selected from patients with a range of cancers was used to evaluate the performance of the system. The system achieved overall recall of 0.78, precision of 0.83 and F-measure of 0.80 over seven categories, namely, basis of diagnosis (3 classes), primary site (66 classes), laterality (5 classes), histological type (94 classes), histological grade (7 classes), metastasis site (19 classes) and metastatic status (2 classes). These results are encouraging given the large cross-section of cancers. The system allows for the provision of clinical coding support as well as indicative statistics on the current state of cancer, which is not otherwise available. PMID:26958232

  3. Total Quality Management in Higher Education: A Critical Assessment.

    ERIC Educational Resources Information Center

    Seymour, Daniel; Collett, Casey

    This study attempted a comprehensive, critical assessment of Total Quality Management (TQM) initiatives in higher education. A survey of 25 institutions (including community colleges, private four-year colleges and universities, and public colleges) with TQM experience was developed and used. The survey utilized attitude scales designed to…

  4. The Achilles' Heel of Quality: The Assessment of Student Learning.

    ERIC Educational Resources Information Center

    Knight, Peter T.

    2002-01-01

    Explores the dependability of assessments of student achievement when used for internal and external quality monitoring (IQM and EQM). Identifies problems and suggests responses, including a radical approach based on accepting that reliable national data about complex student achievements are not available. Asserts that reliance on EQM is unwise…

  5. Miniature spinning as a fiber quality assessment tool

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Miniature spinning has long been used to assess cotton varieties in a timely manner. It has been an accepted fact that the quality of miniature spinning is less than optimal, but that it allows a direct comparison between cottons during varietal studies. Recently, researchers have made processing ...

  6. Quality Control Charts in Large-Scale Assessment Programs

    ERIC Educational Resources Information Center

    Schafer, William D.; Coverdale, Bradley J.; Luxenberg, Harlan; Jin, Ying

    2011-01-01

    There are relatively few examples of quantitative approaches to quality control in educational assessment and accountability contexts. Among the several techniques that are used in other fields, Shewhart charts have been found in a few instances to be applicable in educational settings. This paper describes Shewhart charts and gives examples of how…
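
    A Shewhart chart flags observations that fall outside control limits derived from baseline data, conventionally the baseline mean plus or minus three standard deviations. A minimal sketch:

```python
import numpy as np

def shewhart_limits(baseline):
    """Center line and 3-sigma control limits from baseline observations."""
    mu, sd = np.mean(baseline), np.std(baseline, ddof=1)
    return mu, mu - 3 * sd, mu + 3 * sd

def out_of_control(values, baseline):
    """Indices of values falling outside the Shewhart control limits."""
    _, lo, hi = shewhart_limits(baseline)
    return [i for i, v in enumerate(values) if v < lo or v > hi]
```

    In an assessment-program context, `baseline` could be a statistic (e.g. a mean scale score) from prior administrations, and out-of-control points would trigger review rather than automatic rejection.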

  7. Incorporating Contaminant Bioavailability into Sediment Quality Assessment Frameworks

    EPA Science Inventory

    The recently adopted sediment quality assessment framework for evaluating bay and estuarine sediments in the State of California incorporates bulk sediment chemistry as a key line of evidence(LOE) but does not address the bioavailability of measured contaminants. Thus, the chemis...

  8. A Methodological Proposal for Learning Games Selection and Quality Assessment

    ERIC Educational Resources Information Center

    Dondi, Claudio; Moretti, Michela

    2007-01-01

    This paper presents a methodological proposal elaborated in the framework of two European projects dealing with game-based learning, both of which have focused on "quality" aspects in order to create suitable tools that support European educators, practitioners and lifelong learners in selecting and assessing learning games for use in teaching and…

  9. Feedback Effects of Teaching Quality Assessment: Macro and Micro Evidence

    ERIC Educational Resources Information Center

    Bianchini, Stefano

    2014-01-01

    This study investigates the feedback effects of teaching quality assessment. Previous literature looked separately at the evolution of individual and aggregate scores to understand whether instructor and university performance depends on past evaluations. I propose a new quantitative-based methodology, combining statistical distributions and…

  10. Quality of Feedback Following Performance Assessments: Does Assessor Expertise Matter?

    ERIC Educational Resources Information Center

    Govaerts, Marjan J. B.; van de Wiel, Margje W. J.; van der Vleuten, Cees P. M.

    2013-01-01

    Purpose: This study aims to investigate quality of feedback as offered by supervisor-assessors with varying levels of assessor expertise following assessment of performance in residency training in a health care setting. It furthermore investigates if and how different levels of assessor expertise influence feedback characteristics.…

  11. Quality Assured Assessment Processes: Evaluating Staff Response to Change

    ERIC Educational Resources Information Center

    Malau-Aduli, Bunmi S.; Zimitat, Craig; Malau-Aduli, Aduli E. O.

    2011-01-01

    Medical education is not exempt from the increasing societal expectations of accountability and this is evidenced by an increasing number of litigation cases by students who are dissatisfied with their assessment. The time and monetary costs of student appeals makes it imperative that medical schools adopt robust quality assured assessment…

  12. The Assessment of Service Quality in Higher Education.

    ERIC Educational Resources Information Center

    Delene, Linda; Bunda, Mary Anne

    This paper presents a market driven model for assessing the service quality of support services in higher education, primarily for United States institutions, by examining higher education within the context of a complex service industry. The paper begins by explaining the development of the model and its implications for service management. Next,…

  13. Quality Assessment Parameters for Student Support at Higher Education Institutions

    ERIC Educational Resources Information Center

    Sajiene, Laima; Tamuliene, Rasa

    2012-01-01

    The research presented in this article aims to validate quality assessment parameters for student support at higher education institutions. Student support is discussed as the system of services provided by a higher education institution which helps to develop student-centred curriculum and fulfils students' emotional, academic, social needs, and…

  14. Quality and Assessment in Context: A Brief Review

    ERIC Educational Resources Information Center

    Koslowski, Fred A., III

    2006-01-01

    Purpose: The purpose of this paper is to provide a general review for USA and international academic faculty and administrators of the dominant themes and history of quality and assessment in both industry and Higher Education, and how they relate to each other in order to stimulate and encourage debate as well as influence policy.…

  15. Portable Imagery Quality Assessment Test Field for UAV Sensors

    NASA Astrophysics Data System (ADS)

    Dąbrowski, R.; Jenerowicz, A.

    2015-08-01

    Nowadays, imagery acquired from UAV sensors is a main data source for various remote sensing applications, photogrammetry projects and imagery intelligence (IMINT), as well as for other tasks such as decision support, so quality assessment of such imagery is an important task. The research team from the Military University of Technology, Faculty of Civil Engineering and Geodesy, Geodesy Institute, Department of Remote Sensing and Photogrammetry has designed and prepared a special test field, the Portable Imagery Quality Assessment Test Field (PIQuAT), that enables quality assessment in field conditions of images obtained with sensors mounted on UAVs. The PIQuAT consists of 6 individual segments which, when combined, allow the radiometric, spectral and spatial resolution of images acquired from UAVs to be determined. All segments of the PIQuAT can be used together in various configurations or independently. All elements of the Portable Imagery Quality Assessment Test Field were tested in laboratory conditions in terms of their radiometry and spectral reflectance characteristics.

  16. A research review of quality assessment for software

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored and methods of evaluating those factors are discussed. Quality factors important to software reuse are: correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because the documentation of many factors about a software component, such as its efficiency, portability, and development history, constitutes a class of factors important to some users, not important at all to others, and impossible for AdaNet to distinguish among a priori. The quality factors may be assessed in different ways. There are a few quantitative measures that have been shown to indicate software quality. However, it is believed that there exist many factors that indicate quality but have not been empirically validated due to their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.

  17. Colonoscopy video quality assessment using hidden Markov random fields

    NASA Astrophysics Data System (ADS)

    Park, Sun Young; Sargent, Dusty; Spofford, Inbar; Vosburgh, Kirby

    2011-03-01

    With colonoscopy becoming a common procedure for individuals aged 50 or more who are at risk of developing colorectal cancer (CRC), colon video data is being accumulated at an ever increasing rate. However, the clinically valuable information contained in these videos is not being maximally exploited to improve patient care and accelerate the development of new screening methods. One of the well-known difficulties in colonoscopy video analysis is the abundance of frames with no diagnostic information. Approximately 40% - 50% of the frames in a colonoscopy video are contaminated by noise, acquisition errors, glare, blur, and uneven illumination. Therefore, filtering out low quality frames containing no diagnostic information can significantly improve the efficiency of colonoscopy video analysis. To address this challenge, we present a quality assessment algorithm to detect and remove low quality, uninformative frames. The goal of our algorithm is to discard low quality frames while retaining all diagnostically relevant information. Our algorithm is based on a hidden Markov model (HMM) in combination with two measures of data quality to filter out uninformative frames. Furthermore, we present a two-level framework based on an embedded hidden Markov model (EHMM) to incorporate the proposed quality assessment algorithm into a complete, automated diagnostic image analysis system for colonoscopy video.
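
    The paper's two-level EHMM is not reproduced here, but the core idea of decoding a most-likely informative/uninformative state sequence from noisy per-frame quality scores can be sketched with a plain Viterbi decoder (the two states, the sticky transition matrix and all probabilities below are illustrative assumptions):

```python
import numpy as np

def viterbi(obs_loglik, log_trans, log_init):
    """Most likely state sequence for a discrete-state HMM.
    obs_loglik: (T, S) per-frame log-likelihoods of each state,
    log_trans:  (S, S) log transition matrix (row = from, col = to),
    log_init:   (S,)   log initial-state probabilities."""
    T, S = obs_loglik.shape
    delta = log_init + obs_loglik[0]          # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)        # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # (from, to)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + obs_loglik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

    With state 0 = informative and state 1 = uninformative, sticky transitions smooth isolated blips in the per-frame quality measures, so whole uninformative segments get flagged rather than scattered single frames.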

  18. Display device-adapted video quality-of-experience assessment

    NASA Astrophysics Data System (ADS)

    Rehman, Abdul; Zeng, Kai; Wang, Zhou

    2015-03-01

    Today's viewers consume video content from a variety of connected devices, including smart phones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for the end users as the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution while performing video quality assessment. We performed a subjective study in order to understand the impact of aforementioned factors on perceptual video QoE. We also propose a full reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results have shown that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.
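
    The SSIMplus model itself is not specified in the abstract, but the SSIM index it extends is well documented. As background, a simplified global (single-window) SSIM looks like the following; the full metric averages locally windowed values instead:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over whole images, using the standard
    constants C1 = (0.01*L)^2 and C2 = (0.03*L)^2 for dynamic range L."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

    SSIMplus, per the abstract, additionally conditions the score on display size, resolution, brightness and viewing distance; none of that device adaptation is modeled in this sketch.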

  19. Image quality and radiation reduction of 320-row area detector CT coronary angiography with optimal tube voltage selection and an automatic exposure control system: comparison with body mass index-adapted protocol.

    PubMed

    Lim, Jiyeon; Park, Eun-Ah; Lee, Whal; Shim, Hackjoon; Chung, Jin Wook

    2015-06-01

    To assess the image quality and radiation exposure of 320-row area detector computed tomography (320-ADCT) coronary angiography with optimal tube voltage selection with the guidance of an automatic exposure control system in comparison with a body mass index (BMI)-adapted protocol. Twenty-two patients (study group) underwent 320-ADCT coronary angiography using an automatic exposure control system with the target standard deviation value of 33 as the image quality index and the lowest possible tube voltage. For comparison, a sex- and BMI-matched group (control group, n = 22) using a BMI-adapted protocol was established. Images of both groups were reconstructed by an iterative reconstruction algorithm. For objective evaluation of the image quality, image noise, vessel density, signal to noise ratio (SNR), and contrast to noise ratio (CNR) were measured. Two blinded readers then subjectively graded the image quality using a four-point scale (1: nondiagnostic to 4: excellent). Radiation exposure was also measured. Although the study group tended to show higher image noise (14.1 ± 3.6 vs. 9.3 ± 2.2 HU, P = 0.111) and higher vessel density (665.5 ± 161 vs. 498 ± 143 HU, P = 0.430) than the control group, the differences were not significant. There was no significant difference between the two groups for SNR (52.5 ± 19.2 vs. 60.6 ± 21.8, P = 0.729), CNR (57.0 ± 19.8 vs. 67.8 ± 23.3, P = 0.531), or subjective image quality scores (3.47 ± 0.55 vs. 3.59 ± 0.56, P = 0.960). However, radiation exposure was significantly reduced by 42 % in the study group (1.9 ± 0.8 vs. 3.6 ± 0.4 mSv, P = 0.003). Optimal tube voltage selection with the guidance of an automatic exposure control system in 320-ADCT coronary angiography allows substantial radiation reduction without significant impairment of image quality, compared to the results obtained using a BMI-based protocol. PMID:25604967
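
    The SNR and CNR figures above come from ROI statistics. A minimal sketch of one common convention (vessel mean over background noise for SNR, attenuation difference over background noise for CNR; definitions vary between papers, so treat these as assumptions):

```python
import numpy as np

def snr_cnr(vessel_roi, background_roi):
    """Signal-to-noise and contrast-to-noise ratios from two ROIs of
    CT attenuation values (HU). Noise is the background standard deviation."""
    v = np.asarray(vessel_roi, float)
    b = np.asarray(background_roi, float)
    noise = b.std()
    snr = v.mean() / noise
    cnr = (v.mean() - b.mean()) / noise
    return snr, cnr
```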

  20. Assessing local resources and culture before instituting quality improvement projects.

    PubMed

    Hawkins, C Matthew

    2014-12-01

    The planning phases of quality improvement projects are commonly overlooked. Disorganized planning and implementation can escalate chaos, intensify resistance to change, and increase the likelihood of failure. Two important steps in the planning phase are (1) assessing local resources available to aid in the quality improvement project and (2) evaluating the culture in which the desired change is to be implemented. Assessing local resources includes identifying and engaging key stakeholders and evaluating if appropriate expertise is available for the scope of the project. This process also involves engaging informaticists and gathering available IT tools to plan and automate (to the extent possible) the data-gathering, analysis, and feedback steps. Culture in a department is influenced by the ability and willingness to manage resistance to change, build consensus, span boundaries between stakeholders, and become a learning organization. Allotting appropriate time to perform these preparatory steps will increase the odds of successfully performing a quality improvement project and implementing change. PMID:25467724

  1. [The assessment of bone quality in lifestyle-related diseases].

    PubMed

    Yamauchi, Mika

    2016-01-01

    Type 2 diabetes mellitus (DM) and other lifestyle-related diseases are associated with an increased risk of bone quality deterioration-type osteoporosis. The deterioration of bone quality in type 2 DM involves factors such as qualitative changes in collagen, reduced bone turnover, a narrow cortical bone diameter, increased cortical bone porosity, and destruction of the trabecular bone microarchitecture. In mild to moderate chronic kidney disease and chronic obstructive pulmonary disease, the factors involved are thought to be hyperhomocysteinemia and deterioration of the trabecular bone microarchitecture as well as of the cortical bone structure. Investigations are under way into the usefulness of bone quality assessment using approaches such as biochemical markers (e.g., pentosidine and homocysteine) and bone structure assessment methods such as hip structure analysis, the trabecular bone score, and high-resolution peripheral quantitative computed tomography. PMID:26728532

  2. Water Quality Assessment of Ayeyarwady River in Myanmar

    NASA Astrophysics Data System (ADS)

    Thatoe Nwe Win, Thanda; Bogaard, Thom; van de Giesen, Nick

    2015-04-01

    Myanmar's socio-economic activities, urbanisation, industrial operations and agricultural production have increased rapidly in recent years. With the increase of socio-economic development and climate change impacts, there is an increasing threat to the quantity and quality of water resources. In Myanmar, some of the drinking water coverage still comes from unimproved sources, including rivers. The Ayeyarwady River is the main river in Myanmar, draining most of the country's area. The use of chemical fertilizer in agriculture, mining activities in the catchment area, wastewater effluents from industries and communities, and other development activities generate pollutants of different natures. Therefore, water quality monitoring is of utmost importance. In Myanmar, there are many government organizations linked to water quality management. Each water organization monitors water quality for its own purposes. The monitoring is haphazard, short term, and based on individual interest and the available equipment. The monitoring is not properly coordinated, and a quality assurance programme is not incorporated in most of the work. As a result, comprehensive data on the water quality of rivers in Myanmar are not available. To provide basic information, action is needed at all management levels. The need for comprehensive and accurate assessments of trends in water quality has been recognized. For such an assessment, reliable monitoring data are essential. The objective of our work is to set up a multi-objective surface water quality monitoring programme. The need for a scientifically designed network to monitor Ayeyarwady river water quality is obvious, as only limited and scattered data on water quality are available. However, the set-up should also take into account the current socio-economic situation and should be flexible enough to adjust after the first years of monitoring. Additionally, a state-of-the-art baseline river water quality sampling program is required which

  3. Supporting Students in C++ Programming Courses with Automatic Program Style Assessment

    ERIC Educational Resources Information Center

    Ala-Mutka, Kirsti; Uimonen, Toni; Jarvinen, Hannu-Matti

    2004-01-01

    Professional programmers need common coding conventions to assure co-operation and a degree of quality of the software. Novice programmers, however, easily forget issues of programming style in their programming coursework. In particular with large classes, students may pass several courses without learning elements of programming style. This is…

  4. Evaluating the Role of Content in Subjective Video Quality Assessment

    PubMed Central

    Vrgovic, Petar

    2014-01-01

    Video quality as perceived by human observers is the ground truth when Video Quality Assessment (VQA) is in question. It is dependent on many variables, one of them being the content of the video being evaluated. Despite the evidence that content has an impact on the quality score a sequence receives from human evaluators, currently available VQA databases mostly comprise sequences that fail to take this into account. In this paper, we aim to identify and analyze differences between human cognitive, affective, and conative responses to a set of videos commonly used for VQA and a set of videos specifically chosen to include content which might affect the judgment of evaluators when perceived video quality is in question. Our findings indicate that considerable differences exist between the two sets on selected factors, which leads us to conclude that videos featuring different content types than those currently employed might be more appropriate for VQA. PMID:24523643

  5. Space Shuttle flying qualities and flight control system assessment

    NASA Technical Reports Server (NTRS)

    Myers, T. T.; Mcruer, D. T.; Johnston, D. E.

    1982-01-01

    This paper reviews issues, data, and analyses relevant to the longitudinal flying qualities of the Space Shuttle in approach and landing. The manual control of attitude and path are first examined theoretically to demonstrate the unconventional nature of the Shuttle's augmented pitch and path response characteristics. The time domain pitch rate transient response criterion used for design of the Shuttle flight control system is examined in context with data from recent flying qualities experiments and operational aircraft. Questions arising from this examination are addressed through comparisons with MIL-F-8785C and other proposed flying qualities criteria which indicate potential longitudinal flying qualities problems. However, it is shown that these criteria, based largely on data from conventional aircraft, may be inappropriate for assessing the Shuttle.

  6. Health-related quality of life assessment in clinical practice.

    PubMed

    Meers, C; Singer, M A

    1996-01-01

    Assessment of biochemical responses to therapy is routine in the management of patients with end stage renal disease (ESRD). Assessment of health-related quality of life (HRQOL), however, is less common. Previous research indicates that HRQOL is a meaningful indicator that should be integrated into clinical practice. HRQOL is longitudinally evaluated in in-centre hemodialysis patients using the RAND 36-item Health Survey 1.0. Caregivers incorporate scores from this instrument into their assessment of patient functioning and well-being. HRQOL scores can be utilized to evaluate responses to changes in therapy, and to direct clinical decision-making, adding an important dimension to holistic, quality care for ESRD patients. PMID:8900807

  7. Learning to rank for blind image quality assessment.

    PubMed

    Gao, Fei; Tao, Dacheng; Gao, Xinbo; Li, Xuelong

    2015-10-01

    Blind image quality assessment (BIQA) aims to predict perceptual image quality scores without access to reference images. State-of-the-art BIQA methods typically require subjects to score a large number of images to train a robust model. However, subjective quality scores are imprecise, biased, and inconsistent, and it is challenging to obtain a large-scale database, or to extend existing databases, because of the inconvenience of collecting images, training the subjects, conducting subjective experiments, and realigning human quality evaluations. To combat these limitations, this paper explores and exploits preference image pairs (PIPs) such as the quality of image Ia is better than that of image Ib for training a robust BIQA model. The preference label, representing the relative quality of two images, is generally precise and consistent, and is not sensitive to image content, distortion type, or subject identity; such PIPs can be generated at a very low cost. The proposed BIQA method is one of learning to rank. We first formulate the problem of learning the mapping from the image features to the preference label as one of classification. In particular, we investigate the utilization of a multiple kernel learning algorithm based on group lasso to provide a solution. A simple but effective strategy to estimate perceptual image quality scores is then presented. Experiments show that the proposed BIQA method is highly effective and achieves a performance comparable with that of state-of-the-art BIQA algorithms. Moreover, the proposed method can be easily extended to new distortion categories. PMID:25616080
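The preference-pair idea in this abstract lends itself to a compact sketch: build feature-difference training examples from ranked image pairs, fit a linear pairwise classifier, and use its weight vector to score new images. This is a minimal illustration under stated assumptions, not the authors' implementation: plain logistic-loss gradient descent stands in for the paper's multiple kernel learning with group lasso, and all function names are hypothetical.

```python
import numpy as np

def make_preference_pairs(features, scores):
    """Build preference-pair training data: for each image pair (a, b) with
    unequal quality, the input is the feature difference f_a - f_b and the
    label encodes whether image a is the higher-quality one."""
    X, y = [], []
    n = len(scores)
    for a in range(n):
        for b in range(a + 1, n):
            if scores[a] == scores[b]:
                continue  # ties carry no preference information
            X.append(features[a] - features[b])
            y.append(1 if scores[a] > scores[b] else -1)
    return np.array(X), np.array(y)

def train_linear_ranker(X, y, lr=0.1, epochs=200):
    """Gradient descent on the logistic loss of the pairwise classification
    task; the learned weight vector w scores an image as w @ features."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)
        grad = -(y / (1.0 + np.exp(margins))) @ X / len(y)
        w -= lr * grad
    return w
```

Scoring new images as `features @ w` then yields a quality ranking; estimating absolute perceptual scores, as the paper does, would require a further calibration step.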

  8. Automatic testing and assessment of neuroanatomy using a digital brain atlas: method and development of computer- and mobile-based applications.

    PubMed

    Nowinski, Wieslaw L; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G; Marchenko, Yevgen; Volkau, Ihar

    2009-10-01

    Preparation of tests and assessment of students by the instructor are time consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to Terminologia Anatomica. Because the cerebral models are fully segmented and labeled, our approach enables automatic and random atlas-derived generation of questions to test the location and naming of cerebral structures. This is done in four steps: test individualization by the instructor, test taking by the students at their convenience, automatic student assessment by the application, and communication of the individual assessment to the instructor. A computer-based application with an interactive 3D atlas and a preliminary mobile-based application were developed to realize this approach. The application works in two test modes: instructor and student. In the instructor mode, the instructor customizes the test by setting the scope of testing and student performance criteria, which takes a few seconds. In the student mode, the student is tested and automatically assessed. Self-testing is also feasible at any time and pace. Our approach is automatic both with respect to test generation and student assessment. It is also objective, rapid, and customizable. We believe that this approach is novel from computer-based, mobile-based, and atlas-assisted standpoints. PMID:19743409

  9. Quality assessment of systematic reviews on alveolar socket preservation.

    PubMed

    Moraschini, V; Barboza, E Dos S P

    2016-09-01

    The aim of this overview was to evaluate and compare the quality of systematic reviews, with or without meta-analysis, that have evaluated studies on techniques or biomaterials used for the preservation of alveolar sockets post tooth extraction in humans. An electronic search was conducted without date restrictions using the Medline/PubMed, Cochrane Library, and Web of Science databases up to April 2015. Eligibility criteria included systematic reviews, with or without meta-analysis, focused on the preservation of post-extraction alveolar sockets in humans. Two independent authors assessed the quality of the included reviews using AMSTAR and the checklist proposed by Glenny et al. in 2003. After the selection process, 12 systematic reviews were included. None of these reviews obtained the maximum score using the quality assessment tools implemented, and the results of the analyses were highly variable. A significant statistical correlation was observed between the scores of the two checklists. A wide structural and methodological variability was observed between the systematic reviews published on the preservation of alveolar sockets post tooth extraction. None of the reviews evaluated obtained the maximum score using the two quality assessment tools implemented. PMID:27061478

  10. Using the data quality objectives process in risk assessment

    SciTech Connect

    Not Available

    1994-07-01

    The Environmental Protection Agency's Quality Assurance Management Staff has developed a systematic process, the Data Quality Objectives (DQO) process, as an important tool for assisting project managers and planners in determining the type, quantity, and quality of environmental data sufficient for environmental decision-making. This information brief presents the basic concepts of, and information requirements for, using the DQO process to plan the collection of the environmental data necessary to support human health risk assessments under the Comprehensive Environmental Response, Compensation and Liability Act or the Resource Conservation and Recovery Act. The goal of the DQO process is to identify the type, quality, and quantity of data required to support remedial action decisions that are based on risk assessment and its associated uncertainties. The DQO process consists of a number of discrete steps, including a statement of the problem and the decision to be made, identifying inputs to the decision, developing a decision rule, and optimizing the design for data collection. In defining the data for input into the decision, a Site Conceptual Exposure Model should be developed to identify existing or potential complete exposure pathways. To determine the data quality for use in the risk assessment, the DQO team must assist the decision-maker in defining the acceptable level of uncertainty for making site-specific decisions. To determine the quantity of data needed, the DQO team utilizes the established target cleanup level, previously collected data and their variability, and the acceptable errors. The results of the DQO process are qualitative and quantitative statements that define the scope of risk assessment data to be collected to support a defensible site risk management decision.
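One quantitative step in DQO planning, deriving the quantity of data from the measured variability and the acceptable error, can be sketched with the standard normal-theory sample-size formula. This is an illustrative approximation, not the EPA's prescribed calculation, and the function name is hypothetical.

```python
import math

def dqo_sample_size(sigma, tolerable_error, confidence_z=1.96):
    """Approximate number of samples needed so that the sample mean falls
    within +/- tolerable_error of the true mean at the given confidence
    level (normal-theory approximation; z = 1.96 for 95% confidence)."""
    n = (confidence_z * sigma / tolerable_error) ** 2
    return math.ceil(n)
```

For example, with a standard deviation of 10 concentration units and a tolerable error of 2 units at 95% confidence, roughly (1.96 * 10 / 2)^2 ≈ 97 samples would be planned; tightening the tolerable error or raising the confidence level drives the required quantity of data up.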

  11. An effective fovea detection and automatic assessment of diabetic maculopathy in color fundus images.

    PubMed

    Medhi, Jyoti Prakash; Dandapat, Samarendra

    2016-07-01

    Prolonged diabetes causes severe damage to vision through leakage of blood and blood constituents over the retina. The effect of the leakage becomes more threatening when these abnormalities involve the macula. This condition is known as diabetic maculopathy, and it leads to blindness if not treated in time. Early detection and proper diagnosis can help in preventing this irreversible damage. One way to achieve this is to perform retinal screening at regular intervals, but the ratio of ophthalmologists to patients is very small and the process of evaluation is time consuming. Here, automatic methods for analyzing retinal/fundus images prove handy and help ophthalmologists screen at a faster rate. Motivated by this, an automated method for detection and analysis of diabetic maculopathy is proposed in this work. The method is implemented in two stages. The first stage involves the preprocessing required for preparing the image for further analysis. During this stage the input image is enhanced and the optic disc is masked to avoid false detection during bright lesion identification. The second stage is maculopathy detection and its analysis. Here, the retinal lesions, including microaneurysms, hemorrhages and exudates, are identified by processing the green and hue plane color images. The macula and fovea locations are determined using the intensity properties of the processed red plane image. Different circular regions are thereafter marked in the neighborhood of the macula. The presence of lesions in these regions is identified to confirm positive maculopathy. Later, this information is used for evaluating its severity. The principal advantage of the proposed algorithm is its utilization of the relation of blood vessels with the optic disc and macula, which enhances the detection process. Proper sequential usage of information from the various color planes enables the algorithm to perform better. The method is tested on various publicly available databases

  12. Visual vs Fully Automatic Histogram-Based Assessment of Idiopathic Pulmonary Fibrosis (IPF) Progression Using Sequential Multidetector Computed Tomography (MDCT)

    PubMed Central

    Colombi, Davide; Dinkel, Julien; Weinheimer, Oliver; Obermayer, Berenike; Buzan, Teodora; Nabers, Diana; Bauer, Claudia; Oltmanns, Ute; Palmowski, Karin; Herth, Felix; Kauczor, Hans Ulrich; Sverzellati, Nicola

    2015-01-01

    Objectives: To describe changes over time in the extent of idiopathic pulmonary fibrosis (IPF) at multidetector computed tomography (MDCT), assessed by semi-quantitative visual scores (VSs) and fully automatic histogram-based quantitative evaluation, and to test the relationship between these two methods of quantification. Methods: Forty IPF patients (median age: 70 y, interquartile: 62-75 years; M:F, 33:7) who underwent two MDCT examinations at different time points, with a median interval of 13 months (interquartile: 10-17 months), were retrospectively evaluated. The in-house software YACTA automatically quantified the lung density histogram (10th-90th percentile in 5th percentile steps). Longitudinal changes in VSs and in the percentiles of the attenuation histogram were obtained in 20 untreated patients and 20 patients treated with pirfenidone. Pearson correlation analysis was used to test the relationship between VSs and selected percentiles. Results: In follow-up MDCT, the visual overall extent of parenchymal abnormalities (OE) increased in median by 5%/year (interquartile: 0%/y; +11%/y). A substantial difference was found between treated and untreated patients in HU changes of the 40th and of the 80th percentiles of the density histogram. Correlation analysis between VSs and selected percentiles showed higher correlation between the changes (Δ) in OE and Δ 40th percentile (r=0.69; p<0.001) as compared to Δ 80th percentile (r=0.58; p<0.001); closer correlation was found between Δ ground-glass extent and Δ 40th percentile (r=0.66, p<0.001) as compared to Δ 80th percentile (r=0.47, p=0.002), while the Δ reticulations correlated better with the Δ 80th percentile (r=0.56, p<0.001) in comparison to Δ 40th percentile (r=0.43, p=0.003). Conclusions: There is a relevant and fully automatically measurable difference at MDCT in VSs and in histogram analysis at one year follow-up of IPF patients, whether treated or untreated: Δ 40th percentile might reflect the change in overall extent of lung
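The histogram-percentile quantification described here can be reproduced generically: compute attenuation percentiles from the 10th to the 90th in 5-percentile steps, then track the change in a selected percentile between a baseline and a follow-up scan. A minimal sketch with hypothetical names, not the YACTA implementation:

```python
import numpy as np

def density_percentiles(hu_values, lo=10, hi=90, step=5):
    """Percentiles of a lung attenuation histogram (Hounsfield units),
    from the lo-th to the hi-th percentile in `step`-percentile steps."""
    qs = np.arange(lo, hi + 1, step)
    return dict(zip(qs.tolist(), np.percentile(hu_values, qs)))

def percentile_change(baseline_hu, followup_hu, q=40):
    """Delta of the q-th percentile between two scans; a shift toward
    higher HU suggests increased parenchymal density."""
    return np.percentile(followup_hu, q) - np.percentile(baseline_hu, q)
```

With the segmented lung voxels of the two examinations as inputs, `percentile_change(baseline, followup, q=40)` would give the Δ 40th percentile that the study found most correlated with visual overall extent.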

  13. Thermo-optic quality assessment of doped optical ceramics

    NASA Astrophysics Data System (ADS)

    Willis, Christina C. C.; Bradford, Joshua D.; Maddox, Emily; Shah, Lawrence; Richardson, Martin

    2013-03-01

    The use of optical quality ceramics for laser applications is expanding, and with this expansion there is an increasing need for diagnostics to assess the quality of these materials. Ceramic material with flaws and contaminants yields significantly less efficient performance as laser gain media and can generate excessive amounts of waste heat. This concern is especially relevant in high power laser applications, where thermally induced damage can be catastrophic. To assess a set of ceramic and crystalline samples, we induce and measure thermal lensing to produce a relative ranking based on the extent of the induced thermal lens. In these experiments thermal lensing is induced in a set of nine 10% Yb:YAG ceramic and single-crystal samples using a high power 940 nm diode, and their thermal response is measured using a Shack-Hartmann wavefront sensor. The materials are also ranked by their transmission in the visible region. Discrepancies between the two ranking methods reveal that transmission in the visible region alone is not adequate for an assessment of the overall quality of ceramic samples. The thermal lensing diagnostic technique proves to be a reliable and quick overall assessment method for doped ceramic materials without requiring any a priori knowledge of material properties.

  14. Web Service for Positional Quality Assessment: the Wps Tier

    NASA Astrophysics Data System (ADS)

    Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.

    2015-08-01

    In the field of spatial data, more and more information becomes available every day, yet we still have little information about its quality. We consider that the automation of spatial data quality assessment is a true need for the geomatics sector, and that automation is possible by means of web processing services (WPS) and the application of specific assessment procedures. In this paper we propose and develop a WPS tier centered on the automation of positional quality assessment. An experiment using the NSSDA positional accuracy method is presented. The experiment involves the uploading by the client of two datasets (reference and evaluation data). The processing determines homologous pairs of points (by distance) and calculates the value of positional accuracy under the NSSDA standard. The process generates a small report that is sent to the client. From our experiment, we reached some conclusions on the advantages and disadvantages of WPSs when applied to the automation of spatial data accuracy assessments.
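The processing step in the experiment, pairing homologous points by distance and computing the NSSDA statistic, can be sketched as follows. The 1.7308 factor converts the radial RMSE into 95%-confidence horizontal accuracy under the NSSDA standard (valid when the x- and y-error magnitudes are similar); the nearest-neighbour pairing and the function name are simplifying assumptions, not the authors' WPS code.

```python
import math

def nssda_horizontal_accuracy(reference_pts, eval_pts):
    """Pair each evaluation point with its nearest reference point,
    then apply the NSSDA horizontal statistic:
    Accuracy_r = 1.7308 * RMSE_r at 95% confidence."""
    sq_errors = []
    for ex, ey in eval_pts:
        # squared distance to the closest reference point (homologous pair)
        d2 = min((ex - rx) ** 2 + (ey - ry) ** 2 for rx, ry in reference_pts)
        sq_errors.append(d2)
    rmse_r = math.sqrt(sum(sq_errors) / len(sq_errors))
    return 1.7308 * rmse_r
```

A WPS wrapping this computation would read the two uploaded datasets, run the function, and return the accuracy value in the report sent back to the client.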

  15. Quality assessment of altimeter data through tide gauge comparisons

    NASA Astrophysics Data System (ADS)

    Prandi, Pierre; Valladeau, Guillaume; Ablain, Michael; Picot, Nicolas; Desjonquères, Jean-Damien

    2015-04-01

    Since the first altimeter missions and the improvements in the accuracy of sea surface height measurements from 1992 onwards, the importance of global quality assessment of altimeter data has been increasing. The overall quality assessment of altimeter data is usually performed by analyzing their internal consistency and by cross-comparison between all missions. As a complementary approach, tide gauge measurements are used as an external and independent reference to enable further quality assessment of the altimeter sea level and to provide a better estimate of the performances of the multiple altimeters. In this way, both altimeter and tide gauge observations, dedicated to climate applications, require rigorous quality control. The tide gauge time series considered in this study derive from several networks (GLOSS/CLIVAR, PSMSL, REFMAR) and provide sea-level heights with a physical content comparable with altimetry sea level estimates. Concerning altimeter data, the long-term drift can be evaluated thanks to a widespread network of tide gauges. Thus, in-situ measurements are compared with altimeter sea level for the main altimeter missions. If altimeter time series are long enough, tide gauge data provide a relevant estimation of the global Mean Sea Level (MSL) drift calculated for all the missions. Moreover, comparisons with sea level products merging all the altimeter missions together have also been performed using several datasets, among which the AVISO delayed-time Sea Level Anomaly grids.

  16. New Tools for Quality Assessment of Modern Earthquake Catalogs: Examples From California and Japan.

    NASA Astrophysics Data System (ADS)

    Woessner, J.; Wiemer, S.; Giardini, D.

    2002-12-01

    Earthquake catalogs provide a comprehensive knowledge database for studies related to seismicity, seismotectonics, earthquake physics, and hazard analysis. We introduce a set of tools and new software for improving the quality of modern catalogs of microseismicity. Surprisingly little research on detecting seismicity changes and analyzing their causes has been performed in recent years. In particular, the discrimination between artificial and natural causes responsible for transients in seismicity, such as rate changes or alterations in the earthquake size distribution (b-value), often remains difficult. Thus, significant changes in reporting homogeneity are often detected only years after they occurred. We believe that our tools, used regularly and automatically in a "real-time mode", allow such problems to be addressed shortly after they occur. Based on our experience in analyzing earthquake catalogs, and building on the groundbreaking work by Habermann in the 1980s, we propose a recipe for earthquake catalog quality assessment: 1) Decluster as a tool to homogenize the data; 2) Identify and remove blast contamination; 3) Estimate completeness as a function of space and time; 4) Assess reporting homogeneity as a function of space and time using self-consistency and, if possible, comparison with other independent data sources. During this sequence of analysis steps, we produce a series of maps that portray, for a given period, the magnitude of completeness, seismicity rate changes, possible shifts and stretches in the magnitude distribution, and the degree of clustering. We apply our algorithms for quality assessment to data sets from California and Japan, addressing the following questions: 1) Did the 1983 Coalinga earthquake change the rate of small events on the Parkfield segment of the San Andreas system? 2) Did the Kobe earthquake change the rate of earthquakes or the b-value in nearby volumes?
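Step 3 of the recipe (estimating completeness) and the b-value monitoring mentioned above can be sketched with two standard estimators: the maximum-curvature magnitude of completeness Mc and the Aki/Utsu maximum-likelihood b-value. A minimal illustration with hypothetical names, not the authors' software:

```python
import math
from collections import Counter

def completeness_magnitude(mags, bin_width=0.1):
    """Maximum-curvature estimate of the magnitude of completeness Mc:
    the magnitude bin holding the most events."""
    binned = [round(m / bin_width) * bin_width for m in mags]
    return Counter(binned).most_common(1)[0][0]

def b_value(mags, mc, bin_width=0.1):
    """Aki/Utsu maximum-likelihood b-value using only events at or above
    Mc, with the usual half-bin correction for binned magnitudes."""
    complete = [m for m in mags if m >= mc]
    mean_m = sum(complete) / len(complete)
    return math.log10(math.e) / (mean_m - (mc - bin_width / 2))
```

Applied in moving space-time windows, these two quantities yield exactly the kinds of maps the recipe calls for: magnitude of completeness, and shifts or stretches in the magnitude distribution.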

  17. Automatic Imitation

    ERIC Educational Resources Information Center

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  18. Validation of an image-based technique to assess the perceptual quality of clinical chest radiographs with an observer study

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Choudhury, Kingshuk R.; McAdams, H. Page; Foos, David H.; Samei, Ehsan

    2014-03-01

    We previously proposed a novel image-based quality assessment technique [1] to assess the perceptual quality of clinical chest radiographs. In this paper, an observer study was designed and conducted to systematically validate this technique. Ten metrics were involved in the observer study, i.e., lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. For each metric, three tasks were successively presented to the observers. In each task, six ROI images were randomly presented in a row and observers were asked to rank the images based only on a designated quality, disregarding the other qualities. A range slider above the images was used by observers to indicate the acceptable range based on the corresponding perceptual attribute. Five board-certified radiologists from Duke participated in this observer study on a DICOM-calibrated diagnostic display workstation under low ambient lighting conditions. The observer data were analyzed in terms of the correlations between the observer ranking orders and the algorithmic ranking orders. Based on the collected acceptable ranges, quality consistency ranges were statistically derived. The observer study showed that, for each metric, the averaged ranking orders of the participating observers were strongly correlated with the algorithmic orders. For the lung grey level, the observer ranking orders completely accorded with the algorithmic ranking orders. The quality consistency ranges derived from this observer study were close to those derived from our previous study. The observer study indicates that the proposed image-based quality assessment technique provides a robust reflection of the perceptual image quality of clinical chest radiographs. The derived quality consistency ranges can be used to automatically predict the acceptability of a clinical chest radiograph.
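The correlation between observer ranking orders and algorithmic ranking orders can be quantified with Spearman's rank correlation. The abstract does not state which correlation measure was used, so this is an illustrative sketch rather than the study's analysis code:

```python
def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation between two ranking orders (no ties):
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)),
    where d is the per-item difference in assigned ranks."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))
```

For the six ROI images per task, identical observer and algorithmic orders give rho = 1.0 (the "completely accorded" case reported for lung grey level), and a fully reversed order gives rho = -1.0.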

  19. Assessing the Quality of PhD Dissertations. A Survey of External Committee Members

    ERIC Educational Resources Information Center

    Kyvik, Svein; Thune, Taran

    2015-01-01

    This article reports on a study of the quality assessment of doctoral dissertations, and asks whether examiner characteristics influence assessment of research quality in PhD dissertations. Utilising a multi-dimensional concept of quality of PhD dissertations, we look at differences in assessment of research quality, and particularly test whether…

  20. Perceptual image quality assessment: recent progress and trends

    NASA Astrophysics Data System (ADS)

    Lin, Weisi; Narwaria, Manish

    2010-07-01

    Image quality assessment (IQA) is useful in many visual processing systems but challenging to perform in line with human perception. A great deal of recent research effort has been directed towards IQA. In order to overcome the difficulty and infeasibility of subjective tests in many situations, the aim of such effort is to assess visual quality objectively, towards better alignment with the perception of the Human Visual System (HVS). In this work, we review and analyze the recent progress in the areas related to IQA, giving our own views whenever possible. Following the recent trends, we discuss the engineering approach in more detail, explore the related aspects of feature pooling, and present a case study with machine learning.

  1. Quality control and quality assurance plan for bridge channel-stability assessments in Massachusetts

    USGS Publications Warehouse

    Parker, Gene W.; Pinson, Harlow

    1993-01-01

    A quality control and quality assurance plan has been implemented as part of the Massachusetts bridge scour and channel-stability assessment program. This program is being conducted by the U.S. Geological Survey, Massachusetts-Rhode Island District, in cooperation with the Massachusetts Highway Department. Project personnel training, data-integrity verification, and new data-management technologies are being utilized in the channel-stability assessment process to improve current data-collection and management techniques. An automated data-collection procedure has been implemented to standardize channel-stability assessments on a regular basis within the State. An object-oriented data structure and new image management tools are used to produce a data base enabling management of multiple data object classes. Data will be reviewed by assessors and data base managers before being merged into a master bridge-scour data base, which includes automated data-verification routines.

  2. Effects of display rendering on HDR image quality assessment

    NASA Astrophysics Data System (ADS)

    Zerman, Emin; Valenzise, Giuseppe; De Simone, Francesca; Banterle, Francesco; Dufaux, Frederic

    2015-09-01

    High dynamic range (HDR) displays use local backlight modulation to produce both high brightness levels and large contrast ratios. Thus, the display rendering algorithm and its parameters may greatly affect HDR visual experience. In this paper, we analyze the impact of display rendering on perceived quality for a specific display (SIM2 HDR47) and for a popular application scenario, i.e., HDR image compression. To this end, we assess whether significant differences exist between subjective quality of compressed images, when these are displayed using either the built-in rendering of the display, or a rendering algorithm developed by ourselves. As a second contribution of this paper, we investigate whether the possibility to estimate the true pixel-wise luminance emitted by the display, offered by our rendering approach, can improve the performance of HDR objective quality metrics that require true pixel-wise luminance as input.

  3. Quality control for exposure assessment in epidemiological studies.

    PubMed

    Bornkessel, C; Blettner, M; Breckenkamp, J; Berg-Beckhoff, G

    2010-08-01

    In the framework of an epidemiological study, dosemeters were used for the assessment of radio frequency electromagnetic field exposure. To check correct dosemeter performance, in terms of the consistency of recorded field values over the entire study period, a quality control strategy was developed. In this paper, the concept of quality control and its results are described. Of the 20 dosemeters used, 19 were very stable and reproducible, with deviations of at most +/-1 dB compared with their initial state. One device was found to be faulty and its measurement data had to be excluded from the analysis. As a result of continuous quality control procedures, confidence in the measurements obtained during the field work was strengthened significantly. PMID:20308051
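The ±1 dB consistency criterion can be checked in a few lines: for field quantities, the deviation from a reference reading is 20·log10 of the measured-to-reference ratio. A sketch with hypothetical names, not the study's actual QC software:

```python
import math

def db_deviation(measured, reference):
    """Deviation of a recorded field value from its reference in dB
    (field quantities use 20 * log10 of the ratio)."""
    return 20.0 * math.log10(measured / reference)

def within_tolerance(measured, reference, tol_db=1.0):
    """Flag a dosemeter reading as acceptable if it stays within
    +/- tol_db of the reference, as in the study's +/-1 dB criterion."""
    return abs(db_deviation(measured, reference)) <= tol_db
```

Running each periodic calibration reading through `within_tolerance` against the device's initial state would reproduce the stable/faulty classification described above.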

  4. Assessment of quality of life in patients with knee osteoarthritis

    PubMed Central

    Kawano, Marcio Massao; Araújo, Ivan Luis Andrade; Castro, Martha Cavalcante; Matos, Marcos Almeida

    2015-01-01

    ABSTRACT OBJECTIVE: To assess the quality of life of knee osteoarthritis patients using the SF-36 questionnaire. METHODS: Cross-sectional study with 93 knee osteoarthritis patients. The sample was categorized according to the Ahlbäck score. All individuals were interviewed with the SF-36 questionnaire. RESULTS: The main finding of the study is the association of education level with functional capacity, functional limitation, and pain. Patients with a higher education level had better functional capacity than patients with a basic level of education. CONCLUSION: Individuals with osteoarthritis have a low perception of their quality of life regarding functional capacity, functional limitation, and pain. There is a strong association between low level of education and low perception of quality of life. Level of Evidence IV, Clinical Case Series. PMID:27057143

  5. A novel, fuzzy-based air quality index (FAQI) for air quality assessment

    NASA Astrophysics Data System (ADS)

    Sowlat, Mohammad Hossein; Gharibi, Hamed; Yunesian, Masud; Tayefeh Mahmoudi, Maryam; Lotfi, Saeedeh

    2011-04-01

    The ever-increasing level of air pollution in most areas of the world has led to the development of a variety of air quality indices for estimating the health effects of air pollution, though these indices have their own limitations, such as high levels of subjectivity. The present study therefore aimed at developing a novel, fuzzy-based air quality index (FAQI) to handle such limitations. The index is based on fuzzy logic, considered one of the most common computational methods of artificial intelligence. In addition to the criteria air pollutants (i.e., CO, SO2, PM10, O3, NO2), benzene, toluene, ethylbenzene, xylene, and 1,3-butadiene were also taken into account because of their considerable health effects. Different weighting factors were then assigned to each pollutant according to its priority. Trapezoidal membership functions were employed for the classifications, and the final index consisted of 72 inference rules. To assess the performance of the index, a case study was carried out using air quality data from five sampling stations in Tehran, Iran, from January 2008 to December 2009; the results were then compared with those obtained from the USEPA air quality index (AQI). According to the results of the present study, the fuzzy-based air quality index is a comprehensive tool for the classification of air quality and tends to produce accurate results. It can therefore be considered useful, reliable, and suitable for adoption by local authorities in air quality assessment and management schemes.
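
    The trapezoidal fuzzification step mentioned above can be sketched as below. The PM10 class breakpoints are purely illustrative placeholders, not the membership functions, weights, or inference rules from the paper:

    ```python
    def trapezoid(x, a, b, c, d):
        """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)

    # Illustrative linguistic classes for PM10 in ug/m3 (hypothetical breakpoints)
    PM10_CLASSES = {
        "good":      (-1.0, 0.0, 30.0, 60.0),
        "moderate":  (30.0, 60.0, 90.0, 120.0),
        "unhealthy": (90.0, 120.0, 500.0, 501.0),
    }

    def fuzzify(value, classes):
        """Membership degree of one pollutant reading in each linguistic class."""
        return {name: trapezoid(value, *abcd) for name, abcd in classes.items()}
    ```

    A reading of 45 ug/m3 sits on the overlap of the first two classes, so it is partly "good" and partly "moderate"; the inference rules would then combine such degrees across pollutants.
    
    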

  6. Assessing the quality of healthcare provided to children.

    PubMed Central

    Mangione-Smith, R; McGlynn, E A

    1998-01-01

    OBJECTIVE: To present a conceptual framework for evaluating quality of care for children and adolescents, summarize the key issues related to developing measures to assess pediatric quality of care, examine some existing measures, and present evidence about their current level of performance. PRINCIPAL FINDINGS: Assessing the quality of care for children poses many challenges not encountered when making these measurements in the adult population. Children and adolescents (from this point forward referred to collectively as children unless differentiation is necessary) differ from adults in two clinically important ways (Jameson and Wehr 1993): (1) their normal developmental trajectory is characterized by change, and (2) they have differential morbidity. These factors contribute to the limitations encountered when developing measures to assess the quality of care for children. The movement of a child through the various stages of development makes it difficult to establish what constitutes a "normal" outcome and, by extension, what constitutes a poor outcome. Additionally, salient developmental outcomes that result from poor quality of care may not be observed for several years. This implies that poor outcomes may be observed when the child is receiving care from a delivery system other than the one that provided the low-quality care. Attributing the suboptimal outcome to the new delivery system would be inappropriate. Differential morbidity refers to the fact that the type, prevalence, and severity of illness experienced by children are measurably different from those observed in adults. Most children experience numerous self-limited illnesses of mild severity. A minority of children suffer from markedly more severe diseases. Thus, condition-specific measures in children are problematic to implement for routine assessments because of the extremely low incidence and prevalence of most severe pediatric diseases (Halfon 1996). However, children with these conditions are

  7. Integrating transcriptomics into triad-based soil-quality assessment.

    PubMed

    Chen, Guangquan; de Boer, Tjalf E; Wagelmans, Marlea; van Gestel, Cornelis A M; van Straalen, Nico M; Roelofs, Dick

    2014-04-01

    The present study examined how transcriptomics tools can be included in a triad-based soil-quality assessment to assess the toxicity of soils from riverbanks polluted by metals. To that end, the authors measured chemical soil properties and used the International Organization for Standardization guideline for ecotoxicological tests and a newly developed microarray for gene expression in the indicator soil arthropod Folsomia candida. Microarray analysis revealed that the oxidative stress response pathway was significantly affected in all soils except one. The data indicate that changes in cell redox homeostasis are a significant signature of metal stress. Finally, 32 genes showed significant dose-dependent expression with metal concentrations. They are promising genetic markers providing an early indication of the need for higher-tier testing of soil quality. During the bioassay, the toxicity of the least polluted soil could be removed by sterilization. The gene expression profile for this soil did not show a metal-related signature, confirming that a factor other than metals (most likely of biological origin) caused the toxicity. The present study demonstrates the feasibility and advantages of integrating transcriptomics into triad-based soil-quality assessment. Combining molecular and organismal life-history trait stress responses helps to identify causes of adverse effects in bioassays. Further validation is needed to verify the set of genes with dose-dependent expression patterns linked with toxic stress. PMID:24382659

  8. Semen quality assessments and their significance in reproductive technology.

    PubMed

    Kordan, W; Fraser, L; Wysocki, P; Strzezek, R; Lecewicz, M; Mogielnicka-Brzozowska, M; Dziekońska, A; Soliwoda, D; Koziorowska-Gilun, M

    2013-01-01

    Semen quality assessment methods are very important in predicting the fertilizing ability of preserved spermatozoa and in improving animal reproductive technology. This review discusses some of the current laboratory methods used for semen quality assessments, with references to their relevance in the evaluation of male fertility and semen preservation technologies. Semen quality assessment methods include sperm motility evaluations, analyzed with the computer-assisted semen analysis (CASA) system, and plasma membrane integrity evaluations using fluorescent stains, such as Hoechst 33258 (H33258), SYBR-14, propidium iodide (PI), ethidium homodimer (EthD) and 6-carboxyfluorescein diacetate (CFDA), and biochemical tests, such as the measurement of malondialdehyde (MDA) level. This review addresses the significance of specific fluorochromes and ATP measurements for the evaluation of the sperm mitochondrial status. Laboratory methods used for the evaluation of chromatin status, DNA integrity, and apoptotic changes in spermatozoa are also discussed. Special emphasis is placed on the application of proteomic techniques, such as two-dimensional (2-D) gel electrophoresis and liquid chromatography mass spectrometry (LC-MS/MS), for the identification of the properties and functions of seminal plasma proteins in order to define their role in fertilization-related processes. PMID:24597323

  9. A cloud model-based approach for water quality assessment.

    PubMed

    Wang, Dong; Liu, Dengfeng; Ding, Hao; Singh, Vijay P; Wang, Yuankun; Zeng, Xiankui; Wu, Jichun; Wang, Lachun

    2016-07-01

    Water quality assessment entails essentially a multi-criteria decision-making process accounting for qualitative and quantitative uncertainties and their transformation. Considering the uncertainties of randomness and fuzziness in water quality evaluation, a cloud model-based assessment approach is proposed. The cognitive cloud model, derived from information science, can realize the transformation between a qualitative concept and quantitative data, based on probability and statistics and fuzzy set theory. When applying the cloud model to practical assessment, three technical issues are addressed before the development of a complete cloud model-based approach: (1) a bilateral boundary formula with nonlinear boundary regression for parameter estimation, (2) a hybrid entropy-analytic hierarchy process technique for the calculation of weights, and (3) the mean of repeated simulations for determining the final degree of certainty. The cloud model-based approach is tested by evaluating the eutrophication status of 12 typical lakes and reservoirs in China and comparing it with four other methods: the Scoring Index method, the Variable Fuzzy Sets method, the Hybrid Fuzzy and Optimal model, and the Neural Networks method. The proposed approach yields membership information for each water quality status, which leads to the final status, and is found to be in agreement with the alternative methods and accurate. PMID:26995351
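
    The transformation between a qualitative concept (expectation Ex, entropy En, hyper-entropy He) and quantitative data is usually realized with the forward normal cloud generator; the sketch below shows that standard generator only, not the paper's parameter estimation, weighting, or repeated-simulation steps:

    ```python
    import math
    import random

    def forward_cloud(ex, en, he, n_drops, seed=0):
        """Forward normal cloud generator: turns the concept (Ex, En, He)
        into n_drops quantitative (x, certainty) pairs."""
        rng = random.Random(seed)
        drops = []
        for _ in range(n_drops):
            en_prime = abs(rng.gauss(en, he))   # entropy perturbed by hyper-entropy
            x = rng.gauss(ex, en_prime)         # one cloud drop
            mu = math.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))  # certainty degree
            drops.append((x, mu))
        return drops
    ```

    Averaging the certainty degrees over repeated runs is what yields a stable membership for each water quality status.
    
    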

  10. Assessing water quality trends in catchments with contrasting hydrological regimes

    NASA Astrophysics Data System (ADS)

    Sherriff, Sophie C.; Shore, Mairead; Mellander, Per-Erik

    2016-04-01

    Environmental resources are under increasing pressure to simultaneously achieve social, economic and ecological aims. Increasing demand for food production, for example, has expanded and intensified agricultural systems globally. In turn, greater risks of diffuse pollutant delivery (suspended sediment (SS) and phosphorus (P)) from land to water, due to higher stocking densities, fertilisation rates and soil erodibility, have been linked to the deterioration of the chemical and ecological quality of aquatic ecosystems. Development of sustainable and resilient management strategies for agro-ecosystems must detect and consider the impact of land use disturbance on water quality over time. However, assessment of multiple monitoring sites over a region is challenged by hydro-climatic fluctuations and the propagation of events through catchments with contrasting hydrological regimes. Simple water quality metrics, for example flow-weighted pollutant exports, have the potential to normalise the impact of catchment hydrology and better identify water quality fluctuations due to land use and short-term climate fluctuations. This paper assesses the utility of flow-weighted water quality metrics to evaluate periods and causes of critical pollutant transfer. Sub-hourly water quality (SS and P) and discharge data were collected from hydrometric monitoring stations at the outlets of five small (~10 km2) agricultural catchments in Ireland. The catchments possess contrasting land uses (predominantly grassland or arable) and soil drainage (poorly, moderately or well drained) characteristics. Flow-weighted water quality metrics were calculated and evaluated according to fluctuations in source pressure and rainfall. The metrics successfully identified fluctuations in pollutant export which could be attributed to land use changes through the agricultural calendar, i.e., groundcover fluctuations. In particular, catchments with predominantly poor or moderate soil drainage
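
    A flow-weighted mean concentration, the kind of simple metric discussed above, divides the total pollutant load by the total discharge so that catchments with contrasting flow regimes become comparable. A minimal sketch of the generic formula (not the authors' exact export calculation):

    ```python
    def flow_weighted_concentration(concentrations, discharges):
        """Flow-weighted mean concentration: total load / total discharge.

        concentrations: pollutant concentration per sampling interval
        discharges: discharge per sampling interval (same units throughout)
        """
        load = sum(c * q for c, q in zip(concentrations, discharges))
        return load / sum(discharges)
    ```

    High-flow intervals dominate the weighted mean, which is exactly why the metric damps hydro-climatic fluctuations relative to a plain average of concentrations.
    
    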

  11. Automated quality assessment in three-dimensional breast ultrasound images.

    PubMed

    Schwaab, Julia; Diez, Yago; Oliver, Arnau; Martí, Robert; van Zelst, Jan; Gubern-Mérida, Albert; Mourri, Ahmed Bensouda; Gregori, Johannes; Günther, Matthias

    2016-04-01

    Automated three-dimensional breast ultrasound (ABUS) is a valuable adjunct to x-ray mammography for breast cancer screening of women with dense breasts. High image quality is essential for proper diagnostics and computer-aided detection. We propose an automated image quality assessment system for ABUS images that detects artifacts at the time of acquisition. Therefore, we study three aspects that can corrupt ABUS images: the nipple position relative to the rest of the breast, the shadow caused by the nipple, and the shape of the breast contour on the image. Image processing and machine learning algorithms are combined to detect these artifacts based on 368 clinical ABUS images that have been rated manually by two experienced clinicians. At a specificity of 0.99, 55% of the images that were rated as low quality are detected by the proposed algorithms. The areas under the ROC curves of the single classifiers are 0.99 for the nipple position, 0.84 for the nipple shadow, and 0.89 for the breast contour shape. The proposed algorithms work fast and reliably, which makes them adequate for online evaluation of image quality during acquisition. The presented concept may be extended to further image modalities and quality aspects. PMID:27158633

  12. Microphone array power ratio for quality assessment of reverberated speech

    NASA Astrophysics Data System (ADS)

    Berkun, Reuven; Cohen, Israel

    2015-12-01

    Speech signals in enclosed environments are often distorted by reverberation and noise. In speech communication systems with several randomly distributed microphones, involving a dynamic speaker and an unknown source location, it is of great interest to monitor the perceived quality at each microphone and select the signal with the best quality. Most existing approaches to quality estimation require prior information or a clean reference signal, which is seldom available. In this paper, a practical non-intrusive method for quality assessment of reverberated speech signals is proposed. Using a statistical model of the reverberation process, we examine the energies measured by unidirectional elements in a microphone array. By measuring their power ratio, we obtain a measure of the amount of reverberation in the received acoustic signals. This measure is then utilized to derive a blind estimate of the direct-to-reverberation energy ratio in the room. The proposed approach attains a simple, reliable, and robust quality measure, demonstrated here through simulation results.
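
    The core idea, comparing the energies picked up by differently oriented unidirectional elements, can be illustrated with a plain power ratio in dB. This is a simplification of the paper's statistical estimator; the function below is our own construction:

    ```python
    import math

    def power_ratio_db(signal_a, signal_b):
        """Ratio of the mean powers of two microphone signals, in dB.

        Comparing an element aimed at the speaker (more direct sound) with
        one aimed away (mostly diffuse reverberation) gives a rough,
        reference-free indicator of the amount of reverberation.
        """
        p_a = sum(s * s for s in signal_a) / len(signal_a)
        p_b = sum(s * s for s in signal_b) / len(signal_b)
        return 10.0 * math.log10(p_a / p_b)
    ```

    In a highly reverberant room the two powers converge and the ratio approaches 0 dB; a large positive ratio indicates a dominant direct path.
    
    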

  13. Automatic Prostate Tracking and Motion Assessment in Volumetric Modulated Arc Therapy With an Electronic Portal Imaging Device

    SciTech Connect

    Azcona, Juan Diego; Li, Ruijiang; Mok, Edward; Hancock, Steven; Xing, Lei

    2013-07-15

    Purpose: To assess prostate intrafraction motion in volumetric modulated arc therapy treatments using cine megavoltage (MV) images acquired with an electronic portal imaging device (EPID). Methods and Materials: Ten prostate cancer patients were treated with volumetric modulated arc therapy using a Varian TrueBeam linear accelerator equipped with an EPID for acquiring cine MV images during treatment. Cine MV image acquisition was scheduled for single or multiple treatment fractions (between 1 and 8). A novel automatic fiducial detection algorithm was used that can handle irregular multileaf collimator apertures, field edges, fast leaf and gantry movement, and MV image noise and artifacts in the patient anatomy. All sets of images (approximately 25,000 images in total) were analyzed to measure the positioning accuracy of the implanted fiducial markers and assess prostate movement. Results: Prostate motion can vary greatly in magnitude among patients. Different motion patterns were identified, showing its unpredictability. The mean displacement and standard deviation of the intrafraction motion were generally less than 2.0 ± 2.0 mm in each of the spatial directions. In certain patients, however, the prostate was displaced more than 5 mm from its planned position in at least 1 spatial direction for 10% or more of the treatment time. The maximum prostate displacement observed was 13.3 mm. Conclusion: Prostate tracking and motion assessment were performed with MV imaging and an EPID. The amount of prostate motion observed suggests that patients will benefit from real-time monitoring. Megavoltage imaging can provide the basis for real-time prostate tracking using conventional linear accelerators.

  14. Quality Assessment of Collection 6 MODIS Atmospheric Science Products

    NASA Astrophysics Data System (ADS)

    Manoharan, V. S.; Ridgway, B.; Platnick, S. E.; Devadiga, S.; Mauoka, E.

    2015-12-01

    Since the launch of the NASA Terra and Aqua satellites in December 1999 and May 2002, respectively, atmosphere and land data acquired by the MODIS (Moderate Resolution Imaging Spectroradiometer) sensor on-board these satellites have been reprocessed five times at the MODAPS (MODIS Adaptive Processing System) located at NASA GSFC. The global land and atmosphere products use science algorithms developed by the NASA MODIS science team investigators. MODAPS completed Collection 6 reprocessing of MODIS Atmosphere science data products in April 2015 and is currently generating the Collection 6 products using the latest version of the science algorithms. This reprocessing has generated one of the longest time series of consistent data records for understanding cloud, aerosol, and other constituents in the earth's atmosphere. It is important to carefully evaluate and assess the quality of this data and remove any artifacts to maintain a useful climate data record. Quality Assessment (QA) is an integral part of the processing chain at MODAPS. This presentation will describe the QA approaches and tools adopted by the MODIS Land/Atmosphere Operational Product Evaluation (LDOPE) team to assess the quality of MODIS operational Atmospheric products produced at MODAPS. Some of the tools include global high resolution images, time series analysis and statistical QA metrics. The new high resolution global browse images with pan and zoom have provided the ability to perform QA of products in real time through synoptic QA on the web. This global browse generation has been useful in identifying production error, data loss, and data quality issues from calibration error, geolocation error and algorithm performance. A time series analysis for various science datasets in the Level-3 monthly product was recently developed for assessing any long term drifts in the data arising from instrument errors or other artifacts. This presentation will describe and discuss some test cases from the

  15. Soil bioassays as tools for sludge compost quality assessment

    SciTech Connect

    Domene, Xavier; Sola, Laura; Ramirez, Wilson; Alcaniz, Josep M.; Andres, Pilar

    2011-03-15

    Composting is a waste management technology that is becoming more widespread as a response to the increasing production of sewage sludge and the pressure for its reuse in soil. In this study, different bioassays (plant germination; earthworm survival, biomass and reproduction; and collembolan survival and reproduction) were assessed for their usefulness in compost quality assessment. Compost samples from two different composting plants were taken along the composting process, characterized, and submitted to the bioassays. Results from our study indicate that the noxious effects of some of the compost samples observed in the bioassays are related to the low organic matter stability of the composts and the enhanced release of decomposition end-products, with the exception of earthworms, which are favored by these conditions. Plant germination and collembolan reproduction inhibition were generally associated with uncomposted sludge, while earthworm total biomass and reproduction were enhanced by these materials. On the other hand, earthworm and collembolan survival were unaffected by the degree of composting of the wastes. This pattern was clear in one of the composting procedures assessed, but less so in the other, where the release of decomposition end-products was lower due to its higher stability, indicating the sensitivity and usefulness of bioassays for the quality assessment of composts.

  16. Individualized assessment of quality of life in idiopathic Parkinson's disease.

    PubMed

    Lee, Mark A; Walker, Richard W; Hildreth, Anthony J; Prentice, Wendy M

    2006-11-01

    The purpose of this study was to assess the quality of life (QoL) of patients with idiopathic Parkinson's disease (IPD). The Parkinson's Disease Questionnaire (PDQ-39) was compared with an individualized QoL tool: the Schedule for the Evaluation of Individual Quality of Life Direct Weighting (SEIQoL-DW). One hundred twenty-three patients underwent interviews using these tools, together with the Mini-Mental State Examination, the Beck Depression Inventory, a qualitative pain assessment, and the Palliative Care Assessment tool (for symptoms). The SEIQoL-DW was well tolerated and demonstrated that QoL was not only broad and highly individualistic but also determined more by psychosocial than physical issues. Of the 87 domains nominated by patients, the most common were family (87.8%), health (52.8%), leisure activities (36.6%), marriage (35%), and friends (30.9%). The SEIQoL index score was predicted by depression but not by disease stage. However, the PDQ-39 was predicted by disease stage, the number of symptoms, and depression. Direct comparison of the tools confirmed that the SEIQoL index score was predicted by the PDQ-39 domains of social support, cognitive impairment, and emotion. The use of the SEIQoL-DW challenges current thinking within IPD research regarding QoL and its assessment using the PDQ-39. PMID:16986143
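
    For reference, the SEIQoL-DW index is conventionally scored as a weighted sum of the patient's 0-100 ratings of the domains they nominated, with direct weights that sum to one; a minimal sketch of that scoring rule:

    ```python
    def seiqol_dw_index(levels, weights):
        """SEIQoL-DW index score.

        levels:  patient's rating of each nominated domain on a 0-100 scale
        weights: relative importance of each domain (from the direct-
                 weighting disk); must sum to 1
        """
        if abs(sum(weights) - 1.0) > 1e-9:
            raise ValueError("weights must sum to 1")
        return sum(level * w for level, w in zip(levels, weights))
    ```

    Because each patient nominates their own domains, two patients with the same index score may value entirely different aspects of life, which is the individualized character the abstract highlights.
    
    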

  17. An unsupervised method for quality assessment of despeckling: an evaluation on COSMO-SkyMed data

    NASA Astrophysics Data System (ADS)

    Aiazzi, B.; Alparone, L.; Argenti, F.; Baronti, S.; Bianchi, T.; Lapini, A.

    2011-11-01

    The goal of this paper is the development and evaluation of a fully automatic method for quality assessment of despeckled synthetic aperture radar (SAR) images. The rationale of the new approach is that any structural perturbation introduced by despeckling, e.g., a local bias of the mean, the blurring of a sharp edge, or the suppression of a point target, may be regarded either as the introduction of a new structure or as the suppression of an existing one. Conversely, plain removal of random noise does not change the structures in the image. Structures are identified as clusters in the scatterplot of the original versus the filtered image. Ideal filtering should produce clusters all aligned along the main diagonal; in practice, filtering distortions move clusters away from the diagonal. Cluster centers are detected with the mean shift algorithm. A structural change feature is defined at each pixel from the position and population of its off-diagonal cluster, following Shannon's information-theoretic concepts. Results on true SAR images (COSMO-SkyMed) are presented. Bayesian estimators (LMMSE, linear minimum mean-squared error; MAP, maximum a posteriori probability) operating in the undecimated wavelet domain have been coupled with segment-based processing. Quality measurements of despeckled SAR images carried out by means of the proposed method highlight the benefits of segmented MAP filtering.
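
    A heavily simplified illustration of the off-diagonal idea: once a cluster's center is located in the (original, filtered) scatterplot, its perpendicular distance from the identity line y = x flags a structural change, while a center on the line is consistent with pure noise removal. The mean shift detection and the Shannon-based per-pixel feature of the actual method are not reproduced here:

    ```python
    import math

    def off_diagonal_distance(orig_mean, filt_mean):
        """Perpendicular distance of a cluster center (orig_mean, filt_mean)
        from the identity line of the original-vs-filtered scatterplot.

        Zero means the filter preserved the local mean; a large value
        indicates a structural perturbation such as a local bias.
        """
        return abs(filt_mean - orig_mean) / math.sqrt(2.0)
    ```
    
    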

  18. Design of the National Water-Quality Assessment Program; occurrence and distribution of water-quality conditions

    USGS Publications Warehouse

    Gilliom, Robert J.; Alley, William M.; Gurtz, Martin E.

    1995-01-01

    The National Water-Quality Assessment Program assesses the status of and trends in the quality of the Nation's ground- and surface-water resources. The occurrence and distribution assessment component characterizes broad-scale water-quality conditions in relation to major contaminant sources and background conditions in each study area. The surface-water design focuses on streams. The ground-water design focuses on major aquifers, with emphasis on recently recharged ground water associated with human activities.

  19. A review of image quality assessment methods with application to computational photography

    NASA Astrophysics Data System (ADS)

    Maître, Henri

    2015-12-01

    Image quality assessment has been of major importance for several domains of the image industry, for instance restoration or communication and coding. New application fields are opening today with the increase of embedded processing power in cameras and the emergence of computational photography: automatic tuning, image selection, image fusion, image database building, etc. We review the literature on image quality evaluation, paying attention to the very different underlying hypotheses and results of the existing methods. We explain why they differ and for which applications each may be beneficial. We also underline their limits, especially for possible use in the novel domain of computational photography. Being developed to address different objectives, they propose answers on different aspects, which makes them sometimes complementary. However, they all remain limited in their capability to challenge the human expert, the said or unsaid ultimate goal. We first consider the methods based on retrieving the parameters of a signal, mostly in spectral analysis; we then explore the more global methods that qualify image quality in terms of noticeable defects or degradations, as is popular in the compression domain; in a third field, the image acquisition process is considered as a channel between the source and the receiver, allowing the use of information-theoretic tools to qualify the system in terms of entropy and information capacity. However, these different approaches hardly attack the most difficult part of the task, which is to measure the quality of a photograph in terms of aesthetic properties. To help address this problem, situated between philosophy, biology and psychology, we propose a brief review of the literature on qualifying beauty, present attempts to adapt these concepts to visual patterns, and initiate a reflection on what could be done in the field of photography.

  20. Positional Accuracy Assessment of the Openstreetmap Buildings Layer Through Automatic Homologous Pairs Detection: the Method and a Case Study

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.; Zamboni, G.

    2016-06-01

    OpenStreetMap (OSM) is currently the largest openly licensed collection of geospatial data. As OSM is increasingly exploited in a variety of applications, research has placed great attention on the assessment of its quality. This work focuses on assessing the quality of OSM buildings. While most of the studies available in the literature are limited to the evaluation of OSM building completeness, this work proposes an original approach to assess the positional accuracy of OSM buildings based on comparison with a reference dataset. The comparison relies on a quasi-automated detection of homologous pairs in the two datasets. Based on the homologous pairs found, warping algorithms such as affine transformations and multi-resolution splines can be applied to the OSM buildings to generate a new version having an optimal local match to the reference layer. A quality assessment of the OSM buildings of Milan Municipality (Northern Italy), covering an area of about 180 km2, is then presented. After computing some measures of completeness, the algorithm based on homologous points is run using the building layer of the official vector cartography of Milan Municipality as the reference dataset. Approximately 100000 homologous points are found, which show a systematic translation of about 0.4 m in both the X and Y directions and a mean distance of about 0.8 m between the datasets. Besides its efficiency and high degree of automation, the algorithm generates a warped version of the OSM buildings which, having by definition a closest match to the reference buildings, can eventually be integrated into the OSM database.
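
    Once homologous pairs are available, a least-squares affine transform from the OSM layer onto the reference layer can be estimated as sketched below; this is a generic NumPy formulation, not the authors' implementation (which also applies multi-resolution splines):

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine transform mapping src points onto dst points.

        Each row of src/dst is an (x, y) homologous point; solves
        dst ~= src @ A.T + t, returning the 2x2 matrix A and translation t.
        """
        src = np.asarray(src, dtype=float)
        dst = np.asarray(dst, dtype=float)
        design = np.hstack([src, np.ones((len(src), 1))])  # rows [x, y, 1]
        params, *_ = np.linalg.lstsq(design, dst, rcond=None)
        A, t = params[:2].T, params[2]
        return A, t
    ```

    With a purely translated dataset, the fit recovers an identity matrix and the translation vector, i.e., exactly the kind of systematic 0.4 m shift the abstract reports.
    
    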

  1. Automatic exposure control in multichannel CT with tube current modulation to achieve a constant level of image noise: Experimental assessment on pediatric phantoms

    SciTech Connect

    Brisse, Herve J.; Madec, Ludovic; Gaboriaud, Genevieve; Lemoine, Thomas; Savignoni, Alexia; Neuenschwander, Sylvia; Aubert, Bernard; Rosenwald, Jean-Claude

    2007-07-15

    Automatic exposure control (AEC) systems have been developed by computed tomography (CT) manufacturers to improve the consistency of image quality among patients and to control the absorbed dose. Since a multichannel helical CT scan may easily increase individual radiation doses, this technical improvement is of special interest in children, who are particularly sensitive to ionizing radiation, but little information is currently available regarding the precise performance of these systems on small patients. Our objective was to assess an AEC system on pediatric dose phantoms by studying the impact of phantom transmission and acquisition parameters on tube current modulation, on the resulting absorbed dose, and on image quality. We used a four-channel CT scanner working with a patient-size and z-axis-based AEC system designed to achieve a constant noise within the reconstructed images by automatically adjusting the tube current during acquisition. The study was performed with six cylindrical poly(methylmethacrylate) (PMMA) phantoms of variable diameters (10-32 cm) and one 5-year-old-equivalent pediatric anthropomorphic phantom. After a single scan projection radiograph (SPR), helical acquisitions were performed and images were reconstructed with a standard convolution kernel. Tube current modulation was studied with variable SPR settings (tube angle, mA, kVp) and helical parameters (6-20 HU noise indices, 80-140 kVp tube potential, 0.8-4 s tube rotation time, 5-20 mm x-ray beam thickness, 0.75-1.5 pitch, 1.25-10 mm image thickness, and variable acquisition and reconstruction fields of view). CT dose indices (CTDIvol) were measured, and the image quality criterion used was the standard deviation of the CT number measured in reconstructed images of PMMA material. Observed tube current levels were compared to the expected values from Brooks and Di Chiro's [R. A. Brooks and G. Di Chiro, Med. Phys. 3, 237-240 (1976)] model and calculated values (product of a reference value
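
    The Brooks and Di Chiro noise model referenced above implies that, to hold image noise constant, the tube current must rise exponentially with the attenuation path through the patient. A sketch under simplified monoenergetic assumptions; the function name and the attenuation coefficient are illustrative, not values from the paper:

    ```python
    import math

    def required_tube_current(ref_ma, ref_diameter_cm, diameter_cm, mu_per_cm=0.2):
        """Tube current keeping CT image noise constant across phantom sizes.

        Detected quanta scale as mA * exp(-mu * d); noise stays constant
        when detected quanta stay constant, so the current must scale as
        exp(mu * (d - d_ref)). mu_per_cm = 0.2 is an illustrative
        water-equivalent attenuation coefficient.
        """
        return ref_ma * math.exp(mu_per_cm * (diameter_cm - ref_diameter_cm))
    ```

    With these assumptions, each 5 cm of extra water-equivalent diameter multiplies the required current by about e, which is why AEC matters so much for small pediatric patients.
    
    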

  2. A Novel Image Quality Assessment With Globally and Locally Consilient Visual Quality Perception.

    PubMed

    Bae, Sung-Ho; Kim, Munchurl

    2016-05-01

    Computational models for image quality assessment (IQA) have been developed by exploring effective features that are consistent with the characteristics of a human visual system (HVS) for visual quality perception. In this paper, we first reveal that many existing features used in computational IQA methods can hardly characterize visual quality perception for local image characteristics and various distortion types. To solve this problem, we propose a new IQA method, called the structural contrast-quality index (SC-QI), by adopting a structural contrast index (SCI), which can well characterize local and global visual quality perceptions for various image characteristics with structural-distortion types. In addition to SCI, we devise some other perceptually important features for our SC-QI that can effectively reflect the characteristics of HVS for contrast sensitivity and chrominance component variation. Furthermore, we develop a modified SC-QI, called structural contrast distortion metric (SC-DM), which inherits desirable mathematical properties of valid distance metricability and quasi-convexity. So, it can effectively be used as a distance metric for image quality optimization problems. Extensive experimental results show that both SC-QI and SC-DM can very well characterize the HVS's properties of visual quality perception for local image characteristics and various distortion types, which is a distinctive merit of our methods compared with other IQA methods. As a result, both SC-QI and SC-DM have better performances with a strong consilience of global and local visual quality perception as well as with much lower computation complexity, compared with the state-of-the-art IQA methods. The MATLAB source codes of the proposed SC-QI and SC-DM are publicly available online at https://sites.google.com/site/sunghobaecv/iqa. PMID:27046873

  3. Meat Quality Assessment by Electronic Nose (Machine Olfaction Technology)

    PubMed Central

    Ghasemi-Varnamkhasti, Mahdi; Mohtasebi, Seyed Saeid; Siadat, Maryam; Balasubramanian, Sundar

    2009-01-01

    Over the last twenty years, newly developed chemical sensor systems (so called “electronic noses”) have made odor analyses possible. These systems involve various types of electronic chemical gas sensors with partial specificity, as well as suitable statistical methods enabling the recognition of complex odors. As commercial instruments have become available, a substantial increase in research into the application of electronic noses in the evaluation of volatile compounds in food, cosmetic and other items of everyday life is observed. At present, the commercial gas sensor technologies comprise metal oxide semiconductors, metal oxide semiconductor field effect transistors, organic conducting polymers, and piezoelectric crystal sensors. Further sensors based on fibreoptic, electrochemical and bi-metal principles are still in the developmental stage. Statistical analysis techniques range from simple graphical evaluation to multivariate analysis such as artificial neural network and radial basis function. The introduction of electronic noses into the area of food is envisaged for quality control, process monitoring, freshness evaluation, shelf-life investigation and authenticity assessment. Considerable work has already been carried out on meat, grains, coffee, mushrooms, cheese, sugar, fish, beer and other beverages, as well as on the odor quality evaluation of food packaging material. This paper describes the applications of these systems for meat quality assessment, where fast detection methods are essential for appropriate product management. The results suggest the possibility of using this new technology in meat handling. PMID:22454572
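    The pattern-recognition step described above — statistical classification of multi-sensor odor responses — can be illustrated with a minimal nearest-centroid classifier. The sensor count, class labels and response values below are invented for illustration, not drawn from any real electronic-nose dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors = 8
# Hypothetical training "odor prints" for two meat-freshness classes.
fresh = rng.normal(0.2, 0.05, (20, n_sensors))
spoiled = rng.normal(0.7, 0.05, (20, n_sensors))

centroids = {"fresh": fresh.mean(axis=0), "spoiled": spoiled.mean(axis=0)}

def classify(sample):
    # Assign the sample to the class with the nearest centroid (Euclidean).
    return min(centroids, key=lambda k: np.linalg.norm(sample - centroids[k]))

print(classify(rng.normal(0.68, 0.05, n_sensors)))
```

    Real systems replace this rule with multivariate methods such as artificial neural networks, as the abstract notes, but the pipeline shape (train on labeled sensor arrays, then classify new samples) is the same.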

  4. Terrestrial Method for Airborne Lidar Quality Control and Assessment

    NASA Astrophysics Data System (ADS)

    Alsubaie, N. M.; Badawy, H. M.; Elhabiby, M. M.; El-Sheimy, N.

    2014-11-01

    Most LiDAR systems do not provide the end user with the calibration and acquisition procedures needed to validate the quality of the data acquired by the airborne system. Such systems therefore require data Quality Control (QC) and assessment procedures to verify the accuracy of the laser footprints, particularly at building edges. This paper introduces an efficient method for validating the quality of airborne LiDAR point cloud data using terrestrial laser scanning data integrated with edge detection techniques. The method is based on detecting building edges independently in the two systems. Building edges are extracted from the airborne data using an algorithm that flags points whose neighbouring heights, within a given search radius of the centre point, have a standard deviation exceeding a certain threshold. The algorithm is adaptive to different point densities. The approach is combined with an innovative edge detection technique for terrestrial laser scanning point clouds based on height and point density constraints. Finally, statistical analysis and assessment are applied to compare the two systems in terms of edge extraction precision, a prerequisite for 3D city modelling generated from heterogeneous LiDAR systems.
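    The std-of-neighbour-heights idea can be sketched as follows; the search radius and threshold values are illustrative, not the paper's calibrated settings.

```python
import numpy as np

def edge_points(points, radius=1.0, std_thresh=0.5):
    """Flag LiDAR points whose neighbourhood height spread exceeds a
    threshold. `points` is an (N, 3) array of x, y, z coordinates."""
    xy, z = points[:, :2], points[:, 2]
    flags = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(xy):
        nbr = z[np.linalg.norm(xy - p, axis=1) <= radius]
        if nbr.size > 1 and nbr.std() > std_thresh:
            flags[i] = True   # large height spread => likely building edge
    return flags

# Synthetic scene: flat ground (z = 0) next to a flat roof (z = 3).
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
z = np.where(xs >= 5, 3.0, 0.0)
pts = np.column_stack([xs.ravel(), ys.ravel(), z.ravel()])
mask = edge_points(pts, radius=1.0, std_thresh=0.5)
print(mask.sum())  # only points straddling the x = 5 step are flagged
```

    On this grid only the two point columns adjacent to the height step are flagged, which is the behaviour a building-edge extractor wants.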

  5. An assessment of drinking-water quality post-Haiyan

    PubMed Central

    Anarna, Maria Sonabel; Fernando, Arturo

    2015-01-01

    Introduction Access to safe drinking-water is one of the most important public health concerns in an emergency setting. This descriptive study reports on an assessment of water quality in drinking-water supply systems in areas affected by Typhoon Haiyan immediately following and 10 months after the typhoon. Methods Water quality testing and risk assessments of the drinking-water systems were conducted three weeks and 10 months post-Haiyan. Portable test kits were used to determine the presence of Escherichia coli and the level of residual chlorine in water samples. The level of risk was fed back to the water operators for their action. Results Of the 121 water samples collected three weeks post-Haiyan, 44% were contaminated, while 65% (244/373) of samples were found positive for E. coli 10 months post-Haiyan. For the three components of drinking-water systems – source, storage and distribution – the proportions of contaminated systems were 70%, 67% and 57%, respectively, 10 months after Haiyan. Discussion Vulnerability to faecal contamination was attributed to weak water safety programmes in the drinking-water supply systems. Poor water quality can be prevented or reduced by developing and implementing a water safety plan for the systems. This, in turn, will help prevent waterborne disease outbreaks caused by contaminated water post-disaster. PMID:26767136

  6. Subjective Quality Assessment of Underwater Video for Scientific Applications

    PubMed Central

    Moreno-Roldán, José-Miguel; Luque-Nieto, Miguel-Ángel; Poncela, Javier; Díaz-del-Río, Víctor; Otero, Pablo

    2015-01-01

    Underwater video services could be a key application for improving scientific knowledge of the vast oceanic resources of our planet. However, limitations in the capacity of currently available technology for underwater networks (UWSNs) raise the question of the feasibility of these services. When transmitting video, the main constraints are the limited bandwidth and the high propagation delays. At the same time, the service performance depends on the needs of the target group. This paper considers the problem of estimating the Mean Opinion Score (a standard quality measure) in UWSNs based on objective methods and addresses the topic of quality assessment in potential underwater video services from a subjective point of view. The experimental design and the results of a test planned according to standardized psychometric methods are presented. The subjects used in the quality assessment test were ocean scientists. Video sequences were recorded during actual exploration expeditions and were processed to simulate conditions similar to those that might be found in UWSNs. Our experimental results show that videos are considered useful for scientific purposes even at very low bitrates. PMID:26694400
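    The Mean Opinion Score mentioned above is simply the arithmetic mean of the subjective ratings, usually reported with a confidence interval. A minimal sketch with invented ratings on the standard 1-5 scale:

```python
import math
import statistics

# Hypothetical ratings of one video sequence by eight subjects (1-5 scale).
ratings = [4, 5, 3, 4, 4, 5, 3, 4]

mos = statistics.mean(ratings)
# Normal-approximation 95% confidence interval of the mean.
ci95 = 1.96 * statistics.stdev(ratings) / math.sqrt(len(ratings))
print(f"MOS = {mos:.2f} +/- {ci95:.2f}")
```

    Standardized test plans (e.g. the psychometric methods the abstract refers to) also prescribe subject screening and outlier rejection before the averaging step.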

  7. [Soil quality assessment of forest stand in different plantation ecosystems].

    PubMed

    Huang, Yu; Wang, Silong; Feng, Zongwei; Gao, Hong; Wang, Qingkui; Hu, Yalin; Yan, Shaokui

    2004-12-01

    After a clear-cutting of the first-generation Cunninghamia lanceolata plantation in 1982, three plantation ecosystems, a pure Michelia macclurei stand (PMS), a pure Chinese-fir stand (PCS) and their mixed stand (MCM), were established in spring 1983, and their effects on soil characteristics were evaluated by measuring soil physical, chemical, microbiological and biochemical parameters. After 20 years of plantation, all test indices showed differences among the different forest management models. Both PMS and MCM had a favorable effect on soil fertility maintenance. Soil quality assessment showed that some soil functions, e.g., water availability, nutrient availability, root suitability and the soil quality index, were all at a moderate level under the mixed and pure PMS stands, whereas they were at a relatively lower level under the successive PCS stand. The results also showed close correlations between soil total organic C (TOC), cation exchange capacity (CEC), microbial biomass-C (Cmic) and other soil physical, chemical and biological indices. Therefore, TOC, CEC and Cmic could be used as indicators in assessing soil quality in this study area. In addition, there were also positive correlations between soil microbial biomass-C and TOC, soil microbial biomass-N and total N, and soil microbial biomass-P and total P in the present study. PMID:15825426

  8. Assessing prosocial message effectiveness: effects of message quality, production quality, and persuasiveness.

    PubMed

    Austin, E W; Pinkleton, B; Fujioka, Y

    1999-01-01

    The purpose of this study was to determine whether the effectiveness of prosocial messages is compromised by poor design. A receiver-oriented content analysis (N = 246) was used to assess college students' perceptions of the message quality, production quality, and persuasiveness of advertisements and prosocial advertisements regarding alcohol. After providing background information, respondents rated a series of video clips on a variety of criteria guided by the Message Interpretation Process (MIP) model. Results indicated that prosocial advertisements were rated as higher in quality than were commercial advertisements overall and on logic-based criteria, but prosocial advertisements nevertheless had weaker relationships to viewers' beliefs and reported behaviors relevant to drinking alcohol. Heavier drinkers rated commercial advertisements more positively than did lighter/nondrinkers. They were less skeptical of persuasive messages and rated prosocial advertisements lower in effectiveness and commercial advertisements higher in effectiveness. PMID:10977288

  9. Automatic Spiral Analysis for Objective Assessment of Motor Symptoms in Parkinson’s Disease

    PubMed Central

    Memedi, Mevludin; Sadikov, Aleksander; Groznik, Vida; Žabkar, Jure; Možina, Martin; Bergquist, Filip; Johansson, Anders; Haubenberger, Dietrich; Nyholm, Dag

    2015-01-01

    A challenge for the clinical management of advanced Parkinson’s disease (PD) patients is the emergence of fluctuations in motor performance, which represents a significant source of disability during activities of daily living of the patients. There is a lack of objective measurement of treatment effects for in-clinic and at-home use that can provide an overview of the treatment response. The objective of this paper was to develop a method for objective quantification of advanced PD motor symptoms related to off episodes and peak dose dyskinesia, using spiral data gathered by a touch screen telemetry device. More specifically, the aim was to objectively characterize motor symptoms (bradykinesia and dyskinesia), to help in automating the process of visual interpretation of movement anomalies in spirals as rated by movement disorder specialists. Digitized upper limb movement data of 65 advanced PD patients and 10 healthy (HE) subjects were recorded as they performed spiral drawing tasks on a touch screen device in their home environment settings. Several spatiotemporal features were extracted from the time series and used as inputs to machine learning methods. The methods were validated against ratings on animated spirals scored by four movement disorder specialists who visually assessed a set of kinematic features and the motor symptom. The ability of the method to discriminate between PD patients and HE subjects and the test-retest reliability of the computed scores were also evaluated. Computed scores correlated well with mean visual ratings of individual kinematic features. The best performing classifier (Multilayer Perceptron) classified the motor symptom (bradykinesia or dyskinesia) with an accuracy of 84% and area under the receiver operating characteristics curve of 0.86 in relation to visual classifications of the raters. In addition, the method provided high discriminating power when distinguishing between PD patients and HE subjects as well as had good test-retest reliability.

  10. Automatic Spiral Analysis for Objective Assessment of Motor Symptoms in Parkinson's Disease.

    PubMed

    Memedi, Mevludin; Sadikov, Aleksander; Groznik, Vida; Žabkar, Jure; Možina, Martin; Bergquist, Filip; Johansson, Anders; Haubenberger, Dietrich; Nyholm, Dag

    2015-01-01

    A challenge for the clinical management of advanced Parkinson's disease (PD) patients is the emergence of fluctuations in motor performance, which represents a significant source of disability during activities of daily living of the patients. There is a lack of objective measurement of treatment effects for in-clinic and at-home use that can provide an overview of the treatment response. The objective of this paper was to develop a method for objective quantification of advanced PD motor symptoms related to off episodes and peak dose dyskinesia, using spiral data gathered by a touch screen telemetry device. More specifically, the aim was to objectively characterize motor symptoms (bradykinesia and dyskinesia), to help in automating the process of visual interpretation of movement anomalies in spirals as rated by movement disorder specialists. Digitized upper limb movement data of 65 advanced PD patients and 10 healthy (HE) subjects were recorded as they performed spiral drawing tasks on a touch screen device in their home environment settings. Several spatiotemporal features were extracted from the time series and used as inputs to machine learning methods. The methods were validated against ratings on animated spirals scored by four movement disorder specialists who visually assessed a set of kinematic features and the motor symptom. The ability of the method to discriminate between PD patients and HE subjects and the test-retest reliability of the computed scores were also evaluated. Computed scores correlated well with mean visual ratings of individual kinematic features. The best performing classifier (Multilayer Perceptron) classified the motor symptom (bradykinesia or dyskinesia) with an accuracy of 84% and area under the receiver operating characteristics curve of 0.86 in relation to visual classifications of the raters. In addition, the method provided high discriminating power when distinguishing between PD patients and HE subjects as well as had good test-retest reliability.
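    One spatiotemporal feature of the kind such methods extract from digitized spirals can be sketched as follows. The RMS second difference of the pen trajectory is a generic measure of high-frequency irregularity, chosen here for illustration rather than taken from the paper's feature set; the spiral data are synthetic.

```python
import numpy as np

def irregularity(x):
    """Root-mean-square second difference of a coordinate time series;
    large values indicate tremor-like high-frequency jitter."""
    return np.sqrt(np.mean(np.diff(x, n=2) ** 2))

# Ideal Archimedean spiral sampled at 50 Hz for 10 s: x(t) = theta*cos(theta).
t = np.linspace(0.0, 10.0, 500)
theta = 4 * np.pi * t / 10
x_smooth = theta * np.cos(theta)
x_jittery = x_smooth + np.random.default_rng(2).normal(0, 0.2, t.size)

print(irregularity(x_smooth))    # small for a smooth drawing
print(irregularity(x_jittery))   # much larger with tremor-like noise
```

    Features like this, computed per drawing, are what feed the machine learning step that the abstract validates against specialist ratings.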

  11. Development and initial evaluation of a semi-automatic approach to assess perivascular spaces on conventional magnetic resonance images

    PubMed Central

    Wang, Xin; Valdés Hernández, Maria del C.; Doubal, Fergus; Chappell, Francesca M.; Piper, Rory J.; Deary, Ian J.; Wardlaw, Joanna M.

    2016-01-01

    Purpose Perivascular spaces (PVS) are associated with ageing, cerebral small vessel disease, inflammation and increased blood brain barrier permeability. Most studies to date use visual rating scales to assess PVS, but these are prone to observer variation. Methods We developed a semi-automatic computational method that extracts PVS on bilateral ovoid basal ganglia (BG) regions on intensity-normalised T2-weighted magnetic resonance images. It uses Analyze™10.0 and was applied to 100 mild stroke patients’ datasets. We used linear regression to test association between BGPVS count, volume and visual rating scores; and between BGPVS count & volume, white matter hyperintensity (WMH) rating scores (periventricular: PVH; deep: DWMH) & volume, atrophy rating scores and brain volume. Results In the 100 patients WMH ranged from 0.4 to 119 ml, and total brain tissue volume from 0.65 to 1.45 l. BGPVS volume increased with BGPVS count (67.27, 95%CI [57.93 to 76.60], p < 0.001). BGPVS count was positively associated with WMH visual rating (PVH: 2.20, 95%CI [1.22 to 3.18], p < 0.001; DWMH: 1.92, 95%CI [0.99 to 2.85], p < 0.001), WMH volume (0.065, 95%CI [0.034 to 0.096], p < 0.001), and whole brain atrophy visual rating (1.01, 95%CI [0.49 to 1.53], p < 0.001). BGPVS count increased as brain volume (as % of ICV) decreased (−0.33, 95%CI [−0.53 to −0.13], p = 0.002). Comparison with existing method BGPVS count and volume increased with the overall increase of BGPVS visual scores (2.11, 95%CI [1.36 to 2.86] for count and 0.022, 95%CI [0.012 to 0.031] for volume, p < 0.001). Distributions for PVS count and visual scores were also similar. Conclusions This semi-automatic method is applicable to clinical protocols and offers quantitative surrogates for PVS load. It shows good agreement with a visual rating scale and confirmed that BGPVS are associated with WMH and atrophy measurements. PMID:26416614
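    The associations reported above are ordinary least-squares slopes with 95% confidence intervals. A minimal sketch on synthetic data (the variable names, effect size and noise level are illustrative, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(3)
wmh = rng.uniform(0.4, 119, 100)                   # predictor: WMH volume, ml
pvs = 5 + 0.065 * wmh + rng.normal(0, 1.5, 100)    # response with known slope

# OLS fit with intercept; residual variance gives the slope standard error.
X = np.column_stack([np.ones_like(wmh), wmh])
beta, res, *_ = np.linalg.lstsq(X, pvs, rcond=None)
n, p = X.shape
sigma2 = res[0] / (n - p)
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
lo, hi = beta[1] - 1.96 * se, beta[1] + 1.96 * se
print(f"slope = {beta[1]:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

    With this setup the estimated slope recovers the simulated 0.065, matching the form of the reported association between BGPVS count and WMH volume.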

  12. CBCT-based bone quality assessment: are Hounsfield units applicable?

    PubMed Central

    Jacobs, R; Singer, S R; Mupparapu, M

    2015-01-01

    CBCT is a widely applied imaging modality in dentistry. It enables the visualization of high-contrast structures of the oral region (bone, teeth, air cavities) at a high resolution. CBCT is now commonly used for the assessment of bone quality, primarily for pre-operative implant planning. Traditionally, bone quality parameters and classifications were primarily based on bone density, which could be estimated through the use of Hounsfield units derived from multidetector CT (MDCT) data sets. However, there are crucial differences between MDCT and CBCT, which complicates the use of quantitative gray values (GVs) for the latter. From experimental as well as clinical research, it can be seen that great variability of GVs can exist on CBCT images owing to various reasons that are inherently associated with this technique (i.e. the limited field size, relatively high amount of scattered radiation and limitations of currently applied reconstruction algorithms). Although attempts have been made to correct for GV variability, it can be postulated that the quantitative use of GVs in CBCT should be generally avoided at this time. In addition, recent research and clinical findings have shifted the paradigm of bone quality from a density-based analysis to a structural evaluation of the bone. The ever-improving image quality of CBCT allows it to display trabecular bone patterns, indicating that it may be possible to apply structural analysis methods that are commonly used in micro-CT and histology. PMID:25315442

  13. Subjective quality assessment of numerically reconstructed compressed holograms

    NASA Astrophysics Data System (ADS)

    Ahar, Ayyoub; Blinder, David; Bruylants, Tim; Schretter, Colas; Munteanu, Adrian; Schelkens, Peter

    2015-09-01

    Recently several papers have reported efficient techniques to compress digital holograms. Typically, the rate-distortion performance of these solutions was evaluated by means of objective metrics such as the Peak Signal-to-Noise Ratio (PSNR) or the Structural Similarity Index Measure (SSIM), applied either to the decoded hologram or to the reconstructed compressed hologram. Given the specific nature of holograms, it is relevant to question to what extent these metrics provide information on the effective visual quality of the reconstructed hologram. Since no holographic display technology is available today that would allow for a proper subjective evaluation experiment, we propose in this paper a methodology based on assessing the quality of a reconstructed compressed hologram on a regular 2D display. In parallel, we also evaluate several coding engines, namely JPEG configured with the default perceptual quantization tables and with uniform quantization tables, JPEG 2000, JPEG 2000 extended with arbitrary packet decompositions and direction-adaptive filters, and H.265/HEVC configured in intra-frame mode. The experimental results indicate that the perceived visual quality and the objective measures are well correlated. Moreover, the superiority of the HEVC and the extended JPEG 2000 coding engines was confirmed, particularly at lower bitrates.
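    The PSNR metric used in such rate-distortion evaluations is straightforward to compute; a minimal sketch on synthetic 8-bit images:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images of equal shape."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak**2 / mse)

rng = np.random.default_rng(4)
ref = rng.integers(0, 256, (64, 64)).astype(float)
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
print(f"{psnr(ref, noisy):.1f} dB")
```

    The paper's point is precisely that such pixel-domain scores, computed on hologram data, may not track the visual quality of the optical reconstruction.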

  14. Computerized quality assessment of phonocardiogram signal measurement-acquisition parameters.

    PubMed

    Naseri, H; Homaeinezhad, M R

    2012-08-01

    The major focus of this study is to describe and develop a phonocardiogram (PCG) signal measurement binary quality assessment (accept-reject) technique. The proposed algorithm is composed of three major stages: preprocessing, numerical-based quality measurement and advanced measurement subroutines. The preprocessing step includes normalization, wavelet-based threshold denoising and baseline wander removal. The numerical-based quality measurement routine includes two separate stages based on the energy and the level of noise of the PCG signal. The advanced quality measurement step is mainly based on the interval of the S1 and S2 sounds. The proposed technique was applied to 400 2-min PCG signals gathered by volunteers with a range of skills in PCG data acquisition from patients with different types of valve diseases, from the 2R (aortic), 2L (pulmonic), 4R (apex) and 4L (tricuspid) positions, using an electronic stethoscope (3M Littmann(®) 3200, 4 kHz sampling frequency). The dataset was first annotated manually; applying the proposed algorithm then achieved an accuracy of 95.25%. PMID:22650759
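    The energy-based stage of such a binary accept-reject check can be sketched as follows: a recording is rejected when its short-term energy envelope shows no clear heart-sound peaks above the background level. The framing, threshold and synthetic signal are illustrative, not the authors' calibrated parameters.

```python
import numpy as np

def accept_pcg(signal, fs=4000, frame_ms=25, peak_to_floor=4.0):
    """Accept a PCG recording if its frame-energy envelope has peaks
    well above the median (background) energy level."""
    frame = int(fs * frame_ms / 1000)
    n = len(signal) // frame
    energy = (signal[: n * frame].reshape(n, frame) ** 2).mean(axis=1)
    return energy.max() > peak_to_floor * np.median(energy)

fs = 4000
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(5)
noise = 0.05 * rng.standard_normal(t.size)
# Crude S1/S2-like bursts: short 60 Hz tone pips twice per second.
bursts = np.sin(2 * np.pi * 60 * t) * (np.sin(2 * np.pi * t) ** 20)

print(accept_pcg(bursts + noise, fs))  # clear periodic peaks: accept
print(accept_pcg(noise, fs))           # flat energy envelope: reject
```

    The real algorithm combines this with a noise-level stage and an S1/S2 interval check, as the abstract describes.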

  15. Storage stability and quality assessment of processed cereal brans.

    PubMed

    Sharma, Savita; Kaur, Satinder; Dar, B N; Singh, Baljit

    2014-03-01

    Quality improvement of cereal brans, a health-promoting ingredient for functional foods, is an emerging research concept due to their low shelf stability and the presence of non-nutrient components. A study was conducted to evaluate the storage quality of processed milling industry byproducts so that these can be potentially utilized as a dietary fibre source. Different cereal brans (wheat, rice, barley and oat) were processed by dry, wet, microwave heating, extrusion cooking and chemical methods at variable conditions. Processed brans were stored in high density polyethylene (HDPE) pouches at ambient and refrigeration temperatures. Quality assessments (moisture, free fatty acids, water activity and physical quality) of the brans were performed for up to six months at one-month intervals. Free fatty acid content, moisture and water activity of the cereal brans remained stable during the entire storage period. Among the treatments, extrusion processing was the most effective for stability. Processing treatments and storage temperature had a positive effect on extending the shelf life of all cereal brans. Therefore, processed cereal brans can be used as a dietary fortificant for the development of value-added food products. PMID:24587536

  16. Assessment of river Po sediment quality by micropollutant analysis.

    PubMed

    Camusso, Marina; Galassi, Silvana; Vignati, Davide

    2002-05-01

    Trace metals, PCB congeners and DDT homologues were determined in composite sediment samples collected from 10 representative sites along the river Po in two separate seasons. The aim was to identify the most anthropogenically impacted areas for future monitoring programmes and to aid development of Italian sediment quality criteria. The surface samples were collected during low flow conditions. Trace metal concentrations were assayed by electrothermal (Cd, Co, Cr, Cu, Ni, Pb), flame (Fe, Mn, Zn) or hydride generation (As) atomic absorption spectrometry after microwave assisted acid digestion. Hg was determined on solid samples by automated analyser. Organic microcontaminants were determined by gas-chromatography with 63Ni electron capture detector after Soxhlet extraction. Concentrations of trace metals, total PCB and DDT homologues showed two distinct peaks at the sites immediately downstream of Turin and Milan, respectively, and in each case decreased progressively further downstream. Principal component analysis identified three major factors (from a multi-dimensional space of 35 variables) which explained 85-90% of the total observed variance. The first and second factors corresponded to anthropogenic inputs and geological factors on sediment quality; the third included seasonal processes of minor importance. Sediment quality assessment identified Cd, Cu, Hg, Pb, Zn and organic microcontaminants as posing the most serious threats to river sediment quality. A reference site within the Po basin provided useful background values. Moderate pollution by organochlorine compounds was ascribed both to local sources and to atmospheric deposition. PMID:12153015
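    The principal component analysis used above, applied to a sites-by-variables matrix to identify a few dominant factors, can be sketched via the SVD. The 10 x 6 data matrix below is synthetic, built from two underlying factors plus noise; the sediment study's 35-variable data are of the same shape in spirit.

```python
import numpy as np

rng = np.random.default_rng(6)
latent = rng.standard_normal((10, 2))             # two underlying factors
loadings = rng.standard_normal((2, 6))
data = latent @ loadings + 0.1 * rng.standard_normal((10, 6))

# PCA: centre the columns, then take singular values of the data matrix.
centered = data - data.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)                   # variance explained per component
print(np.round(explained[:3], 3))
```

    With two real factors and weak noise, the first two components carry nearly all the variance, mirroring the paper's finding that three factors explained 85-90% of 35 variables.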

  17. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners using a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, with reconstructed images obtained with the STIR software for tomographic image reconstruction on a computing cluster. The PET scanner simulated in this study was the GE DiscoveryST. The plane source, a TLC plate, was simulated as a layer of silica gel on aluminum (Al) foil substrates immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper) parameter values. MTF values were found to increase up to the 12th iteration and to remain almost constant thereafter. The MTF improves at lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research toward the further development of PET and SPECT scanners through GATE simulations.
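    The MTF can be obtained as the normalized Fourier magnitude of a measured line spread function (LSF). A minimal sketch, with a Gaussian LSF standing in for a profile taken across the reconstructed plane-source image (the pixel size and blur width are illustrative):

```python
import numpy as np

pixel_mm = 0.5
x = (np.arange(256) - 128) * pixel_mm
sigma = 2.0                                  # mm, stand-in system blur
lsf = np.exp(-x**2 / (2 * sigma**2))         # line spread function sample

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                # normalize to 1 at zero frequency
freqs = np.fft.rfftfreq(x.size, d=pixel_mm)  # spatial frequency, cycles/mm

# Frequency where the MTF drops to 10%, a common resolution figure of merit.
f10 = freqs[np.argmax(mtf < 0.1)]
print(f"MTF10 = {f10:.3f} cycles/mm")
```

    For a Gaussian LSF the MTF is itself Gaussian, exp(-2*pi^2*sigma^2*f^2), so the computed 10% frequency can be checked analytically.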

  18. Weld quality assessment using an edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, Rajesh

    2010-03-01

    Heat input during the welding process and subsequent re-cooling change the microstructure, hardness, toughness, and cracking susceptibility in the heat affected zone (HAZ). The weld quality of a weldment largely depends on the area of the HAZ. Determining the exact area of the HAZ by manual stereological methods and conventional visual inspection techniques is a difficult task, as these evaluation techniques approximate the complex shape of the HAZ as a combination of simplified shapes such as rectangles and triangles. In this paper, a filtering scheme based on morphology, thresholding and edge detection is implemented on images of weldments to assess weld quality. The HAZs of mild steel specimens welded at different welding parameters by the Metal Active Gas/Gas Metal Arc Welding process are compared and presented.

  19. Weld quality assessment using an edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, Rajesh

    2009-12-01

    Heat input during the welding process and subsequent re-cooling change the microstructure, hardness, toughness, and cracking susceptibility in the heat affected zone (HAZ). The weld quality of a weldment largely depends on the area of the HAZ. Determining the exact area of the HAZ by manual stereological methods and conventional visual inspection techniques is a difficult task, as these evaluation techniques approximate the complex shape of the HAZ as a combination of simplified shapes such as rectangles and triangles. In this paper, a filtering scheme based on morphology, thresholding and edge detection is implemented on images of weldments to assess weld quality. The HAZs of mild steel specimens welded at different welding parameters by the Metal Active Gas/Gas Metal Arc Welding process are compared and presented.
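    The thresholding step of such a scheme can be sketched as follows: instead of approximating the HAZ with rectangles or triangles, its area is estimated by counting pixels inside an intensity band. The synthetic "macrograph", intensity band and pixel calibration below are illustrative.

```python
import numpy as np

# Synthetic weld macrograph: bright nugget (200), mid-gray HAZ ring (130),
# dark base metal (60), on a 200 x 200 pixel grid.
h, w = 200, 200
y, x = np.mgrid[0:h, 0:w]
r = np.hypot(x - w / 2, y - h / 2)
img = np.where(r < 30, 200, np.where(r < 50, 130, 60)).astype(np.uint8)

pixel_area_mm2 = 0.01                       # assumed calibration, mm^2 per pixel
haz_mask = (img > 100) & (img < 160)        # intensity band isolating the HAZ
haz_area = haz_mask.sum() * pixel_area_mm2
print(f"HAZ area ~= {haz_area:.1f} mm^2")   # ring area: pi*(50^2 - 30^2)*0.01 ~ 50.3
```

    On real macrographs the intensity band would be preceded by the morphological filtering and edge detection the abstract describes, to suppress etching artifacts before counting.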

  20. Imaging quality assessment of multi-modal miniature microscope.

    PubMed

    Lee, Junwon; Rogers, Jeremy; Descour, Michael; Hsu, Elizabeth; Aaron, Jesse; Sokolov, Konstantin; Richards-Kortum, Rebecca

    2003-06-16

    We are developing a multi-modal miniature microscope (4M device) to image morphology and cytochemistry in vivo and provide better delineation of tumors. The 4M device is designed to be a complete microscope on a chip, including optical, micro-mechanical, and electronic components. It has advantages such as compact size and capability for microscopic-scale imaging. This paper presents an optics-only prototype 4M device, the very first imaging system made of sol-gel material. The microoptics used in the 4M device has a diameter of 1.3 mm. Metrology of the imaging quality assessment of the prototype device is presented. We describe causes of imaging performance degradation in order to improve the fabrication process. We built a multi-modal imaging test-bed to measure first-order properties and to assess the imaging quality of the 4M device. The 4M prototype has a field of view of 290 microm in diameter, a magnification of -3.9, a working distance of 250 microm and a depth of field of 29.6+/-6 microm. We report the modulation transfer function (MTF) of the 4M device as a quantitative metric of imaging quality. Based on the MTF data, we calculated a Strehl ratio of 0.59. In order to investigate the cause of imaging quality degradation, the surface characterization of lenses in 4M devices is measured and reported. We also imaged both polystyrene microspheres similar in size to epithelial cell nuclei and cervical cancer cells. Imaging results indicate that the 4M prototype can resolve cellular detail necessary for detection of precancer. PMID:19466016

  1. Assessment of selected ground-water-quality data in Montana

    SciTech Connect

    Davis, R.E.; Rogers, G.D.

    1984-09-01

    This study was conducted to assess the existing, computer-accessible, ground-water-quality data for Montana. All known sources of ground-water-quality data were reviewed. Although the estimated number of analyses exceeds 25,000, more than three-fourths of the data were not suitable for this study. The only data used were obtained from the National Water Data Storage and Retrieval System (WATSTORE) of the US Geological Survey, because the chemical analyses generally are complete, have an assigned geohydrologic unit or source of water, and are accessible by computer. The data were assessed by geographic region of the State because of climatic and geologic differences. These regions consist of the eastern plains region and the western mountainous region. Within each region, the data were assessed according to geohydrologic unit. The number and areal distribution of data sites for some groupings of units are inadequate to be representative, particularly for groupings stratigraphically below the Upper Cretaceous Fox Hills Sandstone and Hell Creek Formation in the eastern region and for Quaternary alluvium, terrace deposits, glacial deposits, and associated units in the western region. More than one-half the data for the entire State are for the Tertiary Wasatch, Fort Union, and associated units in the eastern region. The results of statistical analyses of data in WATSTORE indicate that the median dissolved-solids concentration for the groupings of geohydrologic units ranges from about 400 to 5000 milligrams per liter in the eastern region and from about 100 to 200 milligrams per liter in the western region. Concentrations of most trace constituents do not exceed the primary drinking-water standards of the US Environmental Protection Agency. The data in WATSTORE for organic constituents presently are inadequate to detect any organic effects of man's activities on ground-water quality. 26 figs., 79 tabs.

  2. Assessing the quality of topography from stereo-photoclinometry

    NASA Astrophysics Data System (ADS)

    Barnouin, O.; Gaskell, R.; Kahn, E.; Ernst, C.; Daly, M.; Bierhaus, E.; Johnson, C.; Clark, B.; Lauretta, D.

    2014-07-01

    Stereo-photoclinometry (SPC) has been used extensively to determine the shape and topography of various asteroids from image data. This technique will be used as one of two main approaches for determining the shape and topography of the asteroid Bennu, the target of the Origins Spectral Interpretation Resource Identification Security-Regolith Explorer (OSIRIS-REx) mission. The purpose of this study is to evaluate the quality of SPC products derived from the Near-Earth Asteroid Rendezvous (NEAR) mission, whose suite of imaging data resembles that to be collected by OSIRIS-REx. We make use of the NEAR laser range-finder (NLR) to independently assess SPC's accuracy and precision.

  3. Quality Assessment of Vertical Angular Deviations for Photometer Calibration Benches

    NASA Astrophysics Data System (ADS)

    Silva Ribeiro, A.; Costa Santos, A.; Alves Sousa, J.; Forbes, A. B.

    2015-02-01

Lighting, both natural and electric, constitutes one of the most important aspects of human life, allowing us to see and perform our daily tasks in outdoor and indoor environments. The safety aspects of lighting are self-evident in areas such as road lighting, urban lighting and indoor lighting. The use of photometers to measure lighting levels requires traceability obtained in accredited laboratories, which must provide an associated uncertainty. It is therefore relevant to study the impact of known uncertainty sources, such as the vertical angular deviation of photometer calibration benches, in order to define criteria for their quality assessment.

  4. Watershed-Scale Soil Quality Assessment: Assessing Reasons for Poor Canopy Development in Corn

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil quality assessment is a critical component in understanding the long-term effects of soil and crop management practices within agricultural watersheds. In the South Fork of the Iowa River Watershed, an aerial survey was conducted during the summer of 2006, and fields that were planted to corn a...

  5. Quality Assessment of University Studies as a Service: Dimensions and Criteria

    ERIC Educational Resources Information Center

    Pukelyte, Rasa

    2010-01-01

    This article reviews a possibility to assess university studies as a service. University studies have to be of high quality both in their content and in the administrative level. Therefore, quality of studies as a service is an important constituent part of study quality assurance. When assessing quality of university studies as a service, it is…

  6. Spatial assessment of groundwater quality based on minor ions

    NASA Astrophysics Data System (ADS)

    Karthikeyan, B.; Elango, L.

    2011-12-01

Use of water for domestic, agricultural and industrial purposes depends on the desirable range of concentrations of various ions. Because the suitability of groundwater for different uses depends on the concentrations of several ions, delineation of a region having groundwater of suitable quality relies on integrating the quality of groundwater with respect to each ion. This can be accomplished with the aid of advanced tools such as GIS (geographical information systems). This study was carried out with the objective of assessing groundwater quality based on EC (electrical conductivity), fluoride, bromide and nitrate using GIS techniques, and the regions requiring attention for groundwater treatment were identified in a part of Nalgonda district, Andhra Pradesh, southern India. Forty-five groundwater samples were collected and their EC, fluoride, nitrate and bromide concentrations were analysed. Based on EC, groundwater was not suitable for consumption in 6.6% of the samples. Fluoride, nitrate and bromide concentrations were not permissible as per BIS and WHO standards in 57%, 22% and 11% of the samples, respectively. The areas having groundwater suitable or unsuitable for domestic use were delineated using GIS. The samples collected from 69% of the locations exceeded the desirable limit for drinking for at least one parameter. The groundwater was unsuitable for domestic use in the northeastern and southeastern parts of the area. The source of the excess concentration is different for each parameter, so suitable collective measures are needed to improve groundwater quality. Among the various options available for restoring groundwater quality, artificial recharge of groundwater by rainwater harvesting would be suitable for reducing the concentrations of all ions in this area.
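The exceedance screening described above reduces to comparing each sample against per-parameter limits. A minimal sketch with invented sample values and assumed limit figures (not the exact BIS/WHO thresholds used in the study):

```python
# Assumed drinking-water limits (mg/L) for illustration only.
LIMITS = {"fluoride": 1.5, "nitrate": 45.0, "bromide": 1.0}

# Invented analyses for four hypothetical sampling locations.
samples = [
    {"fluoride": 2.1, "nitrate": 12.0, "bromide": 0.4},
    {"fluoride": 0.8, "nitrate": 60.0, "bromide": 0.2},
    {"fluoride": 1.9, "nitrate": 30.0, "bromide": 1.6},
    {"fluoride": 0.6, "nitrate": 10.0, "bromide": 0.3},
]

def exceedance(samples, limits):
    """Fraction of samples exceeding the limit, per parameter."""
    n = len(samples)
    return {p: sum(s[p] > lim for s in samples) / n for p, lim in limits.items()}

rates = exceedance(samples, LIMITS)

# A location is unsuitable if any parameter exceeds its limit.
unsuitable = sum(any(s[p] > lim for p, lim in LIMITS.items()) for s in samples)
```

In a GIS workflow these per-location flags would then be interpolated into suitability maps.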

  7. [External quality assessment for clinical microbiology and good laboratory management].

    PubMed

    Kumasaka, K

    1998-02-01

The Tokyo Metropolitan external quality assessment (EQA) program has revealed some serious problems in private independent microbiology laboratories in Tokyo since 1982. Poor performance in the EQA surveys was closely related to poor laboratory management, the type of training and experience of the medical technologists or technicians, and the supervisory ability of the consultant physicians in independent laboratories. Social factors impede reform of the quality assurance of clinical microbiology. Such factors include the poor infrastructure of continuing education for small private laboratories, closure of central clinical laboratories in hospitals and outsourcing of laboratory tests due to restructuring in response to economic problems, and the limited number of clinical pathologists certified by the Japan Society of Clinical Pathology (JSCP). Therefore, the Tokyo Metropolitan EQA Scheme is still confidential and its main role is educational. Good two-way communication between participants and the organizers' clinical pathologists is essential if the quality of laboratory tests is to be improved. The new JSCP edition of the postgraduate training requirement in clinical pathology includes "Laboratory Administration and Management". Good laboratory management (GLM) is an increasingly important component of good laboratory practice. The practice activities of clinical pathologists must include general management in addition to exercising their specialized knowledge in medicine and technology. Whereas the leadership of a good clinical pathologist provides the direction of where a good laboratory is going, good management provides the steps of how to get there. I believe quality system models from business and industry may provide strong guidance in building a quality system for the good laboratory that will endure into the next century. PMID:9528335

  8. Use of Landsat data to assess waterfowl habitat quality

    USGS Publications Warehouse

    Colwell, J.E.; Gilmer, D.S.; Work, E.A., Jr.; Rebel, D.

    1978-01-01

    This report is a discussion of the feasibility of using Landsat data to generate information of value for effective management of migratory waterfowl. Effective management of waterfowl includes regulating waterfowl populations through hunting regulations and habitat management. This report examines the ability to analyze annual production by monitoring the number of breeding and brood ponds that are present, and the ability to assess waterfowl habitat based on the various relationships between ponds and the surrounding upland terrain types. The basic conclusions of this report are that: 1) Landsat data can be used to improve estimates of pond numbers which may be correlated with duck production; and 2) Landsat data can be used to generate information on terrain types which subsequently can be used to assess relative waterfowl habitat quality.

  9. Use of Thematic Mapper for water quality assessment

    NASA Technical Reports Server (NTRS)

    Horn, E. M.; Morrissey, L. A.

    1984-01-01

The evaluation of simulated TM data obtained on an ER-2 aircraft at twenty-five predesignated sample sites for mapping water quality factors such as conductivity, pH, suspended solids, turbidity, temperature, and depth is discussed. Using a multiple regression over the seven TM bands, an equation is developed for suspended solids. TM bands 1, 2, 3, 4, and 6 are used with the logarithm of conductivity in a multiple regression. The regression equations are assessed for a high coefficient of determination (R-squared) and statistical significance. Confidence intervals about the mean regression point are calculated in order to assess the robustness of the regressions used for mapping conductivity, turbidity, and suspended solids, and cross validation is conducted by regressing random subsamples of sites and comparing the resultant range of R-squared.
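The regression step can be illustrated in miniature. The sketch below fits a single synthetic band to log-transformed suspended solids and reports R-squared; the study itself used a multiple regression over seven TM bands, and all values here are invented:

```python
import math

# Synthetic data: TM band-3 radiance vs. suspended solids (mg/L).
band3 = [12.0, 15.0, 19.0, 24.0, 30.0]
solids = [5.0, 9.0, 20.0, 45.0, 110.0]

# Log-transform the response, as is common for concentration data.
y = [math.log10(s) for s in solids]

# Ordinary least squares for a single predictor.
n = len(band3)
mx = sum(band3) / n
my = sum(y) / n
sxx = sum((x - mx) ** 2 for x in band3)
sxy = sum((x - mx) * (yi - my) for x, yi in zip(band3, y))
slope = sxy / sxx
intercept = my - slope * mx

# Coefficient of determination, the statistic the study screens on.
ss_res = sum((yi - (intercept + slope * x)) ** 2 for x, yi in zip(band3, y))
ss_tot = sum((yi - my) ** 2 for yi in y)
r_squared = 1 - ss_res / ss_tot
```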

  10. Assessment of porous asphalt pavement performance: hydraulics and water quality

    NASA Astrophysics Data System (ADS)

    Briggs, J. F.; Ballestero, T. P.; Roseen, R. M.; Houle, J. J.

    2005-05-01

The objective of this study is to focus on the water quality treatment and hydraulic performance of a porous asphalt pavement parking lot in Durham, New Hampshire. The site was constructed in October 2004 to assess the suitability of porous asphalt pavement for stormwater management in cold climates. The facility consists of a 4-inch asphalt open-graded friction course layer overlying a high-porosity sand and gravel base. This base serves as a storage reservoir between storms, from which stored water slowly infiltrates to groundwater. Details on the design, construction, and cost of the facility will be presented. The porous asphalt pavement is qualitatively monitored for signs of distress, especially those due to cold climate stresses like plowing, sanding, salting, and freeze-thaw cycles. Life cycle predictions are discussed. Surface infiltration rates are measured with a constant head device built specifically to test high infiltration capacity pavements. The test measures infiltration rates in a single 4-inch diameter column temporarily sealed to the pavement at its base. A surface inundation test, as described by Bean, is also conducted as a basis for comparison of results (Bean, 2004). These tests assess infiltration rates soon after installation, throughout the winter, during snowmelt, after a winter of salting, sanding, and plowing, and after vacuuming in the spring. Frost penetration into the subsurface reservoir is monitored with a frost gauge. Hydrologic effects of the system are evaluated. Water levels are monitored in the facility and in surrounding wells with continuously logging pressure transducers. The 6-inch underdrain pipe that conveys excess water in the subsurface reservoir to a riprap pad is also continuously monitored for flow. Since porous asphalt pavement systems infiltrate surface water into the subsurface, it is important to assess whether water quality treatment performance in the subsurface reservoir is adequate.
The assumed influent water quality is

  11. Agreement in Quality of Life Assessment between Adolescents with Intellectual Disability and Their Parents

    ERIC Educational Resources Information Center

    Golubovic, Spela; Skrbic, Renata

    2013-01-01

    Intellectual disability affects different aspects of functioning and quality of life, as well as the ability to independently assess the quality of life itself. The paper examines the agreement in the quality of life assessments made by adolescents with intellectual disability and their parents compared with assessments made by adolescents without…

  12. 42 CFR 438.240 - Quality assessment and performance improvement program.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Quality assessment and performance improvement... HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS MANAGED CARE Quality Assessment and Performance Improvement Measurement and Improvement Standards § 438.240 Quality assessment and...

  13. 42 CFR 460.136 - Internal quality assessment and performance improvement activities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the health and safety of a PACE participant. (b) Quality assessment and performance improvement...) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460... 42 Public Health 4 2010-10-01 2010-10-01 false Internal quality assessment and...

  14. 42 CFR 460.136 - Internal quality assessment and performance improvement activities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the health and safety of a PACE participant. (b) Quality assessment and performance improvement...) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460... 42 Public Health 4 2012-10-01 2012-10-01 false Internal quality assessment and...

  15. 42 CFR 460.136 - Internal quality assessment and performance improvement activities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... the health and safety of a PACE participant. (b) Quality assessment and performance improvement...) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460... 42 Public Health 4 2014-10-01 2014-10-01 false Internal quality assessment and...

  16. 42 CFR 460.136 - Internal quality assessment and performance improvement activities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the health and safety of a PACE participant. (b) Quality assessment and performance improvement...) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460... 42 Public Health 4 2013-10-01 2013-10-01 false Internal quality assessment and...

  17. Integration of semi-automatic detection and sediment connectivity assessment for the characterization of sediment source areas in mountain catchments

    NASA Astrophysics Data System (ADS)

    Crema, Stefano; Bossi, Giulia; Marchi, Lorenzo; Cavalli, Marco

    2016-04-01

Identifying areas that are directly delivering sediment to the channel network or to a catchment outlet is of great importance for a sound characterization of sediment dynamics and for assessing sediment budgets. We present an integration of remote sensing analysis techniques to characterize the effective sediment contributing area, that is, the sub-portion of the catchment in which sediment is effectively routed towards the catchment outlet. A semi-automatic mapping of active sediment source areas is carried out via image analysis techniques. To this purpose, satellite multispectral images and aerial orthophotos are considered for the analysis. Several algorithms for feature extraction are applied, and the maps obtained are compared with an expert-based sediment source mapping derived from photointerpretation and field surveys. The image-based analysis is additionally integrated with a topography-driven filtering procedure. Thanks to the availability of high-resolution, LiDAR-derived Digital Terrain Models, it is possible to work at a fine scale and to compute morphometric parameters (e.g., slope, roughness, curvature) suitable for refining the image analysis. In particular, information on local topography was integrated with the image-based analysis to discriminate between rocky outcrops and sediment sources, thus improving the overall consistency of the procedure. The sediment source areas are then combined with the output of a connectivity assessment. A topography-based index of sediment connectivity is computed for the analyzed areas in order to better estimate the effective sediment contributing area and to obtain a ranking of the source areas in the studied catchments. The study methods have been applied in catchments of the Eastern Italian Alps, where a detailed census of sediment source areas is available. The comparison of the results of image analysis with expert-based sediment source mapping shows a satisfactory agreement between the two approaches.
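One of the morphometric filters mentioned above can be sketched simply: surface roughness as the standard deviation of elevation in a 3x3 moving window over a DTM grid. The grid below is a toy example (with one artificial spike), not real LiDAR data:

```python
import statistics

# Toy 4x4 DTM (elevations in metres); 112.0 is an artificial spike
# standing in for a rocky outcrop on otherwise smooth terrain.
dtm = [
    [100.0, 101.0, 102.0, 103.0],
    [100.5, 101.5, 112.0, 103.5],
    [101.0, 102.0, 103.0, 104.0],
    [101.5, 102.5, 103.5, 104.5],
]

def roughness(grid, r, c):
    """Std. dev. of elevation in the 3x3 window centred on (r, c)."""
    window = [
        grid[i][j]
        for i in range(max(r - 1, 0), min(r + 2, len(grid)))
        for j in range(max(c - 1, 0), min(c + 2, len(grid[0])))
    ]
    return statistics.pstdev(window)

# High-roughness cells flag candidate rocky outcrops, which the
# procedure separates from smoother sediment source areas.
rough_map = [[roughness(dtm, r, c) for c in range(4)] for r in range(4)]
```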

  18. Transcription factor motif quality assessment requires systematic comparative analysis

    PubMed Central

    Kibet, Caleb Kipkurui; Machanick, Philip

    2016-01-01

    Transcription factor (TF) binding site prediction remains a challenge in gene regulatory research due to degeneracy and potential variability in binding sites in the genome. Dozens of algorithms designed to learn binding models (motifs) have generated many motifs available in research papers with a subset making it to databases like JASPAR, UniPROBE and Transfac. The presence of many versions of motifs from the various databases for a single TF and the lack of a standardized assessment technique makes it difficult for biologists to make an appropriate choice of binding model and for algorithm developers to benchmark, test and improve on their models. In this study, we review and evaluate the approaches in use, highlight differences and demonstrate the difficulty of defining a standardized motif assessment approach. We review scoring functions, motif length, test data and the type of performance metrics used in prior studies as some of the factors that influence the outcome of a motif assessment. We show that the scoring functions and statistics used in motif assessment influence ranking of motifs in a TF-specific manner. We also show that TF binding specificity can vary by source of genomic binding data. We also demonstrate that information content of a motif is not in isolation a measure of motif quality but is influenced by TF binding behaviour. We conclude that there is a need for an easy-to-use tool that presents all available evidence for a comparative analysis. PMID:27092243
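One of the motif statistics discussed above, information content, can be computed directly from a position weight matrix. The PWM below is a made-up 3-position motif for illustration, not one from JASPAR, UniPROBE or Transfac:

```python
import math

# Invented PWM over the DNA alphabet; each row is one motif position.
pwm = [
    {"A": 0.97, "C": 0.01, "G": 0.01, "T": 0.01},  # near-certain A
    {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},  # uninformative
    {"A": 0.05, "C": 0.45, "G": 0.45, "T": 0.05},  # C/G preference
]

def information_content(pwm):
    """Sum over positions of 2 - H(p), in bits, uniform background."""
    ic = 0.0
    for position in pwm:
        entropy = -sum(p * math.log2(p) for p in position.values() if p > 0)
        ic += 2.0 - entropy
    return ic

ic = information_content(pwm)
```

As the abstract notes, a high IC alone does not make a motif good; the uniform second position contributes zero bits regardless of how well the motif predicts binding.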

  19. Assessment of Functional Status and Quality of Life in Claudication

    PubMed Central

    Mays, Ryan J.; Casserly, Ivan P.; Kohrt, Wendy M.; Ho, P. Michael; Hiatt, William R.; Nehler, Mark R.; Regensteiner, Judith G.

    2012-01-01

Background Treadmill walking is commonly used to evaluate walking impairment and the efficacy of treatment for intermittent claudication (IC) in clinical and research settings. Although this is an important measure, it does not provide information about how patients perceive the effects of their treatments on more global measures of health-related quality of life (HRQOL). Methods PubMed/Medline was searched to find publications about the most commonly used questionnaires to assess functional status and/or general and disease-specific HRQOL in patients with peripheral artery disease (PAD) who experience IC. Inclusion criteria for questionnaires were based on the existence of a body of literature in symptomatic PAD. Results Six general questionnaires and seven disease-specific questionnaires are included, with details about the number of domains covered and how each tool is scored. The Medical Outcomes Study Short Form 36-item questionnaire and the Walking Impairment Questionnaire are currently the most used general and disease-specific questionnaires at baseline and following treatment for IC, respectively. Conclusions Tools that assess functional status and HRQOL are important in both clinical and research settings for assessing treatment efficacy from the patient perspective. Therefore, assessing HRQOL in addition to treadmill-measured walking ability provides insight into the effects of treatments on patient outcomes and may help guide therapy. PMID:21334172

  20. Tennessee Star-Quality Child Care Program: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Tennessee's Star-Quality Child Care Program prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4)…

  1. Louisiana Quality Start Child Care Rating System: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Louisiana's Quality Start Child Care Rating System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs;…

  2. Indoor Air Quality Assessment of the San Francisco Federal Building

    SciTech Connect

    Apte, Michael; Bennett, Deborah H.; Faulkner, David; Maddalena, Randy L.; Russell, Marion L.; Spears, Michael; Sullivan, Douglas P; Trout, Amber L.

    2008-07-01

An assessment of the indoor air quality (IAQ) of the San Francisco Federal Building (SFFB) was conducted on May 12 and 14, 2009 at the request of the General Services Administration (GSA). The purpose of the assessment was for a general screening of IAQ parameters typically indicative of well functioning building systems. One naturally ventilated space and one mechanically ventilated space were studied. In both zones, the levels of indoor air contaminants, including CO2, CO, particulate matter, volatile organic compounds, and aldehydes, were low, relative to reference exposure levels and air quality standards for comparable office buildings. We found slightly elevated levels of volatile organic compounds (VOCs) including two compounds often found in "green" cleaning products. In addition, we found two industrial solvents at levels higher than typically seen in office buildings, but the levels were not sufficient to be of a health concern. The ventilation rates in the two study spaces were high by any standard. Ventilation rates in the building should be further investigated and adjusted to be in line with the building design. Based on our measurements, we conclude that the IAQ is satisfactory in the zone we tested, but IAQ may need to be re-checked after the ventilation rates have been lowered.

  3. Quality assessment of perinatal and infant postmortem examinations in Turkey.

    PubMed

    Pakis, Isil; Karapirli, Mustafa; Karayel, Ferah; Turan, Arzu; Akyildiz, Elif; Polat, Oguz

    2008-09-01

An autopsy examination is important for identifying the cause of death and as a means of auditing clinical and forensic practice; in perinatal and infantile age groups, however, determining the cause of death presents particular difficulties. In this study, 15,640 autopsies recorded during the years 2000-2004 in the Mortuary Department of the Council of Forensic Medicine were reviewed. Autopsy findings of 510 cases between 20 completed weeks of gestation and 1 year of age were analyzed retrospectively. The quality of each necropsy report was assessed using a modification of the scoring system described by Rushton, which objectively scores aspects identified by the Royal College of Pathologists as being part of a necropsy. The cases were subdivided into three groups according to age: intrauterine deaths accounted for 31% (158 cases), neonatal deaths for 24% (123 cases), and infantile deaths for 45% (229 cases) of all cases. Scores for necropsy report quality were above the minimum acceptable score in 44% of intrauterine deaths and 88% of neonatal and infantile deaths. PMID:18637051

  4. Quality-control design for surface-water sampling in the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Mueller, David K.; Martin, Jeffrey D.; Lopes, Thomas J.

    1997-01-01

    The data-quality objectives of the National Water-Quality Assessment Program include estimating the extent to which contamination, matrix effects, and measurement variability affect interpretation of chemical analyses of surface-water samples. The quality-control samples used to make these estimates include field blanks, field matrix spikes, and replicates. This report describes the design for collection of these quality-control samples in National Water-Quality Assessment Program studies and the data management needed to properly identify these samples in the U.S. Geological Survey's national data base.
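Two standard calculations implied by this quality-control design are percent recovery for a field matrix spike and relative percent difference (RPD) for a replicate pair. A minimal sketch with invented concentration values:

```python
def spike_recovery(spiked_result, background, spike_added):
    """Percent recovery of the known amount added to a matrix spike."""
    return 100.0 * (spiked_result - background) / spike_added

def relative_percent_difference(a, b):
    """RPD between a sample and its replicate, as a percentage."""
    return 100.0 * abs(a - b) / ((a + b) / 2.0)

# Illustrative values: an unspiked sample at 4.0 units, spiked with
# 10.0 units, measured at 14.2; a replicate pair at 5.0 and 5.4.
recovery = spike_recovery(spiked_result=14.2, background=4.0, spike_added=10.0)
rpd = relative_percent_difference(5.0, 5.4)
```

Recovery near 100% suggests little matrix effect; a small RPD suggests low measurement variability, the two quantities the design is meant to estimate.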

  5. Can water quality of tubewells be assessed without chemical testing?

    NASA Astrophysics Data System (ADS)

    Hoque, Mohammad A.; Butler, Adrian P.

    2016-04-01

    Arsenic is one of the major pollutants found in aquifers on a global scale. The screening of tubewells for arsenic has helped many people to avoid drinking from highly polluted wells in the Bengal Delta (West Bengal and Bangladesh). However, there are still many millions of tubewells in Bangladesh yet to be tested, and a substantial proportion of these are likely to contain excessive arsenic. Due to the level of poverty and lack of infrastructure, it is unlikely that the rest of the tubewells will be tested quickly. However, water quality assessment without needing a chemical testing may be helpful in this case. Studies have found that qualitative factors, such as staining in the tubewell basement and/or on utensils, can indicate subsurface geology and water quality. The science behind this staining is well established, red staining is associated with iron reduction leading to release of arsenic whilst black staining is associated with manganese reduction (any release of arsenic due to manganese reduction is sorbed back on the, yet to be reduced, iron), whereas mixed staining may indicate overlapping manganese and iron reduction at the tubewell screen. Reduction is not uniform everywhere and hence chemical water quality including dissolved arsenic varies from place to place. This is why coupling existing tubewell arsenic information with user derived staining data could be useful in predicting the arsenic status at a particular site. Using well location, depth, along with colour of staining, an assessment of both good (nutrients) and bad (toxins and pathogens) substances in the tubewell could be provided. Social-network technology, combined with increasing use of smartphones, provides a powerful opportunity for both sharing and providing feedback to the user. Here we outline how a simple digital application can couple the reception both qualitative and quantitative tubewell data into a centralised interactive database and provide manipulated feedback to an

  6. Spatial assessment of Langat River water quality using chemometrics.

    PubMed

    Juahir, Hafizan; Zain, Sharifuddin Md; Aris, Ahmad Zaharin; Yusoff, Mohd Kamil; Mokhtar, Mazlin Bin

    2010-01-01

The present study deals with the assessment of Langat River water quality using chemometric approaches such as cluster and discriminant analysis coupled with an artificial neural network (ANN). The data used in this study were collected from seven monitoring stations under the river water quality monitoring program of the Department of Environment (DOE) from 1995 to 2002. Twenty-three physico-chemical parameters were involved in this analysis. Cluster analysis successfully grouped the Langat River into three major clusters, namely regions of high, moderate and low pollution. Discriminant analysis identified the seven most significant parameters contributing to the high variation of Langat River water quality, namely dissolved oxygen, biological oxygen demand, pH, ammoniacal nitrogen, chlorine, E. coli, and coliform. Discriminant analysis also plays an important role as an input-selection step for an ANN for spatial prediction (pollution regions). The ANN showed better prediction performance in discriminating the regional areas, with an excellent percentage of correct classification compared to discriminant analysis. Multivariate analysis coupled with ANN is proposed, which could help in decision making and problem solving in the local environment. PMID:20082024
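The classification step can be illustrated with a much simpler stand-in: assigning a station to a pollution region by nearest centroid in standardized parameter space. The centroids and sample below are invented; the study itself used discriminant analysis and an ANN over 23 parameters:

```python
import math

# Invented cluster centroids in standardized (DO, BOD, NH3-N) space.
centroids = {
    "less polluted": (1.0, -0.8, -0.9),
    "moderately polluted": (0.0, 0.1, 0.0),
    "highly polluted": (-1.1, 1.0, 1.2),
}

def classify(sample):
    """Return the region whose centroid is nearest (Euclidean)."""
    return min(
        centroids,
        key=lambda region: math.dist(sample, centroids[region]),
    )

# A station with low DO and high BOD/NH3-N lands in the polluted region.
region = classify((-0.9, 0.8, 1.1))
```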

  7. Visual quality assessment of electrochromic and conventional glazings

    SciTech Connect

    Moeck, M.; Lee, E.S.; Rubin, M.D.; Sullivan, R.; Selkowitz, S.E.

    1996-09-01

Variable transmission, "switchable" electrochromic glazings are compared to conventional static glazings using computer simulations to assess the daylighting quality of a commercial office environment where paper and computer tasks are performed. RADIANCE simulations were made for a west-facing commercial office space under clear and overcast sky conditions. This visualization tool was used to model different glazing types, to compute luminance and illuminance levels, and to generate a parametric set of photorealistic images of typical interior views at various times of the day and year. Privacy and visual display terminal (VDT) visibility are explored. Electrochromic glazings result in a more consistent glare-free daylit environment compared to their static counterparts. However, if the glazing is controlled to minimize glare or to maintain low interior daylight levels for critical visual tasks (e.g., VDT), occupants may object to the diminished quality of the outdoor view due to its low transmission (Tv = 0.08) during those hours. RADIANCE proved to be a very powerful tool to better understand some of the design tradeoffs of this emerging glazing technology. The ability to draw specific conclusions about the relative value of different technologies or control strategies is limited by the lack of agreed-upon criteria or standards for lighting quality and visibility.

  8. Chip Design Process Optimization Based on Design Quality Assessment

    NASA Astrophysics Data System (ADS)

    Häusler, Stefan; Blaschke, Jana; Sebeke, Christian; Rosenstiel, Wolfgang; Hahn, Axel

    2010-06-01

    Nowadays, the managing of product development projects is increasingly challenging. Especially the IC design of ASICs with both analog and digital components (mixed-signal design) is becoming more and more complex, while the time-to-market window narrows at the same time. Still, high quality standards must be fulfilled. Projects and their status are becoming less transparent due to this complexity. This makes the planning and execution of projects rather difficult. Therefore, there is a need for efficient project control. A main challenge is the objective evaluation of the current development status. Are all requirements successfully verified? Are all intermediate goals achieved? Companies often develop special solutions that are not reusable in other projects. This makes the quality measurement process itself less efficient and produces too much overhead. The method proposed in this paper is a contribution to solve these issues. It is applied at a German design house for analog mixed-signal IC design. This paper presents the results of a case study and introduces an optimized project scheduling on the basis of quality assessment results.

  9. Smolt Quality Assessment of Spring Chinook Salmon : Annual Report.

    SciTech Connect

    Zaugg, Waldo S.

    1991-04-01

    The physiological development and physiological condition of spring chinook salmon are being studied at several hatcheries in the Columbia River Basin. The purpose of the study is to determine whether any or several smolt indices can be related to adult recovery and be used to improve hatchery effectiveness. The tests conducted in 1989 on juvenile chinook salmon at Dworshak, Leavenworth, and Warm Springs National Fish Hatcheries, and the Oregon State Willamette Hatchery assessed saltwater tolerance, gill ATPase, cortisol, insulin, thyroid hormones, secondary stress, fish morphology, metabolic energy stores, immune response, blood cell numbers, and plasma ion concentrations. The study showed that smolt development may have occurred before the fish were released from the Willamette Hatchery, but not from the Dworshak, Leavenworth, or Warm Springs Hatcheries. These results will be compared to adult recovery data when they become available, to determine which smolt quality indices may be used to predict adult recovery. The relative rankings of smolt quality at the different hatcheries do not necessarily reflect the competency of the hatchery managers and staff, who have shown a high degree of professionalism and expertise in fish rearing. We believe that the differences in smolt quality are due to the interaction of genetic and environmental factors. One aim of this research is to identify factors that influence smolt development and that may be controlled through fish husbandry to regulate smolt development. 35 refs., 27 figs., 5 tabs.

  10. State-Level Cancer Quality Assessment and Research

    PubMed Central

    Lipscomb, Joseph; Gillespie, Theresa W.

    2016-01-01

    Over a decade ago, the Institute of Medicine called for a national cancer data system in the United States to support quality-of-care assessment and improvement, including research on effective interventions. Although considerable progress has been achieved in cancer quality measurement and effectiveness research, the nation still lacks a population-based data infrastructure for accurately identifying cancer patients and tracking services and outcomes over time. For compelling reasons, the most effective pathway forward may be the development of state-level cancer data systems, in which central registry data are linked to multiple public and private secondary sources. These would include administrative/claims files from Medicare, Medicaid, and private insurers. Moreover, such a state-level system would promote rapid learning by encouraging adoption of near-real-time reporting and feedback systems, such as the Commission on Cancer’s new Rapid Quality Reporting System. The groundwork for such a system is being laid in the state of Georgia, and similar work is advancing in other states. The pace of progress depends on the successful resolution of issues related to the application of information technology, financing, and governance. PMID:21799333

  11. Assessing hippocampal development and language in early childhood: Evidence from a new application of the Automatic Segmentation Adapter Tool.

    PubMed

    Lee, Joshua K; Nordahl, Christine W; Amaral, David G; Lee, Aaron; Solomon, Marjorie; Ghetti, Simona

    2015-11-01

    Volumetric assessments of the hippocampus and other brain structures during childhood provide useful indices of brain development and correlates of cognitive functioning in typically and atypically developing children. Automated methods such as FreeSurfer promise efficient and replicable segmentation, but may include errors that trained manual tracers avoid. A recently devised automated correction tool that uses a machine learning algorithm to remove systematic errors, the Automatic Segmentation Adapter Tool (ASAT), was capable of substantially improving the accuracy of FreeSurfer segmentations in an adult sample [Wang et al., 2011], but the utility of ASAT has not been examined in pediatric samples. In Study 1, the validity of FreeSurfer and ASAT-corrected hippocampal segmentations was examined in 20 typically developing children and 20 children with autism spectrum disorder aged 2 and 3 years. We showed that while neither FreeSurfer nor ASAT accuracy differed by disorder or age, the accuracy of ASAT-corrected segmentations was substantially better than that of FreeSurfer segmentations in every case, using as few as 10 training examples. In Study 2, we applied ASAT to 89 typically developing children aged 2 to 4 years to examine relations between hippocampal volume, age, sex, and expressive language. Girls had smaller hippocampi overall, and in the left hippocampus this difference was larger in older than in younger girls. Expressive language ability was greater in older children, and this difference was larger in those with larger hippocampi, bilaterally. Overall, this research shows that ASAT is highly reliable and useful for examinations relating behavior to hippocampal structure. PMID:26279309

  13. A procedure for extending input selection algorithms to low quality data in modelling problems with application to the automatic grading of uploaded assignments.

    PubMed

    Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis; Couso, Inés; Sánchez, Luciano

    2014-01-01

    When selecting relevant inputs in modeling problems with low quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows different crisp feature selection algorithms to be extended to vague data. The partial knowledge about the rank of each feature is modelled by means of a possibility distribution, and a ranking method is then applied to sort these distributions. It is shown that this technique makes the most of the available information in some vague datasets. The approach is demonstrated in a real-world application: in the context of massive online computer science courses, methods are sought for automatically grading students through code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967

  14. An integrated modeling process to assess water quality for watersheds

    NASA Astrophysics Data System (ADS)

    Bhuyan, Samarjyoti

    2001-07-01

    An integrated modeling process has been developed that combines remote sensing, Geographic Information Systems (GIS), and the Agricultural NonPoint Source Pollution (AGNPS) hydrologic model to assess the water quality of a watershed. Remotely sensed Landsat Thematic Mapper (TM) images were used to obtain land cover information for a watershed, including sub-classes of rangeland and wheat land based on estimates of vegetative cover and crop residue, respectively. AGNPS model input parameters, including the Universal Soil Loss Equation's (USLE) cropping factors (C-factors), were assigned to the land cover classes. The AGNPS-ARC/INFO interface was used to extract input parameters from several GIS layers for the AGNPS model during several selected storm events for the sub-watersheds. Measured surface water quantity and quality data for these storm events were obtained from U.S. Geological Survey (USGS) gaging stations. Base flow separation was done to remove the base flow fraction of water and total suspended sediment (TSS), total nitrogen (total-N), and total phosphorus (total-P) from the total stream flow. Continuous antecedent moisture content ratios were developed for the sub-watersheds during the storm events and were used to adjust the Soil Conservation Service Curve Numbers (SCS-CN) of the various land covers. A relationship was developed between storm amounts and estimated energy intensity (EI) values using a probability method (Koelliker and Humbert, 1989), and the EI values were used in building the AGNPS model input files. Several model parameters were calibrated against the measured water quality data, and the model was then run on different sub-watersheds to evaluate the modeling process. This modeling process was found to be effective for smaller sub-watersheds having adequate rainfall data. However, in the case of large sub-watersheds with substantial variations in rainfall and land cover, this process was less satisfactory. This integrated modeling process will

  15. Computational mouse atlases and their application to automatic assessment of craniofacial dysmorphology caused by the Crouzon mutation Fgfr2C342Y

    PubMed Central

    Ólafsdóttir, Hildur; Darvann, Tron A; Hermann, Nuno V; Oubel, Estanislao; Ersbøll, Bjarne K; Frangi, Alejandro F; Larsen, Per; Perlyn, Chad A; Morriss-Kay, Gillian M; Kreiborg, Sven

    2007-01-01

    Crouzon syndrome is characterized by premature fusion of sutures and synchondroses. Recently, the first mouse model of the syndrome was generated, having the mutation Cys342Tyr in Fgfr2c, equivalent to the most common human Crouzon/Pfeiffer syndrome mutation. In this study, a set of micro-computed tomography (micro-CT) scans of the skulls of wild-type mice and Crouzon mice was analysed with respect to the dysmorphology caused by Crouzon syndrome. A computational craniofacial atlas was built automatically from the set of wild-type mouse micro-CT volumes using (1) affine and (2) non-rigid image registration. Subsequently, the atlas was deformed to match each subject from the two groups of mice. The accuracy of these registrations was measured by a comparison of manually placed landmarks from two different observers and automatically assessed landmarks. Both of the automatic approaches were within the interobserver accuracy for normal specimens, and the non-rigid approach was within the interobserver accuracy for the Crouzon specimens. Four linear measurements, skull length, height and width and interorbital distance, were carried out automatically using the two different approaches. Both automatic approaches assessed the skull length, width and height accurately for both groups of mice. The non-rigid approach measured the interorbital distance accurately for both groups, while the affine approach failed to assess this parameter for both groups. Using the full capability of the non-rigid approach, local displacements obtained when registering the non-rigid wild-type atlas to a non-rigid Crouzon mouse atlas were determined on the surface of the wild-type atlas. This revealed a 0.6-mm bending in the nasal region and a 0.8-mm shortening of the zygoma, which are similar to characteristics previously reported in humans. The most striking finding of this analysis was an angulation of approximately 0.6 mm of the cranial base, which has not been reported in humans. Comparing

  16. Microbiological assessment of indoor air quality at different hospital sites.

    PubMed

    Cabo Verde, Sandra; Almeida, Susana Marta; Matos, João; Guerreiro, Duarte; Meneses, Marcia; Faria, Tiago; Botelho, Daniel; Santos, Mateus; Viegas, Carla

    2015-09-01

    Poor hospital indoor air quality (IAQ) may lead to hospital-acquired infections, sick hospital syndrome and various occupational hazards. Air-control measures are crucial for reducing dissemination of airborne biological particles in hospitals. The objective of this study was to perform a survey of bioaerosol quality in different sites in a Portuguese hospital, namely the operating theater (OT), the emergency service (ES) and the surgical ward (SW). Aerobic mesophilic bacterial counts (BCs) and fungal load (FL) were assessed by impaction directly onto tryptic soy agar and malt extract agar supplemented with the antibiotic chloramphenicol (0.05%), respectively, using a MAS-100 air sampler. The ES revealed the highest airborne microbial concentrations (BC range 240-736 CFU/m³; FL range 27-933 CFU/m³), exceeding, at several sampling sites, conformity criteria defined in national legislation [6]. Bacterial concentrations in the SW (BC range 99-495 CFU/m³) and the OT (BC range 12-170 CFU/m³) were under recommended criteria. While fungal levels were below 1 CFU/m³ in the OT, in the SW (range 1-32 CFU/m³) there was a site with indoor fungal concentrations higher than those detected outdoors. Airborne Gram-positive cocci were the most frequent phenotype (88%) detected in the measured bacterial population in all indoor environments. Staphylococcus (51%) and Micrococcus (37%) were dominant among the bacterial genera identified in the present study. Concerning indoor fungal characterization, the prevalent genera were Penicillium (41%) and Aspergillus (24%). Regular monitoring is essential for assessing air control efficiency and for detecting irregular introduction of airborne particles via clothing of visitors and medical staff or carriage by personal and medical materials. Furthermore, microbiological survey data should be used to clearly define specific air quality guidelines for controlled environments in hospital settings. PMID

  17. Incorporating detection tasks into the assessment of CT image quality

    NASA Astrophysics Data System (ADS)

    Scalzetti, E. M.; Huda, W.; Ogden, K. M.; Khan, M.; Roskopf, M. L.; Ogden, D.

    2006-03-01

    The purpose of this study was to compare traditional and task-dependent assessments of CT image quality. Chest CT examinations were obtained with a standard protocol for subjects participating in a lung cancer-screening project. Images were selected for patients whose weight ranged from 45 kg to 159 kg. Six ABR-certified radiologists subjectively ranked these images using a traditional six-point ranking scheme that ranged from 1 (inadequate) to 6 (excellent). Three subtle diagnostic tasks were identified: (1) a lung section containing a sub-centimeter nodule of ground-glass opacity in an upper lung; (2) a mediastinal section with a lymph node of soft tissue density in the mediastinum; (3) a liver section with a rounded low-attenuation lesion in the liver periphery. Each observer was asked to estimate the probability of detecting each type of lesion in the appropriate CT section using a six-point scale ranging from 1 (< 10%) to 6 (> 90%). Traditional and task-dependent measures of image quality were plotted as a function of patient weight. For the lung section, task-dependent evaluations were very similar to those obtained using the traditional scoring scheme, but with larger inter-observer differences. Task-dependent evaluations for the mediastinal section showed no obvious trend with subject weight, whereas the traditional score decreased from ~4.9 for smaller subjects to ~3.3 for larger subjects. Task-dependent evaluations for the liver section showed a decreasing trend from ~4.1 for smaller subjects to ~1.9 for larger subjects, whereas the traditional evaluation had a markedly narrower range of scores. A task-dependent method of assessing CT image quality can be implemented with relative ease and is likely to be more meaningful in the clinical setting.

  18. No-reference image quality assessment in the spatial domain.

    PubMed

    Mittal, Anish; Moorthy, Anush Krishna; Bovik, Alan Conrad

    2012-12-01

    We propose a natural scene statistic-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain. The new model, dubbed blind/referenceless image spatial quality evaluator (BRISQUE) does not compute distortion-specific features, such as ringing, blur, or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of "naturalness" in the image due to the presence of distortions, thereby leading to a holistic measure of quality. The underlying features used derive from the empirical distribution of locally normalized luminances and products of locally normalized luminances under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing it from prior NR IQA approaches. Despite its simplicity, we are able to show that BRISQUE is statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and is highly competitive with respect to all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real time applications. BRISQUE features may be used for distortion-identification as well. To illustrate a new practical application of BRISQUE, we describe how a nonblind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over state-of-the-art methods. A software release of BRISQUE is available online: http://live.ece.utexas.edu/research/quality/BRISQUE_release.zip for public use and evaluation. PMID:22910118
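    The locally normalized luminance coefficients the abstract refers to (often called MSCN coefficients) can be sketched as follows; the Gaussian window width and stabilizing constant are illustrative assumptions, not the authors' exact parameters:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mscn_coefficients(image, sigma=7 / 6, c=1.0):
        """Mean-subtracted contrast-normalized (MSCN) coefficients.

        image: 2-D grayscale luminance array.
        sigma, c: Gaussian window width and stabilizing constant (assumed values).
        """
        image = image.astype(np.float64)
        mu = gaussian_filter(image, sigma)  # local mean
        # local standard deviation (abs() guards against tiny negative round-off)
        sd = np.sqrt(np.abs(gaussian_filter(image ** 2, sigma) - mu ** 2))
        return (image - mu) / (sd + c)

    img = np.random.default_rng(0).uniform(0, 255, (64, 64))
    mscn = mscn_coefficients(img)
    ```

    For natural images the MSCN histogram is approximately Gaussian; distortions change its shape, which BRISQUE-style features (fitted generalized Gaussian parameters) capture.
    
    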

  19. Reliability of Quality Assessments in Research Synthesis: Securing the Highest Quality Bioinformation for HIT.

    PubMed

    Chiappelli, Francesco; Barkhordarian, André; Arora, Rashi; Phi, Linda; Giroux, Amy; Uyeda, Molly; Kung, Jason; Ramchandani, Manisha

    2012-01-01

    Current trends in bio-medicine include research synthesis and the dissemination of bioinformation by means of health (bio)information technology (H[b]IT). Research must secure the validity and reliability of assessment tools that quantify research quality in the pursuit of the best available evidence. Our concerted work in this domain led to the revision of three instruments for that purpose, including the stringent characterization of inter-rater reliability and the coefficient of agreement. It is timely and critical to advance the methodological development of the science of research synthesis by strengthening the reliability of existing measures of research quality in order to ensure H[b]IT efficacy and effectiveness. PMID:23055612

  20. Towards a Fuzzy Expert System on Toxicological Data Quality Assessment.

    PubMed

    Yang, Longzhi; Neagu, Daniel; Cronin, Mark T D; Hewitt, Mark; Enoch, Steven J; Madden, Judith C; Przybylak, Katarzyna

    2013-01-01

    Quality assessment (QA) requires high levels of domain-specific experience and knowledge. QA tasks for toxicological data are usually performed manually by human experts, although a number of quality evaluation schemes have been proposed in the literature. For instance, the most widely utilised Klimisch scheme [1] defines four data quality categories in order to tag data instances with respect to their quality; ToxRTool [2] is an extension of the Klimisch approach aiming to increase the transparency and harmonisation of the approach. Note that the processes of QA in many other areas have been automated by employing expert systems. Briefly, an expert system is a computer program that uses a knowledge base built upon human expertise, and an inference engine that mimics the reasoning processes of human experts to infer new statements from incoming data. In particular, expert systems have been extended to deal with the uncertainty of information by representing uncertain information (such as linguistic terms) as fuzzy sets under the framework of fuzzy set theory and performing inferences upon fuzzy sets according to fuzzy arithmetic. This paper presents an experimental fuzzy expert system for toxicological data QA, developed on the basis of the Klimisch approach and ToxRTool, in an effort to illustrate the power of expert systems to toxicologists and to examine whether fuzzy expert systems are a viable solution for QA of toxicological data. This direction still faces great difficulties due to the well-known challenge of toxicological data QA that "five toxicologists may have six opinions". In the meantime, this challenge may offer an opportunity for expert systems, because the construction and refinement of the knowledge base could be a converging process of different opinions, which is of significant importance for regulatory policy making under the REACH regulation, though a consensus may never be reached. Also, in order to facilitate the implementation

  1. New strategy for image and video quality assessment

    NASA Astrophysics Data System (ADS)

    Ma, Qi; Zhang, Liming; Wang, Bin

    2010-01-01

    Image and video quality assessment (QA) is a critical issue in image and video processing applications. General full-reference (FR) QA criteria such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE) do not accord well with human subjective assessment. Some QA indices that consider human visual sensitivity, such as mean structural similarity (MSSIM) with structural sensitivity and visual information fidelity (VIF) with statistical sensitivity, were proposed in view of the differences between reference and distorted frames at a pixel or local level. However, they ignore the role of human visual attention (HVA). Recently, some new strategies incorporating HVA have been proposed, but their methods for extracting visual attention are too complex for real-time use. We take advantage of the phase spectrum of quaternion Fourier transform (PQFT), a very fast algorithm we previously proposed, to extract saliency maps of color images or videos. We then propose saliency-based methods for both image QA (IQA) and video QA (VQA) by adding saliency-derived weights to the original IQA or VQA criteria. Experimental results show that our saliency-based strategy accords more closely with human subjective assessment than the original IQA or VQA methods, and does not take more time because of the fast PQFT algorithm.
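    The phase-spectrum idea can be illustrated in a single channel (the quaternion formulation generalizes it to handle color and motion jointly); the smoothing width and the weighting scheme below are illustrative assumptions, not the authors' exact method:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def phase_saliency(image, sigma=3.0):
        """Saliency from the phase spectrum of the Fourier transform
        (single-channel simplification of the PQFT approach)."""
        f = np.fft.fft2(np.asarray(image, dtype=float))
        phase_only = np.exp(1j * np.angle(f))        # keep phase, discard magnitude
        sal = np.abs(np.fft.ifft2(phase_only)) ** 2  # reconstruct and square
        sal = gaussian_filter(sal, sigma)            # smooth the saliency map
        return sal / sal.max()

    def saliency_weighted_score(quality_map, saliency):
        """Pool a per-pixel quality map (e.g. a local SSIM map) with saliency weights."""
        return float((quality_map * saliency).sum() / saliency.sum())

    img = np.zeros((32, 32))
    img[16, 16] = 1.0                                # a single salient feature
    sal = phase_saliency(img)
    ```

    The phase-only reconstruction suppresses large uniform regions and highlights sparse features, so errors in attended regions dominate the pooled score.
    
    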

  2. SU-E-I-94: Automated Image Quality Assessment of Radiographic Systems Using An Anthropomorphic Phantom

    SciTech Connect

    Wells, J; Wilson, J; Zhang, Y; Samei, E; Ravin, Carl E.

    2014-06-01

    Purpose: In a large, academic medical center, consistent radiographic imaging performance is difficult to routinely monitor and maintain, especially for a fleet consisting of multiple vendors, models, software versions, and numerous imaging protocols. Thus, an automated image quality control methodology has been implemented using routine image quality assessment with a physical, stylized anthropomorphic chest phantom. Methods: The “Duke” Phantom (Digital Phantom 07-646, Supertech, Elkhart, IN) was imaged twice on each of 13 radiographic units from a variety of vendors at 13 primary care clinics. The first acquisition used the clinical PA chest protocol to acquire the post-processed “FOR PRESENTATION” image. The second image was acquired without an antiscatter grid followed by collection of the “FOR PROCESSING” image. Manual CNR measurements were made from the largest and thickest contrast-detail inserts in the lung, heart, and abdominal regions of the phantom in each image. An automated image registration algorithm was used to estimate the CNR of the same insert using similar ROIs. Automated measurements were then compared to the manual measurements. Results: Automatic and manual CNR measurements obtained from “FOR PRESENTATION” images had average percent differences of 0.42%±5.18%, −3.44%±4.85%, and 1.04%±3.15% in the lung, heart, and abdominal regions, respectively; measurements obtained from “FOR PROCESSING” images had average percent differences of −0.63%±6.66%, −0.97%±3.92%, and −0.53%±4.18%, respectively. The maximum absolute difference in CNR was 15.78%, 10.89%, and 8.73% in the respective regions. In addition to CNR assessment of the largest and thickest contrast-detail inserts, the automated method also provided CNR estimates for all 75 contrast-detail inserts in each phantom image. Conclusion: Automated analysis of a radiographic phantom has been shown to be a fast, robust, and objective means for assessing radiographic
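    The ROI-based CNR measurement described above can be sketched with NumPy; the exact CNR definition (absolute mean difference over background standard deviation) and the synthetic phantom are assumptions for illustration, not the study's implementation:

    ```python
    import numpy as np

    def cnr(image, signal_roi, background_roi):
        """Contrast-to-noise ratio between a signal ROI and a background ROI.

        ROIs are index expressions (e.g. np.s_[40:60, 40:60]); the formula
        |mean_s - mean_b| / std_b is one common CNR definition.
        """
        s = image[signal_roi]
        b = image[background_roi]
        return float(abs(s.mean() - b.mean()) / b.std(ddof=1))

    # Synthetic "phantom": noisy uniform background with a brighter insert.
    rng = np.random.default_rng(1)
    img = rng.normal(100.0, 5.0, (128, 128))
    img[40:60, 40:60] += 25.0  # contrast-detail insert
    value = cnr(img, np.s_[40:60, 40:60], np.s_[80:120, 80:120])
    ```

    Automating this requires registering the phantom image so the same ROI indices land on the same inserts on every unit, which is what the study's registration step provides.
    
    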

  3. A CAD system and quality assurance protocol for bone age assessment utilizing digital hand atlas

    NASA Astrophysics Data System (ADS)

    Gertych, Arkadiusz; Zhang, Aifeng; Ferrara, Benjamin; Liu, Brent J.

    2007-03-01

    Bone age assessment (BAA) in pediatric radiology is a task based on detailed analysis of a patient's left-hand X-ray. The current standard utilized in clinical practice relies on a subjective comparison of the hand with patterns in a book atlas. The computerized approach to BAA (CBAA) utilizes automatic analysis of the regions of interest in the hand image. This procedure is followed by extraction of quantitative features sensitive to skeletal development that are further converted to a bone age value utilizing knowledge from the digital hand atlas (DHA). This also allows the system to provide BAA results that resemble the current clinical approach. All developed methodologies have been combined into one CAD module with a graphical user interface (GUI). CBAA can also improve statistical and analytical accuracy based on a clinical workflow analysis. For this purpose a quality assurance protocol (QAP) has been developed. Implementation of the QAP helped make the CAD more robust and identify images that cannot meet the conditions required by DHA standards. Moreover, the entire CAD-DHA system may gain further benefits if the clinical acquisition protocol is modified. The goal of this study is to present the performance improvement of the overall CAD-DHA system with the QAP and a comparison of the CAD results with the chronological age of 1390 normal subjects from the DHA. The CAD workstation can process images from a local image database or from a PACS server.

  4. Assessing ECG signal quality indices to discriminate ECGs with artefacts from pathologically different arrhythmic ECGs.

    PubMed

    Daluwatte, C; Johannesen, L; Galeotti, L; Vicente, J; Strauss, D G; Scully, C G

    2016-08-01

    False and non-actionable alarms in critical care can be reduced by developing algorithms which assess the trueness of an arrhythmia alarm from a bedside monitor. Computational approaches that automatically identify artefacts in ECG signals are an important branch of physiological signal processing which tries to address this issue. Signal quality indices (SQIs) derived considering differences between artefacts which occur in ECG signals and normal QRS morphology have the potential to discriminate pathologically different arrhythmic ECG segments as artefacts. Using ECG signals from the PhysioNet/Computing in Cardiology Challenge 2015 training set, we studied previously reported ECG SQIs in the scientific literature to differentiate ECG segments with artefacts from arrhythmic ECG segments. We found that the ability of SQIs to discriminate between ECG artefacts and arrhythmic ECG varies based on arrhythmia type since the pathology of each arrhythmic ECG waveform is different. Therefore, to reduce the risk of SQIs classifying arrhythmic events as noise it is important to validate and test SQIs with databases that include arrhythmias. Arrhythmia specific SQIs may also minimize the risk of misclassifying arrhythmic events as noise. PMID:27454007
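    As a concrete example of the kind of SQI the study evaluates, a kurtosis-based index (one SQI reported in the ECG signal-quality literature) can be sketched; the thresholds, windowing, and test signals below are illustrative assumptions:

    ```python
    import numpy as np

    def kurtosis_sqi(ecg):
        """Kurtosis-based signal quality index.

        Gaussian noise has kurtosis ~3; clean ECG, dominated by sparse QRS
        spikes, is strongly super-Gaussian (much larger kurtosis).
        """
        x = np.asarray(ecg, dtype=float)
        z = (x - x.mean()) / x.std()
        return float(np.mean(z ** 4))

    rng = np.random.default_rng(0)
    noise = rng.normal(size=10_000)      # artefact-like Gaussian segment
    spiky = np.zeros(1_000)
    spiky[::100] = 10.0                  # crude QRS-like spike train
    ```

    Note that a ventricular arrhythmia segment is typically still spiky, so this SQI alone would not flag it as noise — which is exactly why the study argues SQIs must be validated on databases that include arrhythmias.
    
    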

  5. Benchmarking Dosimetric Quality Assessment of Prostate Intensity-Modulated Radiotherapy

    SciTech Connect

    Senthi, Sashendra; Gill, Suki S.; Haworth, Annette; Kron, Tomas; Cramb, Jim; Rolfo, Aldo; Thomas, Jessica; Duchesne, Gillian M.; Hamilton, Christopher H.; Joon, Daryl Lim; Bowden, Patrick; Foroudi, Farshad

    2012-02-01

    Purpose: To benchmark the dosimetric quality assessment of prostate intensity-modulated radiotherapy and determine whether the quality is influenced by disease or treatment factors. Patients and Methods: We retrospectively analyzed the data from 155 consecutive men treated radically for prostate cancer using intensity-modulated radiotherapy to 78 Gy between January 2007 and March 2009 across six radiotherapy treatment centers. The plan quality was determined by the measures of coverage, homogeneity, and conformity. Tumor coverage was measured using the planning target volume (PTV) receiving 95% and 100% of the prescribed dose (V95% and V100%, respectively) and the clinical target volume (CTV) receiving 95% and 100% of the prescribed dose. Homogeneity was measured using the sigma index of the PTV and CTV. Conformity was measured using the lesion coverage factor, healthy tissue conformity index, and the conformity number. Multivariate regression models were created to determine the relationship between these and T stage, risk status, androgen deprivation therapy use, treatment center, planning system, and treatment date. Results: The largest discriminatory measurements of coverage, homogeneity, and conformity were the PTV V95%, PTV sigma index, and conformity number. The mean PTV V95% was 92.5% (95% confidence interval, 91.3-93.7%). The mean PTV sigma index was 2.10 Gy (95% confidence interval, 1.90-2.20). The mean conformity number was 0.78 (95% confidence interval, 0.76-0.79). The treatment center independently influenced the coverage, homogeneity, and conformity (all p < .0001). The planning system independently influenced homogeneity (p = .038) and conformity (p = .021). The treatment date independently influenced the PTV V95% only, with it being better at the start (p = .013). Risk status, T stage, and the use of androgen deprivation therapy did not influence any aspect of plan quality. Conclusion: Our study has benchmarked measures
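    The abstract does not define its conformity metric; assuming the commonly used van't Riet conformity number, it reduces to two simple volume ratios, sketched below with hypothetical volumes:

    ```python
    def conformity_number(target_vol, isodose_vol, overlap_vol):
        """van't Riet conformity number (assumed definition):
        CN = (TV∩PIV / TV) * (TV∩PIV / PIV), where TV is the target volume
        and PIV the prescription isodose volume.  CN = 1 means the isodose
        exactly covers the target; lower values indicate under-coverage
        and/or dose spilling into healthy tissue.
        """
        coverage = overlap_vol / target_vol      # fraction of target covered
        selectivity = overlap_vol / isodose_vol  # fraction of isodose on target
        return coverage * selectivity

    # Hypothetical volumes in cm^3: 100 PTV, 120 prescription isodose, 95 overlap.
    cn = conformity_number(100.0, 120.0, 95.0)
    ```

    With these hypothetical volumes the result is about 0.75, in the vicinity of the study's reported mean of 0.78.
    
    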

  6. Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research

    PubMed Central

    Weng, Chunhua

    2013-01-01

    Objective To review the methods and dimensions of data quality assessment in the context of electronic health record (EHR) data reuse for research. Materials and methods A review of the clinical research literature discussing data quality assessment methodology for EHR data was performed. Using an iterative process, the aspects of data quality being measured were abstracted and categorized, as were the methods of assessment used. Results Five dimensions of data quality were identified (completeness, correctness, concordance, plausibility, and currency), along with seven broad categories of data quality assessment methods: comparison with gold standards, data element agreement, data source agreement, distribution comparison, validity checks, log review, and element presence. Discussion Examination of the methods by which clinical researchers have investigated the quality and suitability of EHR data for research shows that there are fundamental features of data quality, which may be difficult to measure, as well as proxy dimensions. Researchers interested in the reuse of EHR data for clinical research are recommended to consider the adoption of a consistent taxonomy of EHR data quality, to remain aware of the task-dependence of data quality, to integrate work on data quality assessment from other fields, and to adopt systematic, empirically driven, statistically based methods of data quality assessment. Conclusion There is currently little consistency or potential generalizability in the methods used to assess EHR data quality. If the reuse of EHR data for clinical research is to become accepted, researchers should adopt validated, systematic methods of EHR data quality assessment. PMID:22733976

  7. Using the Baldrige Criteria To Assess Quality in Libraries.

    ERIC Educational Resources Information Center

    Ashar, Hanna; Geiger, Sharon

    1998-01-01

    The Malcolm Baldrige National Quality Award (MBNQA) for quality programs has seven categories against which organizations are evaluated (leadership, information and analysis, strategic/operational quality planning, human resources, process management, quality/performance results, customer/student satisfaction). A preliminary quality-assessment…

  8. Defining and Assessing Quality in Early Childhood Centres.

    ERIC Educational Resources Information Center

    Farquhar, Sarah-Eve J.

    This paper examines the problem of defining quality in early childhood centers, the nature of evaluation methods, and the contributions of research to the promotion of high quality. The concept of quality is multidimensional and dynamic, and there is no consensus about a definition of quality in the literature. Quality can be viewed from many…

  9. Assessing Website Pharmacy Drug Quality: Safer Than You Think?

    PubMed Central

    Bate, Roger; Hess, Kimberly

    2010-01-01

    Background Internet-sourced drugs are often considered suspect. The World Health Organization reports that drugs from websites that conceal their physical address are counterfeit in over 50 percent of cases; the U.S. Food and Drug Administration (FDA) works with the National Association of Boards of Pharmacy (NABP) to regularly update a list of websites likely to sell drugs that are illegal or of questionable quality. Methods and Findings This study examines drug purchasing over the Internet, by comparing the sales of five popular drugs from a selection of websites stratified by NABP or other ratings. The drugs were assessed for price, conditions of purchase, and basic quality. Prices and conditions of purchase varied widely. Some websites advertised single pills while others only permitted the purchase of large quantities. Not all websites delivered the exact drugs ordered, some delivered no drugs at all; many websites shipped from multiple international locations, and from locations that were different from those advertised on the websites. All drug samples were tested against approved U.S. brand formulations using Raman spectrometry. Many (17) websites substituted drugs, often in different formulations from the brands requested. These drugs, some of which were probably generics or perhaps non-bioequivalent copy versions, could not be assessed accurately. Of those drugs that could be assessed, none failed from “approved”, “legally compliant” or “not recommended” websites (0 out of 86), whereas 8.6% (3 out of 35) failed from “highly not recommended” and unidentifiable websites. Conclusions Of those drugs that could be assessed, all except Viagra® passed spectrometry testing. Of those that failed, few could be identified either by a country of manufacture listed on the packaging, or by the physical location of the website pharmacy. If confirmed by future studies on other drug samples, then U.S. consumers should be able to reduce their risk by

  10. Image Quality and Radiation Dose of CT Coronary Angiography with Automatic Tube Current Modulation and Strong Adaptive Iterative Dose Reduction Three-Dimensional (AIDR3D)

    PubMed Central

    Shen, Hesong; Dai, Guochao; Luo, Mingyue; Duan, Chaijie; Cai, Wenli; Liang, Dan; Wang, Xinhua; Zhu, Dongyun; Li, Wenru; Qiu, Jianping

    2015-01-01

    Purpose To investigate image quality and radiation dose of CT coronary angiography (CTCA) scanned using automatic tube current modulation (ATCM) and reconstructed by strong adaptive iterative dose reduction three-dimensional (AIDR3D). Methods Eighty-four consecutive CTCA patients were collected for the study. All patients were scanned using ATCM, and the images were reconstructed with strong AIDR3D, standard AIDR3D, and filtered back-projection (FBP), respectively. Two radiologists who were blinded to the patients' clinical data and reconstruction methods evaluated image quality. Quantitative image quality evaluation included image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). For qualitative evaluation, the coronary artery was classified into 15 segments based on the modified guidelines of the American Heart Association, and image quality was rated on a 4-point scale. Radiation dose was calculated based on dose-length product. Results Compared with standard AIDR3D, strong AIDR3D had lower image noise and higher SNR and CNR; the differences were all statistically significant (P<0.05). Compared with FBP, strong AIDR3D decreased image noise by 46.1%, increased SNR by 84.7%, and improved CNR by 82.2%; the differences were all statistically significant (P<0.05 or 0.001). Segments with diagnostic image quality for strong AIDR3D were 336 (100.0%), 486 (96.4%), and 394 (93.8%) in the proximal, middle, and distal parts, respectively; those for standard AIDR3D were 332 (98.8%), 472 (93.7%), and 378 (90.0%), respectively; those for FBP were 217 (64.6%), 173 (34.3%), and 114 (27.1%), respectively. Total segments with diagnostic image quality for strong AIDR3D (1216, 96.5%) were higher than for standard AIDR3D (1182, 93.8%) and FBP (504, 40.0%); the differences between strong AIDR3D and standard AIDR3D, and between strong AIDR3D and FBP, were all statistically significant (P<0.05 or 0.001). The mean effective radiation dose was (2.55±1.21) mSv. Conclusion
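    The quantitative metrics named in this abstract (image noise, SNR, CNR) are typically computed from region-of-interest statistics. A minimal sketch, assuming conventional definitions (noise as the standard deviation of a background ROI, SNR and CNR normalized by that noise) rather than the study's exact ROI protocol; the HU values are synthetic:

```python
import numpy as np

# Conventional ROI-based metrics; noise = SD of a background ROI,
# SNR and CNR are normalized by that noise. All values are synthetic.
def image_noise(bg_roi):
    return float(np.std(bg_roi))

def snr(vessel_roi, bg_roi):
    return float(np.mean(vessel_roi) / np.std(bg_roi))

def cnr(vessel_roi, tissue_roi, bg_roi):
    return float((np.mean(vessel_roi) - np.mean(tissue_roi)) / np.std(bg_roi))

rng = np.random.default_rng(0)
vessel = rng.normal(400, 20, 1000)  # contrast-enhanced lumen (HU)
tissue = rng.normal(80, 20, 1000)   # adjacent soft tissue (HU)
bg = rng.normal(0, 25, 1000)        # background ROI (HU)

print(image_noise(bg), snr(vessel, bg), cnr(vessel, tissue, bg))
```

    Under these definitions, a reconstruction that lowers the background standard deviation (as iterative methods like AIDR3D do relative to FBP) raises both SNR and CNR even when the mean HU values are unchanged.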

  11. Image analysis for dental bone quality assessment using CBCT imaging

    NASA Astrophysics Data System (ADS)

    Suprijanto; Epsilawati, L.; Hajarini, M. S.; Juliastuti, E.; Susanti, H.

    2016-03-01

    Cone beam computerized tomography (CBCT) is one of the X-ray imaging modalities applied in dentistry. It can visualize the oral region in 3D and at high resolution. A CBCT jaw image contains potential information for the assessment of bone quality that is often used for pre-operative implant planning. We propose a comparison method based on the normalized histogram (NH) of the region of the inter-dental septum and premolar teeth. Furthermore, the NH characteristics from normal and abnormal bone conditions are compared and analyzed. Four test parameters are proposed, i.e. the difference between teeth and bone average intensity (s), the ratio between bone and teeth average intensity (n) of the NH, the difference between teeth and bone peak value (Δp) of the NH, and the ratio between teeth and bone NH range (r). The results showed that n, s, and Δp have potential as classification parameters of dental calcium density.
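    The four parameters s, n, Δp, and r can be computed directly from the normalized histograms of the two regions. The code below is a hedged reconstruction of those definitions on synthetic intensity data; the actual ROI extraction from CBCT slices, and the study's bin settings, are assumptions:

```python
import numpy as np

# Reconstruction of the four NH test parameters described in the abstract.
# Bin count, intensity range, and the synthetic ROI data are assumptions.
def nh(values, bins=64, value_range=(0, 255)):
    """Normalized histogram: bin probabilities plus bin edges."""
    hist, edges = np.histogram(values, bins=bins, range=value_range)
    return hist / hist.sum(), edges

def nh_parameters(teeth, bone, bins=64, value_range=(0, 255)):
    h_t, edges = nh(teeth, bins, value_range)
    h_b, _ = nh(bone, bins, value_range)
    centers = (edges[:-1] + edges[1:]) / 2
    mean_t = float(np.sum(centers * h_t))
    mean_b = float(np.sum(centers * h_b))
    s = mean_t - mean_b                 # difference of average intensities
    n = mean_b / mean_t                 # ratio of bone to teeth average intensity
    dp = float(h_t.max() - h_b.max())   # difference of NH peak values
    span = lambda h: centers[h > 0].max() - centers[h > 0].min()
    r = span(h_t) / span(h_b)           # ratio of teeth to bone NH range
    return s, n, dp, r

teeth = np.random.default_rng(1).normal(200, 10, 500).clip(0, 255)
bone = np.random.default_rng(2).normal(120, 25, 500).clip(0, 255)
s, n, dp, r = nh_parameters(teeth, bone)
```

    With teeth brighter than bone, s is positive and n falls below 1; lower bone calcium density would shift the bone histogram and change all four parameters.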

  12. Groundwater Quality Assessment for Waste Management Area U: First Determination

    SciTech Connect

    Hodges, Floyd N.; Chou, Charissa J.

    2000-08-04

    As a result of the most recent recalculation, one of the indicator parameters, specific conductance, exceeded its background value in downgradient well 299-W19-41, triggering a change from detection monitoring to a groundwater quality assessment program. The major contributors to the higher specific conductance are nonhazardous constituents (i.e., sodium, calcium, magnesium, chloride, sulfate, and bicarbonate). Nitrate, chromium, and technetium-99 are present and are increasing; however, they are significantly below their drinking water standards. Interpretation of groundwater monitoring data indicates that both the nonhazardous constituents causing elevated specific conductance in groundwater and the tank waste constituents present in groundwater at the waste management area are a result of surface water infiltration in the southern portion of the facility. There is evidence for both upgradient and waste management area sources for the observed nitrate concentrations. There is no indication of an upgradient source for the observed chromium and technetium-99.

  13. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464
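    For the detection tasks discussed in this abstract, the optimal linear discriminant is the Hotelling observer, whose template is w = K⁻¹(s₁ − s₀), with K the data covariance and s₀, s₁ the mean images under the two hypotheses. A minimal synthetic sketch (white noise and a known weak signal; this is not the paper's adaptive-optics covariance decomposition):

```python
import numpy as np

# Hotelling observer on synthetic data: template w = K^-1 (s1 - s0),
# test statistic t(g) = w.g, detectability d as the figure of merit.
rng = np.random.default_rng(0)
n_pix = 16
s0 = np.zeros(n_pix)                 # mean image, signal-absent
s1 = np.zeros(n_pix)
s1[6:10] = 1.0                       # weak signal in a few pixels
K = 0.5 * np.eye(n_pix)              # white-noise covariance (assumed)

w = np.linalg.solve(K, s1 - s0)      # Hotelling template

# Noisy realizations under each hypothesis, and their test statistics.
g0 = rng.multivariate_normal(s0, K, 200)
g1 = rng.multivariate_normal(s1, K, 200)
t0, t1 = g0 @ w, g1 @ w

# Detectability index: separation of the two test-statistic distributions.
d = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t0.var() + t1.var()))
```

    Here the theoretical detectability is sqrt((s1 − s0)ᵀK⁻¹(s1 − s0)) = sqrt(8) ≈ 2.83; the empirical d from the 200 samples scatters around that value. In the paper's setting K additionally carries the three terms from measurement noise, the random point spread function, and the random scene.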

  14. Assessment of selected ground-water-quality data in Montana

    USGS Publications Warehouse

    Davis, R.E.; Rogers, G.D.

    1984-01-01

    Ground-water-quality data for Montana in the U.S. Geological Survey's computer data file WATSTORE were evaluated for nine geohydrologic units in the part of the State east of the Rocky Mountains and for two geohydrologic units in the western mountainous part of the State. The availability of data for inorganic, trace, and organic constituents for each grouping of units was assessed. Median dissolved-solids concentrations for the groupings of units range from about 100 to 5,000 milligrams per liter. However, the number and distribution of data sites for some groupings of units were inadequate to be representative of the aquifer as a whole. Concentrations of most trace constituents do not exceed Federal primary drinking-water standards, although exceptions occur. Few data were available for organic constituents. (USGS)

  15. Marine environment quality assessment of the Skagerrak - Kattegat

    NASA Astrophysics Data System (ADS)

    Rosenberg, Rutger; Cato, Ingemar; Förlin, Lars; Grip, Kjell; Rodhe, Johan

    1996-02-01

    This quality assessment of the Skagerrak-Kattegat is mainly based on recent results obtained within the framework of the Swedish multidisciplinary research project 'Large-scale environmental effects and ecological processes in the Skagerrak-Kattegat', complemented with relevant data from other research publications. The results show that the North Sea has a significant impact on the marine ecosystem in the Skagerrak and the northern Kattegat. Among the environmental changes recently documented for some of these areas are: increased nutrient concentrations, increased occurrence of fast-growing filamentous algae in coastal areas affecting nursery and feeding conditions for fish, declining bottom-water oxygen concentrations with negative effects on benthic fauna, and sediment toxicity to invertebrates also causing physiological responses in fish. It is concluded that, due to eutrophication and toxic substances, large-scale environmental changes and effects occur in the Skagerrak-Kattegat area.

  16. Ecological quality assessment of the lower Lima Estuary.

    PubMed

    Costa-Dias, Sérgia; Sousa, Ronaldo; Antunes, Carlos

    2010-01-01

    Monitoring biotic factors is gaining in importance within Europe, due in large part to the ecological approach of the European Water Framework Directive (WFD) and the importance attributed to biological elements in the assessment of quality status. Despite its ecological importance, the Lima Estuary is subjected to a range of perturbations, including urban, agricultural and industrial waste discharge, dredging activities, and the introduction of non-indigenous invasive species. This work uses macrozoobenthic data to study the ecological status of the lower Lima Estuary, where most disturbance factors are concentrated. We were able to verify consistent spatial differences and to identify different degrees of disturbance in the estuarine area. These results allow us to suggest cost-effective approaches to monitoring this estuarine area, with the aim of contributing to effective management actions. PMID:20347451

  17. Latest processing status and quality assessment of the GOMOS, MIPAS and SCIAMACHY ESA dataset

    NASA Astrophysics Data System (ADS)

    Niro, F.; Brizzi, G.; Saavedra de Miguel, L.; Scarpino, G.; Dehn, A.; Fehr, T.; von Kuhlmann, R.

    2011-12-01

    GOMOS, MIPAS and SCIAMACHY instruments have been successfully observing Earth's changing atmosphere since the launch of the ENVISAT-ESA platform in March 2002. The measurements recorded by these instruments are relevant to the Atmospheric-Chemistry community both in terms of time extent and the variety of observing geometries and techniques. In order to fully exploit these measurements, it is crucial to maintain good reliability in data processing and distribution and to continuously improve the scientific output. The goal is to meet the evolving needs of both near-real-time and research applications. Within this frame, the ESA operational processor remains the reference code, although many scientific algorithms are nowadays available to the users. In fact, the ESA algorithm has a well-established calibration and validation scheme, a certified quality assessment process, and the possibility to reach a wide users' community. Moreover, the ESA algorithm upgrade procedures and the re-processing performances have much improved during the last two years, thanks to recent updates of the Ground Segment infrastructure and overall organization. The aim of this paper is to promote the usage and stress the quality of the ESA operational dataset for the GOMOS, MIPAS and SCIAMACHY missions. The recent upgrades to the ESA processor (GOMOS V6, MIPAS V5 and SCIAMACHY V5) will be presented, with detailed information on improvements in the scientific output and preliminary validation results. The planned algorithm evolution and on-going re-processing campaigns will be mentioned, which involve the adoption of advanced set-ups such as the MIPAS V6 re-processing on a cloud-computing system. Finally, the quality control process that guarantees a standard of quality to the users will be illustrated. In fact, the operational ESA algorithm is carefully tested before switching into operations, and the near-real-time and off-line production is thoughtfully verified via the

  18. Availability of Structured and Unstructured Clinical Data for Comparative Effectiveness Research and Quality Improvement: A Multisite Assessment

    PubMed Central

    Capurro, Daniel; Yetisgen, Meliha; van Eaton, Erik; Black, Robert; Tarczy-Hornoch, Peter

    2014-01-01

    Introduction: A key attribute of a learning health care system is the ability to collect and analyze routinely collected clinical data in order to quickly generate new clinical evidence, and to monitor the quality of the care provided. To achieve this vision, clinical data must be easy to extract and stored in computer readable formats. We conducted this study across multiple organizations to assess the availability of such data specifically for comparative effectiveness research (CER) and quality improvement (QI) on surgical procedures. Setting: This study was conducted in the context of the data needed for the already established Surgical Care and Outcomes Assessment Program (SCOAP), a clinician-led, performance benchmarking, and QI registry for surgical and interventional procedures in Washington State. Methods: We selected six hospitals, managed by two Health Information Technology (HIT) groups, and assessed the ease of automated extraction of the data required to complete the SCOAP data collection forms. Each data element was classified as easy, moderate, or complex to extract. Results: Overall, a significant proportion of the data required to automatically complete the SCOAP forms was not stored in structured computer-readable formats, with more than 75 percent of all data elements being classified as moderately complex or complex to extract. The distribution differed significantly between the health care systems studied. Conclusions: Although highly desirable, a learning health care system does not automatically emerge from the implementation of electronic health records (EHRs). Innovative methods to improve the structured capture of clinical data are needed to facilitate the use of routinely collected clinical data for patient phenotyping. PMID:25848594

  19. Combined use of rapid bioassessment protocols and sediment quality triad to assess stream quality

    USGS Publications Warehouse

    Winger, P.V.; Lasier, P.J.; Bogenrieder, K.J.

    2005-01-01

    Physical, chemical and biological conditions at five stations on a small southeastern stream were evaluated using the Rapid Bioassessment Protocols (RBP) and the Sediment Quality Triad (SQT) to assess potential biological impacts of a municipal wastewater treatment facility (WWTF) on downstream resources. Physical habitat, benthic macroinvertebrates and fish assemblages were impaired at Stations 1 and 2 (upstream of the WWTF), suggesting that the degraded physical habitat was adversely impacting the fish and benthic populations. The SQT also demonstrated that Stations 1 and 2 were degraded, but the factors responsible for the impaired conditions were attributed to the elevated concentrations of polycyclic aromatic hydrocarbons (PAHs) and metals (Mn, Pb) in the sediments. The source of contaminants to the upper reaches of the stream appears to be storm-water runoff from the city center. Increased discharge and stabilized base flow contributed by the WWTF appeared to benefit the physically-altered stream system. Although the two assessment procedures demonstrated biological impairment at the upstream stations, the environmental factors identified as being responsible for the impairment were different: the RBP provided insight into contributions associated with the physical habitat, and the SQT contributed information on contaminants and sediment quality. Both procedures are important in the identification of physical and chemical factors responsible for environmental impairment, and together they provide information critical to the development of appropriate management options for mitigation.

  20. Assessing The Policy Relevance of Regional Air Quality Models

    NASA Astrophysics Data System (ADS)

    Holloway, T.

    This work presents a framework for discussing the policy relevance of models, and regional air quality models in particular. We define four criteria: 1) The scientific status of the model; 2) Its ability to address primary environmental concerns; 3) The position of modeled environmental issues on the political agenda; and 4) The role of scientific input into the policy process. This framework is applied to current work simulating the transport of nitric acid in Asia with the ATMOS-N model, to past studies on air pollution transport in Europe with the EMEP model, and to future applications of the United States Environmental Protection Agency (US EPA) Models-3. The Lagrangian EMEP model provided critical input to the development of the 1994 Oslo and 1999 Gothenburg Protocols to the Convention on Long-Range Transboundary Air Pollution, as well as to the development of EU directives, via its role as a component of the RAINS integrated assessment model. Our work simulating reactive nitrogen in Asia follows the European example in part, with the choice of ATMOS-N, a regional Lagrangian model, to calculate source-receptor relationships for the RAINS-Asia integrated assessment model. However, given differences between ATMOS-N and the EMEP model, as well as differences between the scientific and political climates facing Europe ten years ago and Asia today, the role of these two models in the policy process is very different. We characterize the different aspects of policy relevance between these models using our framework, and consider how the current generation US EPA air quality model compares, in light of its Eulerian structure, different objectives, and the policy context of the US.