Science.gov

Sample records for automatic quality assessment

  1. Automatic quality assessment of planetary images

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, P.; Muller, J.-P.

    2015-10-01

    A significant fraction of planetary images are corrupted beyond the point that much scientific meaning can be extracted. For example, transmission errors result in missing data that are unrecoverable. The available planetary image datasets include many such "bad data", which both occupy valuable scientific storage resources and create false impressions about planetary image availability for specific planetary objects or target areas. In this work, we demonstrate a pipeline that we have developed to automatically assess the quality of planetary images. Additionally, the method discriminates between different types of image degradation, such as low quality originating from camera flaws and low quality triggered by atmospheric conditions. Examples of quality assessment results for Viking Orbiter imagery are also presented.

  2. Automatic quality assessment protocol for MRI equipment.

    PubMed

    Bourel, P; Gibon, D; Coste, E; Daanen, V; Rousseau, J

    1999-12-01

    The authors have developed a protocol and software for the quality assessment of MRI equipment with a commercial test object. Automatic image analysis consists of detecting surfaces and objects, defining regions of interest, acquiring reference point coordinates and establishing gray level profiles. Signal-to-noise ratio, image uniformity, geometrical distortion, slice thickness, slice profile, and spatial resolution are checked. The results are periodically analyzed to evaluate possible drifts with time. The measurements are performed weekly on three MRI scanners made by the Siemens Company (VISION 1.5T, EXPERT 1.0T, and OPEN 0.2T). The results obtained for the three scanners over approximately 3.5 years are presented, analyzed, and compared.

  3. Automatic no-reference image quality assessment.

    PubMed

    Li, Hongjun; Hu, Wei; Xu, Zi-Neng

    2016-01-01

    No-reference image quality assessment aims to predict the visual quality of distorted images without examining the original image as a reference. Most no-reference image quality metrics proposed so far are designed for one or a set of predefined specific distortion types and are unlikely to generalize to images degraded with other types of distortion. There is a strong need for no-reference image quality assessment methods that are applicable to various distortions. In this paper, the authors propose a no-reference image quality assessment method based on a natural image statistic model in the wavelet transform domain. A generalized Gaussian density model is employed to summarize the marginal distribution of wavelet coefficients of the test images, so that only a few parameters are needed for the evaluation of image quality. The proposed algorithm is tested on three large-scale benchmark databases. Experimental results demonstrate that the proposed algorithm is easy to implement and computationally efficient. Furthermore, our method can be applied to many well-known types of image distortion and achieves good prediction performance. PMID:27468398
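    The generalized Gaussian density (GGD) fit mentioned above is commonly done by moment matching: the ratio E[x²]/E[|x|]² depends only on the GGD shape parameter β and is strictly decreasing in it, so β can be recovered by bisection. A minimal numpy sketch of this estimator (the paper's full metric, including which wavelet subbands and quality mapping it uses, is not reproduced here):

```python
import math
import numpy as np

def ggd_ratio(beta):
    # Theoretical ratio E[x^2] / E[|x|]^2 for a zero-mean GGD with shape beta.
    return (math.gamma(1.0 / beta) * math.gamma(3.0 / beta)
            / math.gamma(2.0 / beta) ** 2)

def estimate_ggd_shape(coeffs, lo=0.2, hi=10.0, iters=60):
    """Moment-matching estimate of the GGD shape parameter.

    Solves ggd_ratio(beta) = m2 / m1^2 by bisection; the ratio is
    strictly decreasing in beta, so bisection converges.
    """
    x = np.asarray(coeffs, dtype=float)
    m1 = np.abs(x).mean()
    m2 = (x ** 2).mean()
    target = m2 / (m1 ** 2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) > target:   # ratio too high -> beta too small
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
gaussian = rng.normal(size=200_000)     # a GGD with beta = 2
laplacian = rng.laplace(size=200_000)   # a GGD with beta = 1
print(round(estimate_ggd_shape(gaussian), 2))   # close to 2
print(round(estimate_ggd_shape(laplacian), 2))  # close to 1
```

    In a wavelet-domain metric, `coeffs` would be the coefficients of one subband of the test image; distorted images shift the fitted parameters away from natural-image statistics.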

  4. Algorithm for automatic forced spirometry quality assessment: technological developments.

    PubMed

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time quality assessment of forced spirometry (FS) may significantly enhance the potential for extensive deployment of an FS program in the community. Recent studies have demonstrated that the automatic quality assessment provided by commercially available equipment, based on the quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society), can be markedly improved. To this end, an algorithm for automatically assessing FS quality was previously reported. The current research describes the mathematical development of the algorithm. An innovative analysis of the shape of the spirometric curve was performed, adding 23 new metrics to the 4 traditionally recommended by the ATS/ERS. The algorithm was created through a two-step iterative process: (1) an initial version using the standard FS curves recommended by the ATS; and (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterizing the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.
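    For context, two of the classic ATS/ERS acceptability criteria the abstract alludes to can be checked with simple threshold rules. The sketch below is illustrative only: the thresholds are quoted from the widely cited 2005 ATS/ERS standardization (treat the exact numbers as assumptions), and the paper's 23 additional shape metrics are not reproduced:

```python
def acceptable_fs(bev_l, fvc_l, fet_s, eov_l):
    """Two simplified ATS/ERS-style acceptability checks:
    - good start: back-extrapolated volume (BEV) below the greater of
      5% of FVC and 0.15 L
    - good end: forced expiratory time >= 6 s, or a volume plateau
      (end-of-test volume change below 0.025 L)
    All volumes in litres, times in seconds.
    """
    good_start = bev_l < max(0.05 * fvc_l, 0.15)
    good_end = fet_s >= 6.0 or eov_l < 0.025
    return good_start and good_end

print(acceptable_fs(bev_l=0.10, fvc_l=4.0, fet_s=7.2, eov_l=0.01))  # True
print(acceptable_fs(bev_l=0.40, fvc_l=4.0, fet_s=7.2, eov_l=0.01))  # False (bad start)
```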

  5. Automatic assessment of voice quality according to the GRBAS scale.

    PubMed

    Sáenz-Lechón, Nicolás; Godino-Llorente, Juan I; Osma-Ruiz, Víctor; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando

    2006-01-01

    Nowadays, the most widespread techniques for measuring voice quality are based on perceptual evaluation by well-trained professionals. The GRBAS scale is a widely used method for perceptual evaluation of voice quality; it is standard in Japan, and there is increasing interest in it in both Europe and the United States. However, the technique needs well-trained experts, relies on the evaluator's expertise, and depends greatly on the evaluator's own psycho-physical state. Furthermore, great variability is observed between the assessments of different evaluators. Therefore, an objective method providing such a measurement of voice quality would be very valuable. In this paper, the automatic assessment of voice quality is addressed by means of short-term mel-frequency cepstral coefficients (MFCC) and learning vector quantization (LVQ) in a pattern recognition stage. Results show that this approach provides acceptable performance for this purpose, with accuracy of around 65% at best.
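    The LVQ stage mentioned above is a prototype-based classifier. A minimal numpy sketch of the basic LVQ1 update rule on toy 2-D data (the paper's actual features are MFCC vectors, and its LVQ variant and parameters are not specified in the abstract, so everything below is an assumed illustration):

```python
import numpy as np

def train_lvq1(X, y, n_protos=2, lr=0.1, epochs=30, seed=0):
    """LVQ1: the nearest prototype is pulled toward same-class samples
    and pushed away from other-class samples."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_protos, replace=False)
        protos.append(X[idx])
        labels.extend([c] * n_protos)
    P = np.vstack(protos).astype(float)
    L = np.array(labels)
    for epoch in range(epochs):
        a = lr * (1 - epoch / epochs)            # decaying learning rate
        for i in rng.permutation(len(X)):
            j = np.argmin(((P - X[i]) ** 2).sum(axis=1))
            sign = 1.0 if L[j] == y[i] else -1.0
            P[j] += sign * a * (X[i] - P[j])
    return P, L

def predict_lvq(P, L, X):
    d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
    return L[d.argmin(axis=1)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
P, L = train_lvq1(X, y)
acc = (predict_lvq(P, L, X) == y).mean()
print(acc)  # high accuracy on these well-separated clusters
```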

  6. Automatic MeSH term assignment and quality assessment.

    PubMed Central

    Kim, W.; Aronson, A. R.; Wilbur, W. J.

    2001-01-01

    For computational purposes documents or other objects are most often represented by a collection of individual attributes that may be strings or numbers. Such attributes are often called features and success in solving a given problem can depend critically on the nature of the features selected to represent documents. Feature selection has received considerable attention in the machine learning literature. In the area of document retrieval we refer to feature selection as indexing. Indexing has not traditionally been evaluated by the same methods used in machine learning feature selection. Here we show how indexing quality may be evaluated in a machine learning setting and apply this methodology to results of the Indexing Initiative at the National Library of Medicine. PMID:11825203

  7. Automatic quality assessment and peak identification of auditory brainstem responses with fitted parametric peaks.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Alvarez, Isaac; Segura, Jose Carlos; Thornton, A Roger D; Sainz, Manuel; Vargas, Jose Luis

    2014-05-01

    The recording of the auditory brainstem response (ABR) is used worldwide for hearing screening purposes. In this process, a precise estimation of the most relevant components is essential for an accurate interpretation of these signals. This evaluation is usually carried out subjectively by an audiologist. However, the use of automatic methods for this purpose is now being encouraged in order to reduce human evaluation biases and ensure uniformity among test conditions, patients, and screening personnel. This article describes a new method that performs automatic quality assessment and identification of the peaks: the fitted parametric peaks (FPP). The method is based on the use of synthesized peaks that are adjusted to the ABR response. The FPP is validated in two ways: first, by an analysis of amplitudes and latencies measured manually by an audiologist and automatically by the FPP method on ABR signals recorded at different stimulation rates; and second, by contrasting the FPP method with automatic evaluation techniques based on the correlation coefficient, FSP, and cross-correlation with a predefined template waveform, comparing the automatic quality evaluations of these methods with subjective evaluations provided by five experienced evaluators on a set of ABR signals of different quality. The results of this study suggest (a) that the FPP method can be used to provide an accurate parameterization of the peaks in terms of amplitude, latency, and width, and (b) that the FPP remains the method that best approaches the averaged subjective quality evaluation, as well as providing the best results in terms of sensitivity and specificity in ABR signal validation. The significance of these findings and the clinical value of the FPP method are highlighted in this paper. PMID:24661606
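    The core idea of adjusting a synthesized peak to the response and reading off amplitude, latency, and width can be sketched in a few lines. The abstract does not give the FPP's actual peak shape, so a Gaussian bump is assumed here as a stand-in, fitted by a simple grid search (amplitude has a closed-form least-squares solution for each candidate shape):

```python
import numpy as np

def fit_parametric_peak(t, signal, lat_grid, wid_grid):
    """Fit amp * exp(-((t - lat) / wid)^2) to a waveform by grid search
    over latency and width; amplitude is solved in closed form."""
    best = None
    for lat in lat_grid:
        for wid in wid_grid:
            shape = np.exp(-((t - lat) / wid) ** 2)
            amp = (signal @ shape) / (shape @ shape)   # least-squares amplitude
            err = ((signal - amp * shape) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, amp, lat, wid)
    return best[1:]                                    # (amplitude, latency, width)

t = np.linspace(0, 10, 500)                     # time axis in ms
truth = 0.4 * np.exp(-((t - 5.6) / 0.7) ** 2)   # a wave-V-like bump
rng = np.random.default_rng(0)
amp, lat, wid = fit_parametric_peak(
    t, truth + 0.02 * rng.normal(size=t.size),
    lat_grid=np.arange(4.0, 7.0, 0.05), wid_grid=np.arange(0.3, 1.2, 0.05))
print(round(lat, 2))  # close to the true latency of 5.6 ms
```

    The residual fitting error after such an adjustment also gives a natural quality score: a noisy or absent response fits the synthetic peak poorly.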

  8. Automatic Assessment of Pathological Voice Quality Using Higher-Order Statistics in the LPC Residual Domain

    NASA Astrophysics Data System (ADS)

    Lee, Ji Yeoun; Hahn, Minsoo

    2010-12-01

    A preprocessing scheme based on the linear prediction coefficient (LPC) residual is applied to higher-order statistics (HOSs) for automatic assessment of overall pathological voice quality. The normalized skewness and kurtosis are estimated from the LPC residual and show statistically meaningful distributions that characterize pathological voice quality. 83 voice samples of sustained vowel /a/ phonation are used in this study, independently assessed by a speech and language therapist (SALT) according to the grade of severity of dysphonia on the GRBAS scale. These are used to train and test a classification and regression tree (CART). The best result is obtained using an optimal decision tree implemented with a combination of the normalized skewness and kurtosis, with an accuracy of 92.9%. It is concluded that the method can be used as an assessment tool, providing a valuable aid to the SALT during clinical evaluation of overall pathological voice quality.
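    The two features described above are straightforward to compute: obtain the LPC residual (here via the Yule-Walker equations), then take its normalized third and fourth moments. A minimal numpy sketch on a synthetic "voiced" signal (the LPC order and the paper's exact normalization are assumptions; a real pipeline would work on recorded /a/ phonations):

```python
import numpy as np

def lpc_residual(x, order=8):
    """LPC residual via the Yule-Walker equations (autocorrelation method)."""
    x = np.asarray(x, dtype=float)
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])       # predictor coefficients
    pred = np.zeros_like(x)
    for k in range(1, order + 1):
        pred[k:] += a[k - 1] * x[:-k]
    return (x - pred)[order:]                    # drop warm-up samples

def normalized_skewness(e):
    e = e - e.mean()
    return (e ** 3).mean() / (e ** 2).mean() ** 1.5

def normalized_kurtosis(e):
    e = e - e.mean()
    return (e ** 4).mean() / (e ** 2).mean() ** 2   # 3.0 for a Gaussian

rng = np.random.default_rng(0)
n = np.arange(5_000)
voiced = np.sin(2 * np.pi * 0.01 * n) + 0.05 * rng.normal(size=n.size)
e = lpc_residual(voiced)
print(round(normalized_skewness(e), 2), round(normalized_kurtosis(e), 2))
# near 0 and 3: the residual of this toy signal is close to Gaussian
```

    For pathological voices the residual departs from Gaussianity, which is what makes these two statistics discriminative.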

  9. Particle quality assessment and sorting for automatic and semiautomatic particle-picking techniques.

    PubMed

    Vargas, J; Abrishami, V; Marabini, R; de la Rosa-Trevín, J M; Zaldivar, A; Carazo, J M; Sorzano, C O S

    2013-09-01

    Three-dimensional reconstruction of biological specimens using electron microscopy by single-particle methodologies requires the identification and extraction of the imaged particles from the acquired micrographs. Automatic and semiautomatic particle selection approaches can localize these particles, minimizing user interaction, but at the cost of selecting a non-negligible number of incorrect particles, which can corrupt the final three-dimensional reconstruction. In this work, we present a novel particle quality assessment and sorting method that can separate most erroneously picked particles from correct ones. The proposed method is based on multivariate statistical analysis of a particle set that has been picked previously using any automatic or manual approach. The new method uses different sets of particle descriptors, based on morphology, histograms, and signal-to-noise analysis. We have tested the proposed algorithm with experimental data, obtaining very satisfactory results. The algorithm is freely available as part of the Xmipp 3.0 package [http://xmipp.cnb.csic.es].
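    One simple instance of multivariate sorting of this kind is to score each particle's descriptor vector by its Mahalanobis distance to the set mean: junk picks whose descriptors lie far from the bulk get high scores. The sketch below is an assumed illustration with synthetic descriptors, not the Xmipp implementation:

```python
import numpy as np

def mahalanobis_scores(F):
    """Squared Mahalanobis distance of each row (one particle's
    descriptor vector) to the set mean; large values flag outliers."""
    F = np.asarray(F, dtype=float)
    mu = F.mean(axis=0)
    inv = np.linalg.inv(np.cov(F, rowvar=False))
    d = F - mu
    return np.einsum("ij,jk,ik->i", d, inv, d)

rng = np.random.default_rng(0)
good = rng.normal(0, 1, (500, 4))   # descriptors of correct picks
bad = rng.normal(5, 1, (10, 4))     # a few junk picks, far from the bulk
scores = mahalanobis_scores(np.vstack([good, bad]))
worst = np.argsort(scores)[-10:]    # the 10 highest-scoring particles
print(sorted(worst))                # the junk picks (indices 500..509)
```

    Ranking by such a score lets the user discard the tail of the distribution or review only the most suspicious picks.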

  10. Groupwise conditional random forests for automatic shape classification and contour quality assessment in radiotherapy planning.

    PubMed

    McIntosh, Chris; Svistoun, Igor; Purdie, Thomas G

    2013-06-01

    Radiation therapy is used to treat cancer patients around the world. High quality treatment plans maximally radiate the targets while minimally radiating healthy organs at risk. In order to judge plan quality and safety, segmentations of the targets and organs at risk are created, and the amount of radiation that will be delivered to each structure is estimated prior to treatment. If the targets or organs at risk are mislabelled, or the segmentations are of poor quality, the safety of the radiation doses will be erroneously reviewed and an unsafe plan could proceed. We propose a technique to automatically label groups of segmentations of different structures from a radiation therapy plan for the joint purposes of providing quality assurance and data mining. Given one or more segmentations and an associated image we seek to assign medically meaningful labels to each segmentation and report the confidence of that label. Our method uses random forests to learn joint distributions over the training features, and then exploits a set of learned potential group configurations to build a conditional random field (CRF) that ensures the assignment of labels is consistent across the group of segmentations. The CRF is then solved via a constrained assignment problem. We validate our method on 1574 plans, consisting of 17,579 segmentations, demonstrating an overall classification accuracy of 91.58%. Our results also demonstrate the stability of random forests with respect to tree depth and the number of splitting variables in large data sets.

  11. Assessing the impact of graphical quality on automatic text recognition in digital maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang

    2016-08-01

    Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With the advancement in computational power and algorithm design, map processing systems have been considerably improved over the last decade. However, the fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results would not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system, accordingly. These issues could slow down the further advancement of map processing techniques as such unsuccessful attempts create a discouraged user community, and less sophisticated tools would be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (that can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.

  12. Automatic Programming Assessment.

    ERIC Educational Resources Information Center

    Hung, Sheung-Lun; And Others

    1993-01-01

    Discusses software metrics and describes a study of graduate students in Hong Kong that evaluated the relevance of using software metrics as a means of assessing students' performance in programming. The use of four basic metrics to measure programming skill, complexity, programming style, and programming efficiency using Pascal is examined. (13…

  13. Image quality and automatic color equalization

    NASA Astrophysics Data System (ADS)

    Chambah, M.; Rizzi, A.; Saint Jean, C.

    2007-01-01

    In the professional movie field, image quality is mainly judged visually: experts and technicians judge and determine the quality of film images during the calibration (post-production) process. As a consequence, the quality of a restored movie is also estimated subjectively by experts [26,27]. On the other hand, objective quality metrics do not necessarily correlate well with perceived quality [28]. Moreover, some measures assume that there exists a reference in the form of an "original" to compare to, which prevents their use in the digital restoration field, where often there is no reference available. That is why subjective evaluation has been the most used and most efficient approach up to now. But subjective assessment is expensive and time consuming, and hence does not meet the economic requirements of the field [29,25]. Thus, reliable automatic methods for visual quality assessment are needed in the field of digital film restoration. Ideally, a quality assessment system would perceive and measure image or video impairments just like a human being. The ACE method, for Automatic Color Equalization [1,2], is an algorithm for unsupervised enhancement of digital images. Like our vision system, ACE is able to adapt to widely varying lighting conditions and to extract visual information from the environment efficaciously. We present in this paper the use of ACE as the basis of a reference-free image quality metric. ACE output is an estimate of our visual perception of a scene. The assumption, tested in other papers [3,4], is that ACE, by enhancing images toward the way our vision system would perceive them, increases their overall perceived quality. The basic idea proposed in this paper is that ACE output can differ from the input more or less according to the visual quality of the input image. In other words, an image appears good if it is near the visual appearance we (estimate to) have of it; conversely, bad-quality images will need "more filtering".
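    The structure of such a reference-free metric is simple: score an image by how much an enhancer changes it. ACE itself is not reproduced below; a global contrast stretch stands in for the enhancer purely to illustrate the idea that an already-good image needs little "filtering":

```python
import numpy as np

def stretch(img):
    """Stand-in enhancer (global contrast stretch). The paper uses ACE,
    a spatially local colour-equalization model, which is far richer."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else img

def no_reference_score(img):
    """Distance between an image and its enhanced version: images the
    enhancer barely changes are presumed already good (lower is better)."""
    return float(np.abs(stretch(img) - img).mean())

rng = np.random.default_rng(0)
good = rng.uniform(0.0, 1.0, (64, 64))   # already uses the full range
faded = 0.4 + 0.2 * good                 # low-contrast copy of the same image
print(no_reference_score(good) < no_reference_score(faded))  # True
```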

  14. A conceptual study of automatic and semi-automatic quality assurance techniques for ground image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.

  15. An automatic LCD panel quality detection system

    NASA Astrophysics Data System (ADS)

    Guo, Bianfang; Hou, Wenguang; Ding, Mingyue

    2009-10-01

    Automatic detection using computer vision is expanding rapidly along with the development of image processing technology. In this paper, we developed a rapid LCD quality detection system for automobile instrument panel production, which has a wide range of uses and good stability. Our automatic detection system consists of four parts: a panel fixture, a signal generator module, an image acquisition module, and image processing software. Experiments demonstrated that our system is feasible, efficient, and fast compared to manual detection.

  16. Graphonomics, Automaticity and Handwriting Assessment

    ERIC Educational Resources Information Center

    Tucha, Oliver; Tucha, Lara; Lange, Klaus W.

    2008-01-01

    A recent review of handwriting research in "Literacy" concluded that current curricula of handwriting education focus too much on writing style and neatness and neglect the aspect of handwriting automaticity. This conclusion is supported by evidence in the field of graphonomic research, where a range of experiments have been used to investigate…

  17. Back-and-Forth Methodology for Objective Voice Quality Assessment: From/to Expert Knowledge to/from Automatic Classification of Dysphonia

    NASA Astrophysics Data System (ADS)

    Fredouille, Corinne; Pouchoulin, Gilles; Ghio, Alain; Revis, Joana; Bonastre, Jean-François; Giovanni, Antoine

    2009-12-01

    This paper addresses voice disorder assessment. It proposes an original back-and-forth methodology involving an automatic classification system as well as the knowledge of human experts (machine learning experts, phoneticians, and pathologists). The goal of this methodology is to bring a better understanding of the acoustic phenomena related to dysphonia. The automatic system was validated on a dysphonic corpus (80 female voices) rated according to the GRBAS perceptual scale by an expert jury. First, focusing on the frequency domain, the classification system showed the relevance of the 0-3000 Hz frequency band for the classification task based on the GRBAS scale. Later, an automatic phonemic analysis underlined the significance of consonants, and more surprisingly of unvoiced consonants, for the same classification task. Submitted to the human experts, these observations led to a manual analysis of unvoiced plosives, which highlighted a lengthening of voice onset time (VOT) with dysphonia severity, validated by a preliminary statistical analysis.

  18. The SIETTE Automatic Assessment Environment

    ERIC Educational Resources Information Center

    Conejo, Ricardo; Guzmán, Eduardo; Trella, Monica

    2016-01-01

    This article describes the evolution and current state of the domain-independent Siette assessment environment. Siette supports different assessment methods--including classical test theory, item response theory, and computer adaptive testing--and integrates them with multidimensional student models used by intelligent educational systems.…

  19. Automatic Assessment of 3D Modeling Exams

    ERIC Educational Resources Information Center

    Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.

    2012-01-01

    Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…

  20. Automatization of Student Assessment Using Multimedia Technology.

    ERIC Educational Resources Information Center

    Taniar, David; Rahayu, Wenny

    Most use of multimedia technology in teaching and learning to date has emphasized the teaching aspect only. An application of multimedia in examinations has been neglected. This paper addresses how multimedia technology can be applied to the automatization of assessment, by proposing a prototype of a multimedia question bank, which is able to…

  1. Automatic quality of life prediction using electronic medical records.

    PubMed

    Pakhomov, Sergeui; Shah, Nilay; Hanson, Penny; Balasubramaniam, Saranya; Smith, Steven A; Smith, Steven Allan

    2008-11-06

    Health-related quality of life (HRQOL) is an important variable used for prognosis and measuring outcomes in clinical studies and for quality improvement. We explore the use of a general-purpose natural language processing system, MetaMap, in combination with support vector machines (SVM) for predicting patient responses on standardized HRQOL assessment instruments from the text of physicians' notes. We surveyed 669 patients in the Mayo Clinic diabetes registry using two instruments designed to assess functioning: EuroQoL5D and SF36/SD6. Clinical notes for these patients were represented as sets of medical concepts using MetaMap. SVM classifiers were trained using various feature selection strategies. The best concordance between the HRQOL instruments and automatic classification was achieved along the pain dimension (positive agreement .76, negative agreement .78, kappa .54) using MetaMap-derived features. We conclude that clinicians' notes may be used to develop a surrogate measure of patients' HRQOL status.

  2. Self-assessing target with automatic feedback

    DOEpatents

    Larkin, Stephen W.; Kramer, Robert L.

    2004-03-02

    A self-assessing target with four quadrants and a method of use thereof. Each quadrant contains possible causes for why shots are going into that particular quadrant rather than the center mass of the target. Each possible cause is followed by a solution intended to help the marksman correct the problem causing shots in that particular area. In addition, the self-assessing target contains possible causes of general shooting errors and solutions to them. The automatic feedback, with instant suggestions and corrections, enables shooters to improve their marksmanship.

  3. Automatic measuring of quality criteria for heart valves

    NASA Astrophysics Data System (ADS)

    Condurache, Alexandru Paul; Hahn, Tobias; Hofmann, Ulrich G.; Scharfschwerdt, Michael; Misfeld, Martin; Aach, Til

    2007-03-01

    Patients suffering from a heart valve deficiency are often treated by replacing the valve with an artificial or biological implant. In the case of biological implants, the use of porcine heart valves is common. Quality assessment and inspection methods are mandatory to supply patients (and also medical research) with only the best such xenograft implants, thus reducing the number of follow-up surgeries to replace worn-out valves. We describe an approach for automatic in-vitro evaluation of prosthetic heart valves in an artificial circulation system. We show how to measure the orifice area during a heart cycle to obtain an orifice curve. Different quality parameters are then estimated from such curves.

  4. Assessing facial wrinkles: automatic detection and quantification

    NASA Astrophysics Data System (ADS)

    Cula, Gabriela O.; Bargo, Paulo R.; Kollias, Nikiforos

    2009-02-01

    Nowadays, documenting the face appearance through imaging is prevalent in skin research, therefore detection and quantitative assessment of the degree of facial wrinkling is a useful tool for establishing an objective baseline and for communicating benefits to facial appearance due to cosmetic procedures or product applications. In this work, an algorithm for automatic detection of facial wrinkles is developed, based on estimating the orientation and the frequency of elongated features apparent on faces. By over-filtering the skin texture image with finely tuned oriented Gabor filters, an enhanced skin image is created. The wrinkles are detected by adaptively thresholding the enhanced image, and the degree of wrinkling is estimated based on the magnitude of the filter responses. The algorithm is tested against a clinically scored set of images of periorbital lines of different severity and we find that the proposed computational assessment correlates well with the corresponding clinical scores.
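    The Gabor-filter-plus-threshold pipeline described above can be sketched compactly: build even Gabor kernels for a bank of orientations, take the maximum response per pixel, and summarize the strongest responses as a wrinkling score. The kernel parameters and the top-percentile scoring below are assumed for illustration; the paper's finely tuned filter bank and clinical-score correlation are not reproduced:

```python
import numpy as np

def gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0):
    """Even (cosine) Gabor kernel tuned to ridges of a given
    orientation and spatial frequency."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return g * np.cos(2 * np.pi * xr / wavelength)

def fft_convolve(img, k):
    """Circular convolution via FFT, with the kernel centred at the origin."""
    kp = np.zeros_like(img)
    kp[:k.shape[0], :k.shape[1]] = k
    kp = np.roll(kp, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kp)))

def wrinkle_score(img, thetas, top_frac=0.05):
    """Max Gabor response over orientations per pixel; the mean of the
    strongest responses is a crude degree-of-wrinkling estimate."""
    mags = [np.abs(fft_convolve(img, gabor_kernel(theta=t))) for t in thetas]
    mag = np.max(mags, axis=0)
    k = max(1, int(top_frac * mag.size))
    return float(np.sort(mag.ravel())[-k:].mean())

h = w = 64
yy, xx = np.mgrid[:h, :w]
wrinkly = np.cos(2 * np.pi * yy / 8.0)   # horizontal ridges, period 8 px
smooth = np.zeros((h, w))
thetas = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
print(wrinkle_score(wrinkly, thetas) > wrinkle_score(smooth, thetas))  # True
```

    Thresholding `mag` adaptively (e.g. at a percentile) yields the detected wrinkle map itself rather than a single score.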

  5. Defining and Assessing Quality.

    ERIC Educational Resources Information Center

    Fincher, Cameron, Ed.

    The seven papers in this monograph focus on defining and assessing quality. The papers are: (1) "Reflections on Design Ideals" (E. Grady Bogue), which addresses some "governing ideals" of collegiate quality; (2) "Between a Rock and a Hard Place: Investment and Quality in Higher Education" (Sven Groennings), which sees the competitive quality of…

  6. Automatic Assessment of Socioeconomic Impact on Cardiac Rehabilitation

    PubMed Central

    Calvo, Mireia; Subirats, Laia; Ceccaroni, Luigi; Maroto, José María; de Pablo, Carmen; Miralles, Felip

    2013-01-01

    Disability-Adjusted Life Years (DALYs) and Quality-Adjusted Life Years (QALYs), which capture life expectancy and quality of the remaining life-years, are applied in a new method to measure socioeconomic impacts related to health. A 7-step methodology estimating the impact of health interventions based on DALYs, QALYs and functioning changes is presented. It relates the latter (1) to the EQ-5D-5L questionnaire (2) to automatically calculate the health status before and after the intervention (3). This change of status is represented as a change in quality of life when calculating QALYs gained due to the intervention (4). In order to make an economic assessment, QALYs gained are converted to DALYs averted (5). Then, by inferring the cost/DALY from the cost associated to the disability in terms of DALYs lost (6) and taking into account the cost of the action, cost savings due to the intervention are calculated (7) as an objective measure of socioeconomic impact. The methodology is implemented in Java. Cases within the framework of cardiac rehabilitation processes are analyzed and the calculations are based on 200 patients who underwent different cardiac-rehabilitation processes. Results show that these interventions result, on average, in a gain in QALYs of 0.6 and a cost savings of 8,000 €. PMID:24284349
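    The arithmetic behind steps (4)-(7) above is simple enough to show directly. All figures below are illustrative only, chosen to reproduce the abstract's average result (a gain of 0.6 QALYs and savings of 8,000 €); the utility weights, remaining life-years, and cost per DALY are hypothetical, and equating one QALY gained with one DALY averted is the simplification the method describes:

```python
def qaly_gained(u_before, u_after, years):
    """QALYs gained: change in utility weight times remaining life-years."""
    return (u_after - u_before) * years

def cost_savings(qalys, cost_per_daly, intervention_cost):
    """Treat each QALY gained as one DALY averted, value it at the cost
    per DALY, and subtract the cost of the intervention itself."""
    dalys_averted = qalys
    return dalys_averted * cost_per_daly - intervention_cost

# Hypothetical cardiac-rehabilitation case (numbers illustrative only):
q = qaly_gained(u_before=0.62, u_after=0.68, years=10)
print(round(q, 2))                                                     # 0.6
print(round(cost_savings(q, cost_per_daly=20_000,
                         intervention_cost=4_000), 2))                 # 8000.0
```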

  7. [Quality assessment in surgery].

    PubMed

    Espinoza G, Ricardo; Espinoza G, Juan Pablo

    2016-06-01

    This paper deals with quality from the perspective of structure, processes, and indicators in surgery. In this specialty, there is a close relationship between effectiveness and quality. We review the definition and classification of surgical complications as an objective means of assessing quality. The great diversity of definitions and risk assessments of surgical complications hampers comparisons between surgical centers, as well as the evaluation of a single center over time. We discuss the different factors associated with surgical risk and some of the predictive systems for complications and mortality. At present, standardized definitions and comparisons are carried out correcting for risk factors. Thus, indicators of mortality, complications, hospitalization length, postoperative quality of life, and costs become comparable between different groups. The volume of procedures of a given center or surgeon as a quality indicator is emphasized. PMID:27598495

  8. Quality Assessment in Oncology

    SciTech Connect

    Albert, Jeffrey M.; Das, Prajnan

    2012-07-01

    The movement to improve healthcare quality has led to a need for carefully designed quality indicators that accurately reflect the quality of care. Many different measures have been proposed and continue to be developed by governmental agencies and accrediting bodies. However, given the inherent differences in the delivery of care among medical specialties, the same indicators will not be valid across all of them. Specifically, oncology is a field in which it can be difficult to develop quality indicators, because the effectiveness of an oncologic intervention is often not immediately apparent, and the multidisciplinary nature of the field necessarily involves many different specialties. Existing and emerging comparative effectiveness data are helping to guide evidence-based practice, and the increasing availability of these data provides the opportunity to identify key structure and process measures that predict for quality outcomes. The increasing emphasis on quality and efficiency will continue to compel the medical profession to identify appropriate quality measures to facilitate quality improvement efforts and to guide accreditation, credentialing, and reimbursement. Given the wide-reaching implications of quality metrics, it is essential that they be developed and implemented with scientific rigor. The aims of the present report were to review the current state of quality assessment in oncology, identify existing indicators with the best evidence to support their implementation, and propose a framework for identifying and refining measures most indicative of true quality in oncologic care.

  9. Automatic processes in aggression: Conceptual and assessment issues.

    PubMed

    Bluemke, Matthias; Teige-Mocigemba, Sarah

    2015-01-01

    This editorial to the special section "Automatic Processes in Aggression: Conceptual and Assessment Issues" introduces major research lines, all of which culminate in recent advances in the measurement of automatic components of aggressive behavior. Researchers across almost all psychological disciplines have increasingly stressed the importance of automatic components for a comprehensive psychological understanding of human behavior. This is reflected in current dual-process theories, according to which both controlled processes and rather automatic processes elicit behavior in a synergistic or antagonistic way. As a consequence, complementing self-reports (assumed to assess predominantly controlled processes) with implicit measures (assumed to assess predominantly automatic processes) has become common practice in various domains. We familiarize the reader with the three contributions that illuminate how such a distinction can further our understanding of human aggression. At the same time, it becomes evident that method-oriented researchers still have a long way to go before we can fully comprehend how best to measure automatic processes in aggression. We see the present special section as an invigorating call to contribute to this endeavor. Aggr. Behav. 41:44-50 2015. © 2014 Wiley Periodicals, Inc.

  10. An automatic method for CASP9 free modeling structure prediction assessment

    PubMed Central

    Cong, Qian; Kinch, Lisa N.; Pei, Jimin; Shi, Shuoyong; Grishin, Vyacheslav N.; Li, Wenlin; Grishin, Nick V.

    2011-01-01

    Motivation: Manual inspection has been applied to, and is well accepted for, assessing Critical Assessment of protein Structure Prediction (CASP) free modeling (FM) category predictions over the years. Such manual assessment requires expertise and a significant time investment, yet is subjective and unable to differentiate models of similar quality. It is beneficial to incorporate the ideas behind manual inspection into an automatic scoring system, which could provide objective and reproducible assessment of structure models. Results: Inspired by our experience in the CASP9 FM category assessment, we developed an automatic, superimposition-independent method named Quality Control Score (QCS) for structure prediction assessment. QCS captures both global and local structural features, with emphasis on global topology. We applied this method to all FM targets from CASP9, and overall the results showed the best agreement with Manual Inspection Scores among automatic prediction assessment methods previously applied in CASPs, such as Global Distance Test Total Score (GDT_TS) and Contact Score (CS). As one of the important components guiding our assessment of CASP9 FM category predictions, this method correlates well with other scoring methods and yet is able to reveal good-quality models that are missed by GDT_TS. Availability: The script for QCS calculation is available at http://prodata.swmed.edu/QCS/. Contact: grishin@chop.swmed.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21994223
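The abstract does not give QCS's exact formulation, but the core idea behind superimposition-independent comparison can be illustrated by comparing inter-residue distance matrices of a model and the native structure; the function names and the tolerance value below are assumptions for illustration only:

```python
import numpy as np

def distance_matrix(coords):
    """Pairwise distance matrix for an (N, 3) C-alpha coordinate array."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def topology_score(model, native, tol=4.0):
    """Fraction of residue pairs whose model distance lies within `tol`
    angstroms of the native distance. No superimposition is needed because
    distance matrices are invariant to rotation and translation."""
    dm, dn = distance_matrix(model), distance_matrix(native)
    iu = np.triu_indices(len(model), k=1)   # each pair counted once
    return float((np.abs(dm[iu] - dn[iu]) <= tol).mean())
```

A translated or rotated copy of the native structure scores 1.0, which is exactly the property that removes the superimposition step.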

  11. [Quality assessment in anesthesia].

    PubMed

    Kupperwasser, B

    1996-01-01

    Quality assessment (assurance/improvement) is the set of methods used to measure and improve the delivered care and the department's performance against pre-established criteria or standards. The four stages of the self-maintained quality assessment cycle are: problem identification, problem analysis, problem correction and evaluation of corrective actions. Quality assessment is a measurable entity for which it is necessary to define and calibrate measurement parameters (indicators) from data gathered in the hospital anaesthesia environment. Problem identification comes from the accumulation of indicators. There are four types of quality indicators: structure, process, outcome and sentinel indicators. The latter signal a quality defect, are independent of outcomes, are easier to analyse by statistical methods and are closely related to processes and the main targets of quality improvement. The three types of methods to analyse the problems (indicators) are: peer review, quantitative methods and risk management techniques. Peer review is performed by qualified anaesthesiologists. To improve its validity, the review process should be made explicit and conclusions based on standards of practice and literature references. The quantitative methods are statistical analyses applied to the collected data and presented in a graphic format (histogram, Pareto diagram, control charts). The risk management techniques include: a) critical incident analysis, which establishes an objective relationship between a 'critical' event and the associated human behaviours; b) system accident analysis, based on the fact that accidents continue to occur despite safety systems and sophisticated technologies, which examines all the process components leading to the unpredictable outcome, not just the human factors; c) cause-effect diagrams, which facilitate problem analysis by reducing its causes to four fundamental components (persons, regulations, equipment, process).
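One of the quantitative methods named in the abstract, the Pareto diagram, ranks problem indicators by frequency with cumulative percentages; a minimal sketch (the cause categories are illustrative, not from the paper):

```python
def pareto_rank(cause_counts):
    """Sort incident causes by frequency and attach cumulative percentages,
    the tabular form underlying a Pareto diagram."""
    total = sum(cause_counts.values())
    ranked = sorted(cause_counts.items(), key=lambda kv: kv[1], reverse=True)
    out, cum = [], 0.0
    for cause, n in ranked:
        cum += 100.0 * n / total
        out.append((cause, n, round(cum, 1)))
    return out
```

The top rows of the resulting table identify the "vital few" causes that quality improvement efforts should address first.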

  12. On Automatic Assessment and Conceptual Understanding

    ERIC Educational Resources Information Center

    Rasila, Antti; Malinen, Jarmo; Tiitu, Hannu

    2015-01-01

    We consider two complementary aspects of mathematical skills, i.e. "procedural fluency" and "conceptual understanding," from a point of view that is related to modern e-learning environments and computer-based assessment. Pedagogical background of teaching mathematics is discussed, and it is proposed that the traditional book…

  13. Automatically Assessing Graph-Based Diagrams

    ERIC Educational Resources Information Center

    Thomas, Pete; Smith, Neil; Waugh, Kevin

    2008-01-01

    To date there has been very little work on the machine understanding of imprecise diagrams, such as diagrams drawn by students in response to assessment questions. Imprecise diagrams exhibit faults such as missing, extraneous and incorrectly formed elements. The semantics of imprecise diagrams are difficult to determine. While there have been…

  14. Automatic Summary Assessment for Intelligent Tutoring Systems

    ERIC Educational Resources Information Center

    He, Yulan; Hui, Siu Cheung; Quan, Tho Thanh

    2009-01-01

    Summary writing is an important part of many English Language Examinations. As grading students' summary writings is a very time-consuming task, computer-assisted assessment will help teachers carry out the grading more effectively. Several techniques such as latent semantic analysis (LSA), n-gram co-occurrence and BLEU have been proposed to…

  15. On the Use of Resubmissions in Automatic Assessment Systems

    ERIC Educational Resources Information Center

    Karavirta, Ville; Korhonen, Ari; Malmi, Lauri

    2006-01-01

    Automatic assessment systems generally support immediate grading and response on learners' submissions. They also allow learners to consider the feedback, revise, and resubmit their solutions. Several strategies exist to implement the resubmission policy. The ultimate goal, however, is to improve the learning outcomes, and thus the strategies…

  16. Investigating Prompt Difficulty in an Automatically Scored Speaking Performance Assessment

    ERIC Educational Resources Information Center

    Cox, Troy L.

    2013-01-01

    Speaking assessments for second language learners have traditionally been expensive to administer because of the cost of rating the speech samples. To reduce the cost, many researchers are investigating the potential of using automatic speech recognition (ASR) as a means to score examinee responses to open-ended prompts. This study examined the…

  17. Automatic personality assessment through social media language.

    PubMed

    Park, Gregory; Schwartz, H Andrew; Eichstaedt, Johannes C; Kern, Margaret L; Kosinski, Michal; Stillwell, David J; Ungar, Lyle H; Seligman, Martin E P

    2015-06-01

    Language use is a psychologically rich, stable individual difference with well-established correlations to personality. We describe a method for assessing personality using an open-vocabulary analysis of language from social media. We compiled the written language from 66,732 Facebook users and their questionnaire-based self-reported Big Five personality traits, and then we built a predictive model of personality based on their language. We used this model to predict the 5 personality factors in a separate sample of 4,824 Facebook users, examining (a) convergence with self-reports of personality at the domain- and facet-level; (b) discriminant validity between predictions of distinct traits; (c) agreement with informant reports of personality; (d) patterns of correlations with external criteria (e.g., number of friends, political attitudes, impulsiveness); and (e) test-retest reliability over 6-month intervals. Results indicated that language-based assessments can constitute valid personality measures: they agreed with self-reports and informant reports of personality, added incremental validity over informant reports, adequately discriminated between traits, exhibited patterns of correlations with external criteria similar to those found with self-reported personality, and were stable over 6-month intervals. Analysis of predictive language can provide rich portraits of the mental life associated with traits. This approach can complement and extend traditional methods, providing researchers with an additional measure that can quickly and cheaply assess large groups of participants with minimal burden.
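The study's open-vocabulary model is far larger, but the core step of fitting a linear predictive model of a trait from language features can be sketched as follows; the toy data, feature choice and regularization strength are assumptions, not values from the study:

```python
import numpy as np

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^(-1) X^T y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

# Toy corpus: rows = users, columns = relative frequencies of two word groups
X = np.array([[0.8, 0.1],
              [0.6, 0.2],
              [0.2, 0.7],
              [0.1, 0.9]])
y = np.array([4.5, 4.0, 2.0, 1.5])   # self-reported trait score per user
w = fit_ridge(X, y, alpha=0.1)
pred = X @ w                          # language-based trait predictions
```

Convergent validity in this setup corresponds to a high correlation between `pred` and the questionnaire scores `y` in a held-out sample.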

  19. A framework for automatic information quality ranking of diabetes websites.

    PubMed

    Belen Sağlam, Rahime; Taskaya Temizel, Tugba

    2015-01-01

    Objective: When searching for particular medical information on the internet, the challenge lies in distinguishing the websites that are relevant to the topic and contain accurate information. In this article, we propose a framework that automatically identifies and ranks diabetes websites according to their relevance and information quality based on website content. Design: The proposed framework ranks diabetes websites according to their content quality, relevance and evidence-based medicine. The framework combines information retrieval techniques with a lexical resource based on SentiWordNet, making it possible to work with biased and untrusted websites while, at the same time, ensuring content relevance. Measurement: The evaluation measures used were Pearson correlation, true positives, false positives and accuracy. We tested the framework with a benchmark data set consisting of 55 websites with varying degrees of information quality problems. Results: The proposed framework gives good results, comparable with the non-automated information quality measuring approaches in the literature. The correlation between the results of the proposed automated framework and the ground truth is 0.68 on average (p < 0.001), which is higher than other automated methods proposed in the literature (average r = 0.33).
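The evaluation measure used above, the Pearson correlation between framework rankings and the ground truth, is straightforward to compute; the `combined_score` helper is a hypothetical linear combination of relevance and quality sub-scores, since the abstract does not state the paper's actual combination rule:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def combined_score(relevance, quality, w=0.5):
    """Hypothetical linear blend of relevance and quality sub-scores."""
    return w * relevance + (1 - w) * quality
```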

  20. Towards A Clinical Tool For Automatic Intelligibility Assessment

    PubMed Central

    Berisha, Visar; Utianski, Rene; Liss, Julie

    2014-01-01

    An important, yet under-explored, problem in speech processing is the automatic assessment of intelligibility for pathological speech. In practice, intelligibility assessment is often done through subjective tests administered by speech pathologists; however research has shown that these tests are inconsistent, costly, and exhibit poor reliability. Although some automatic methods for intelligibility assessment for telecommunications exist, research specific to pathological speech has been limited. Here, we propose an algorithm that captures important multi-scale perceptual cues shown to correlate well with intelligibility. Nonlinear classifiers are trained at each time scale and a final intelligibility decision is made using ensemble learning methods from machine learning. Preliminary results indicate a marked improvement in intelligibility assessment over published baseline results. PMID:25004985
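The abstract names ensemble learning without specifying the combiner; a minimal majority-vote sketch over the per-time-scale classifier outputs (the labels are illustrative, not the paper's):

```python
from collections import Counter

def majority_vote(labels):
    """Final ensemble decision from per-time-scale classifier outputs."""
    return Counter(labels).most_common(1)[0][0]
```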

  1. [Automatic milking systems--quality assurance of milk for drinking].

    PubMed

    Redetzky, R; Hamann, J

    2004-07-01

    German consumers trust the safety and quality of milk and milk products. Compared with other animal products, e.g. meat and meat products, their confidence is justified insofar as milk and milk products cause only few foodborne diseases in Germany, although 80 percent of all German cows develop at least one case of mastitis per lactation. For financial reasons, more and more German dairy farmers are forced to pursue time-saving rationalization of their workflow. Therefore, automatic milking systems (AMS) are used increasingly, even though high purchase costs have slowed their adoption. Moreover, AMS do not comply with legal requirements. Thus, an additional regulation, the so-called "catalogue of measures", had to be enacted to ensure the hygienic harmlessness of milk produced by AMS. This is the first time that udder health at the individual cow level has been related to milk quality beyond merely clinical signs. Together with technical innovations for improved health monitoring at cow and herd level, as well as the implementation of prevention-based quality assurance programs, this improvement offers good prospects for producing milk from healthy cows that is not only hygienically harmless but also of normal physiological composition, and therefore of high quality. As the vehicle of the most recent improvements in technology, AMS have the potential to make a crucial contribution to this development.

  2. AUTOMATISM.

    PubMed

    MCCALDON, R J

    1964-10-24

    Individuals can carry out complex activity while in a state of impaired consciousness, a condition termed "automatism". Consciousness must be considered from both an organic and a psychological aspect, because impairment of consciousness may occur in both ways. Automatism may be classified as normal (hypnosis), organic (temporal lobe epilepsy), psychogenic (dissociative fugue) or feigned. Often painstaking clinical investigation is necessary to clarify the diagnosis. There is legal precedent for assuming that all crimes must embody both consciousness and will. Jurists are loath to apply this principle without reservation, as this would necessitate acquittal and release of potentially dangerous individuals. However, with the sole exception of the defence of insanity, there is at present no legislation to prohibit release without further investigation of anyone acquitted of a crime on the grounds of "automatism".

  3. Infrared machine vision system for the automatic detection of olive fruit quality.

    PubMed

    Guzmán, Elena; Baeten, Vincent; Pierna, Juan Antonio Fernández; García-Mesa, José A

    2013-11-15

    External quality is an important factor in the extraction of olive oil and the marketing of olive fruits. The appearance and presence of external damage are factors that influence the quality of the oil extracted and the perception of consumers, determining the level of acceptance prior to purchase in the case of table olives. The aim of this paper is to report on artificial vision techniques developed for the online estimation of olive quality and to assess the effectiveness of these techniques in evaluating quality based on detecting external defects. This method of classifying olives according to the presence of defects is based on an infrared (IR) vision system. Images of defects were acquired using a digital monochrome camera with band-pass filters in the near-infrared (NIR) range. The original images were processed using segmentation algorithms, edge detection and pixel value intensity to classify the whole fruit. The detection of the defect involved a pixel classification procedure based on nonparametric models of the healthy and defective areas of olives. Classification tests were performed on olives to assess the effectiveness of the proposed method. This research showed that the IR vision system is a useful technology for the automatic assessment of olives that has the potential for use in offline inspection and for online sorting for defects and the presence of surface damage, easily distinguishing those that do not meet minimum quality requirements. PMID:24148491
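The pixel classification step is described only as nonparametric modeling of healthy and defective areas; a simplified nearest-mean sketch of the idea (the class means and rejection threshold below are assumptions, not the paper's models):

```python
import numpy as np

def classify_defect_pixels(nir_image, healthy_mean, defect_mean):
    """Label a pixel defective if its NIR intensity is closer to the defect
    class mean than to the healthy class mean (nearest-mean classifier)."""
    img = np.asarray(nir_image, dtype=float)
    return np.abs(img - defect_mean) < np.abs(img - healthy_mean)

def reject_olive(defect_mask, max_defect_fraction=0.05):
    """Flag the fruit when defective pixels exceed a tolerated fraction."""
    return defect_mask.mean() > max_defect_fraction
```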

  5. Portfolio Assessment and Quality Teaching

    ERIC Educational Resources Information Center

    Kim, Youb; Yazdian, Lisa Sensale

    2014-01-01

    Our article focuses on using portfolio assessment to craft quality teaching. Extant research literature on portfolio assessment suggests that the primary purpose of assessment is to serve learning, and portfolio assessments facilitate the process of making linkages among assessment, curriculum, and student learning (Asp, 2000; Bergeron, Wermuth,…

  6. MRI-Guided Target Motion Assessment using Dynamic Automatic Segmentation

    NASA Astrophysics Data System (ADS)

    Saenz, Daniel L.

    Motion significantly impacts the radiotherapy process and represents one of the persisting problems in treatment delivery. In order to improve motion management techniques and implement future image guided radiotherapy tools such as MRI-guidance, automatic segmentation algorithms hold great promise. Such algorithms are attractive due to their direct measurement accuracy, speed, and ability to assess motion trajectories for daily treatment plan modifications. We developed and optimized an automatic segmentation technique to enable target tracking using MR cines, 4D-MRI, and 4D-CT. This algorithm overcomes weaknesses in automatic contouring, such as lack of image contrast, subjectivity, slow speed, and lack of differentiating feature vectors, by the use of morphological processing. The software is enhanced with predictive parameter capabilities and dynamic processing. The 4D-MRI images are acquired by applying a retrospective phase binning approach to radially-acquired MR image projections. The quantification of motion is validated with a motor phantom undergoing a known trajectory in 4D-CT, 4D-MRI, and in MR cines from the ViewRay MR-Guided RT system. In addition, a clinical case study demonstrates the wide-reaching applicability of the software by segmenting lesions in the brain and lung as well as critical structures such as the liver. Auto-segmentation results from MR cines of canines correlate well with manually drawn contours, both in terms of the Dice similarity coefficient and the agreement of extracted motion trajectories.
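The Dice similarity coefficient used above to compare automatic and manual contours has a standard definition, 2|A∩B| / (|A| + |B|), and can be computed directly on binary masks:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks.
    Returns 1.0 for two empty masks by convention."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0
```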

  7. The Southeast Stream Quality Assessment

    USGS Publications Warehouse

    Van Metre, Peter C.; Journey, Celeste A.

    2014-01-01

    In 2014, the U.S. Geological Survey (USGS) National Water-Quality Assessment Program (NAWQA) is assessing stream quality across the Piedmont and southern Appalachian Mountains in the southeastern United States. The goal of the Southeast Stream Quality Assessment (SESQA) is to characterize multiple water-quality factors that are stressors to aquatic life—contaminants, nutrients, sediment, and streamflow alteration—and the relation of these stressors to ecological conditions in streams throughout the region. Findings will provide communities and policymakers with information on which human and environmental factors are the most critical in controlling stream quality and, thus, provide insights about possible approaches to protect or improve stream quality. The SESQA study will be the second regional study by the NAWQA program, and it will be of similar design and scope as the Midwest Stream Quality Assessment conducted in 2013 (Van Metre and others, 2012).

  8. Solar Radiation Empirical Quality Assessment

    1994-03-01

    The SERIQC1 subroutine performs quality assessment of one, two, or three-component solar radiation data (global horizontal, direct normal, and diffuse horizontal) obtained from one-minute to one-hour integrations. Included in the package is the QCFIT tool to derive expected values from historical data, and the SERIQC1 subroutine to assess the quality of measurement data.
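SERIQC1's actual limits are derived from historical data via QCFIT, which the abstract does not detail; the sketch below illustrates only the basic three-component consistency idea, the closure relation GHI = DNI·cos(zenith) + DHI (the function name and tolerance are assumptions):

```python
import math

def closure_check(ghi, dni, dhi, zenith_deg, tol=0.08):
    """Flag a three-component solar record as consistent when the measured
    global horizontal irradiance (GHI) agrees with the component sum
    DNI*cos(zenith) + DHI to within a relative tolerance `tol`."""
    expected = dni * math.cos(math.radians(zenith_deg)) + dhi
    if expected <= 0:          # sun below horizon or degenerate record
        return False
    return abs(ghi - expected) / expected <= tol
```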

  9. A new quality assessment and improvement system for print media

    NASA Astrophysics Data System (ADS)

    Liu, Mohan; Konya, Iuliu; Nandzik, Jan; Flores-Herr, Nicolas; Eickeler, Stefan; Ndjiki-Nya, Patrick

    2012-12-01

    Print media collections of considerable size are held by cultural heritage organizations and will soon be subject to digitization activities. However, technical content quality management in digitization workflows strongly relies on human monitoring. This heavy human intervention is cost intensive and time consuming, which makes automation mandatory. In this article, a new automatic quality assessment and improvement system is proposed. The digitized source image and color reference target are extracted from the raw digitized images by an automatic segmentation process. The target is evaluated by a reference-based algorithm. No-reference quality metrics are applied to the source image. Experimental results are provided to illustrate the performance of the proposed system. We show that it performs well in the extraction step as well as in the quality assessment step compared to the state-of-the-art. The impact of efficient and dedicated quality assessors on the optimization step is extensively documented.
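The abstract does not enumerate which no-reference metrics are applied to the source image; one classic example of such a metric, the variance of the Laplacian as a sharpness estimate, can be sketched as:

```python
import numpy as np

def laplacian_variance(gray):
    """No-reference sharpness estimate: variance of the 4-neighbour Laplacian
    over the image interior. Lower values suggest a blurrier scan."""
    g = np.asarray(gray, dtype=float)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())
```

A uniform image scores zero, while a high-contrast pattern scores high, so the metric ranks scans by edge sharpness without needing a reference image.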

  11. Automatic portion estimation and visual refinement in mobile dietary assessment

    NASA Astrophysics Data System (ADS)

    Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.

    2010-01-01

    As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These "portion volumes" utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach.
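The paper's camera-parameter estimation and model reconstruction are more involved than can be shown here; the sketch below covers only the basic geometry, pinhole-camera scaling plus a geometric-primitive volume, and the function names and the assumption of a known camera distance are hypothetical:

```python
import math

def real_width_cm(width_px, distance_cm, focal_px):
    """Pinhole-camera scale: object width = pixel width * distance / focal
    length, with the focal length expressed in pixels."""
    return width_px * distance_cm / focal_px

def sphere_volume_cm3(diameter_cm):
    """Volume of a food item approximated as a sphere (e.g. a round fruit)."""
    r = diameter_cm / 2.0
    return 4.0 / 3.0 * math.pi * r ** 3
```

Nutrient content would then be extrapolated from the estimated volume via a food-density table, as the abstract describes.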

  13. Assessment of automatic ligand building in ARP/wARP

    PubMed Central

    Evrard, Guillaume X.; Langer, Gerrit G.; Perrakis, Anastassis; Lamzin, Victor S.

    2007-01-01

    The efficiency of the ligand-building module of ARP/wARP version 6.1 has been assessed through extensive tests on a large variety of protein–ligand complexes from the PDB, as available from the Uppsala Electron Density Server. Ligand building in ARP/wARP involves two main steps: automatic identification of the location of the ligand and the actual construction of its atomic model. The first step is most successful for large ligands. The second step, ligand construction, is more powerful with X-ray data at high resolution and ligands of small to medium size. Both steps are successful for ligands with low to moderate atomic displacement parameters. The results highlight the strengths and weaknesses of both the method of ligand building and the large-scale validation procedure and help to identify means of further improvement. PMID:17164533

  14. Quality assessment of urban environment

    NASA Astrophysics Data System (ADS)

    Ovsiannikova, T. Y.; Nikolaenko, M. N.

    2015-01-01

    This paper investigates the applicability of quality management principles to construction products. We propose extending the scope of quality management in construction by transferring its principles to urban systems, i.e. economic systems of a higher level whose qualitative characteristics are substantially defined by the quality of construction products. Buildings and structures form the spatial-material basis of cities and the most important component of the life sphere: the urban environment. The authors justify the need for assessment of urban environment quality as an important factor of social welfare and quality of life in urban areas, and suggest a definition of the term "urban environment". The methodology for assessing urban environment quality is based on an integrated approach that includes systems analysis of all factors and the application of both quantitative assessment methods (calculation of particular and integrated indicators) and qualitative methods (expert estimates and surveys). The authors propose a system of indicators characterizing the quality of the urban environment; these indicators fall into four classes, and the methodology for their definition is shown. The paper presents results of urban environment quality assessment for several Siberian regions and a comparative analysis of these results.
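The abstract does not specify how the particular indicators are aggregated into the integrated indicator; one plausible form, a weighted sum of min-max-normalized indicators, is sketched below (the weights and reference ranges are assumptions):

```python
def integrated_indicator(values, ranges, weights):
    """Weighted aggregate of particular indicators, each min-max normalized
    to [0, 1] over its (lo, hi) reference range. Weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    norm = [(v - lo) / (hi - lo) for v, (lo, hi) in zip(values, ranges)]
    return sum(w * x for w, x in zip(weights, norm))
```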

  15. Quality Assessment in Action.

    ERIC Educational Resources Information Center

    Hawk, Thomas R.

    In 1985, an ad hoc committee was appointed to conduct a comprehensive examination of the educational effectiveness of the Community College of Philadelphia (CCP). The principles governing the assessment emphasized students' educational goals; cognitive and non-cognitive outcomes; differences among subgroups within the student population;…

  16. Automatic graphene transfer system for improved material quality and efficiency.

    PubMed

    Boscá, Alberto; Pedrós, Jorge; Martínez, Javier; Palacios, Tomás; Calle, Fernando

    2016-02-10

    In most applications based on chemical vapor deposition (CVD) graphene, the transfer from the growth to the target substrate is a critical step for the final device performance. Manual procedures are time consuming and depend on handling skills, whereas existing automatic roll-to-roll methods work well for flexible substrates but tend to induce mechanical damage in rigid ones. A new system that automatically transfers CVD graphene to an arbitrary target substrate has been developed. The process is based on the all-fluidic manipulation of the graphene to avoid mechanical damage, strain and contamination, and on the combination of capillary action and electrostatic repulsion between the graphene and its container to ensure a centered sample on top of the target substrate. The improved carrier mobility and yield of the automatically transferred graphene, as compared to that manually transferred, is demonstrated by the optical and electrical characterization of field-effect transistors fabricated on both materials. In particular, 70% higher mobility values, with a 30% decrease in the unintentional doping and a 10% strain reduction are achieved. The system has been developed for lab-scale transfer and proved to be scalable for industrial applications.

  17. Automatic graphene transfer system for improved material quality and efficiency.

    PubMed

    Boscá, Alberto; Pedrós, Jorge; Martínez, Javier; Palacios, Tomás; Calle, Fernando

    2016-01-01

    In most applications based on chemical vapor deposition (CVD) graphene, the transfer from the growth to the target substrate is a critical step for the final device performance. Manual procedures are time consuming and depend on handling skills, whereas existing automatic roll-to-roll methods work well for flexible substrates but tend to induce mechanical damage in rigid ones. A new system that automatically transfers CVD graphene to an arbitrary target substrate has been developed. The process is based on the all-fluidic manipulation of the graphene to avoid mechanical damage, strain and contamination, and on the combination of capillary action and electrostatic repulsion between the graphene and its container to ensure a centered sample on top of the target substrate. The improved carrier mobility and yield of the automatically transferred graphene, as compared to that manually transferred, is demonstrated by the optical and electrical characterization of field-effect transistors fabricated on both materials. In particular, 70% higher mobility values, with a 30% decrease in the unintentional doping and a 10% strain reduction are achieved. The system has been developed for lab-scale transfer and proved to be scalable for industrial applications. PMID:26860260

  18. Automatic graphene transfer system for improved material quality and efficiency

    PubMed Central

    Boscá, Alberto; Pedrós, Jorge; Martínez, Javier; Palacios, Tomás; Calle, Fernando

    2016-01-01

    In most applications based on chemical vapor deposition (CVD) graphene, the transfer from the growth to the target substrate is a critical step for the final device performance. Manual procedures are time consuming and depend on handling skills, whereas existing automatic roll-to-roll methods work well for flexible substrates but tend to induce mechanical damage in rigid ones. A new system that automatically transfers CVD graphene to an arbitrary target substrate has been developed. The process is based on the all-fluidic manipulation of the graphene to avoid mechanical damage, strain and contamination, and on the combination of capillary action and electrostatic repulsion between the graphene and its container to ensure a centered sample on top of the target substrate. The improved carrier mobility and yield of the automatically transferred graphene, as compared to that manually transferred, is demonstrated by the optical and electrical characterization of field-effect transistors fabricated on both materials. In particular, 70% higher mobility values, with a 30% decrease in the unintentional doping and a 10% strain reduction are achieved. The system has been developed for lab-scale transfer and proved to be scalable for industrial applications. PMID:26860260

  19. Automatic graphene transfer system for improved material quality and efficiency

    NASA Astrophysics Data System (ADS)

    Boscá, Alberto; Pedrós, Jorge; Martínez, Javier; Palacios, Tomás; Calle, Fernando

    2016-02-01

    In most applications based on chemical vapor deposition (CVD) graphene, the transfer from the growth to the target substrate is a critical step for the final device performance. Manual procedures are time consuming and depend on handling skills, whereas existing automatic roll-to-roll methods work well for flexible substrates but tend to induce mechanical damage in rigid ones. A new system that automatically transfers CVD graphene to an arbitrary target substrate has been developed. The process is based on the all-fluidic manipulation of the graphene to avoid mechanical damage, strain and contamination, and on the combination of capillary action and electrostatic repulsion between the graphene and its container to ensure a centered sample on top of the target substrate. The improved carrier mobility and yield of the automatically transferred graphene, as compared to that manually transferred, is demonstrated by the optical and electrical characterization of field-effect transistors fabricated on both materials. In particular, 70% higher mobility values, with a 30% decrease in the unintentional doping and a 10% strain reduction are achieved. The system has been developed for lab-scale transfer and proved to be scalable for industrial applications.

  20. The Northeast Stream Quality Assessment

    USGS Publications Warehouse

    Van Metre, Peter C.; Riva-Murray, Karen; Coles, James F.

    2016-04-22

    In 2016, the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) is assessing stream quality in the northeastern United States. The goal of the Northeast Stream Quality Assessment (NESQA) is to assess the quality of streams in the region by characterizing multiple water-quality factors that are stressors to aquatic life and evaluating the relation between these stressors and biological communities. The focus of NESQA in 2016 will be on the effects of urbanization and agriculture on stream quality in all or parts of eight states: Connecticut, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont. Findings will provide the public and policymakers with information about the most critical factors affecting stream quality, thus providing insights about possible approaches to protect the health of streams in the region. The NESQA study will be the fourth regional study conducted as part of NAWQA and will be of similar design and scope to the first three, in the Midwest in 2013, the Southeast in 2014, and the Pacific Northwest in 2015 (http://txpub.usgs.gov/RSQA/).

  1. SU-E-J-155: Automatic Quantitative Decision Making Metric for 4DCT Image Quality

    SciTech Connect

    Kiely, J Blanco; Olszanski, A; Both, S; White, B; Low, D

    2015-06-15

    Purpose: To develop a quantitative decision-making metric for automatically detecting irregular breathing, using a large patient population that received phase-sorted 4DCT. Methods: This study employed two patient cohorts. Cohort#1 contained 256 patients who received a phase-sorted 4DCT. Cohort#2 contained 86 patients who received three weekly phase-sorted 4DCT scans. A previously published technique used a single abdominal surrogate to calculate the ratio of extreme inhalation tidal volume to normal inhalation tidal volume, referred to as the κ metric. Since a single surrogate is standard for phase-sorted 4DCT in radiation oncology clinical practice, tidal volume was not quantified. Without tidal volume, the absolute κ metric could not be determined, so a relative κ (κrel) metric was defined based on the measured surrogate amplitude instead of tidal volume. Receiver operator characteristic (ROC) curves were used to quantitatively determine the optimal cutoff value (jk) and efficiency cutoff value (τk) of κrel to automatically identify irregular breathing that would reduce the image quality of phase-sorted 4DCT. Discriminatory accuracy (area under the ROC curve) of κrel was calculated by a trapezoidal numeric integration technique. Results: The discriminatory accuracy of κrel was found to be 0.746. The key values of jk and τk were calculated to be 1.45 and 1.72, respectively. For values of κrel such that jk≤κrel≤τk, the decision to reacquire the 4DCT would be at the discretion of the physician. This accounted for only 11.9% of the patients in this study. The magnitude of κrel held consistent over 3 weeks for 73% of the patients in cohort#2. Conclusion: The decision-making metric, κrel, was shown to be an accurate classifier of irregular breathing patients in a large patient population. This work provided an automatic quantitative decision-making metric to quickly and accurately assess the extent to which irregular breathing is occurring during phase
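The abstract reports discriminatory accuracy as the area under the ROC curve, computed by trapezoidal numeric integration. A self-contained sketch of that computation is given below; it is a generic ROC/AUC implementation, not the authors' κrel pipeline, and the toy scores and labels are invented for illustration.

```python
# Generic ROC curve and trapezoidal AUC, as one might use to evaluate a
# binary classifier score such as the kappa_rel metric in the abstract.

def roc_auc(scores, labels):
    """Trapezoidal area under the ROC curve for binary labels (1 = positive)."""
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]  # (FPR, TPR) pairs, swept from strictest threshold
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    points.append((1.0, 1.0))
    # trapezoidal integration over FPR
    auc = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        auc += (x1 - x0) * (y0 + y1) / 2.0
    return auc

# Toy example: two positives, two negatives.
print(roc_auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # 0.75
```

An optimal cutoff analogous to jk could then be chosen from the same (FPR, TPR) points, for example by maximizing TPR minus FPR.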

  2. A routine quality assurance test for CT automatic exposure control systems.

    PubMed

    Iball, Gareth R; Moore, Alexis C; Crawford, Elizabeth J

    2016-01-01

    The study purpose was to develop and validate a quality assurance test for CT automatic exposure control (AEC) systems based on a set of nested polymethylmethacrylate CTDI phantoms. The test phantom was created by offsetting the 16 cm head phantom within the 32 cm body annulus, thus creating a three part phantom. This was scanned at all acceptance, routine, and some nonroutine quality assurance visits over a period of 45 months, resulting in 115 separate AEC tests on scanners from four manufacturers. For each scan the longitudinal mA modulation pattern was generated and measurements of image noise were made in two annular regions of interest. The scanner displayed CTDIvol and DLP were also recorded. The impact of a range of AEC configurations on dose and image quality were assessed at acceptance testing. For systems that were tested more than once, the percentage of CTDIvol values exceeding 5%, 10%, and 15% deviation from baseline was 23.4%, 12.6%, and 8.1% respectively. Similarly, for the image noise data, deviations greater than 2%, 5%, and 10% from baseline were 26.5%, 5.9%, and 2%, respectively. The majority of CTDIvol and noise deviations greater than 15% and 5%, respectively, could be explained by incorrect phantom setup or protocol selection. Barring these results, CTDIvol deviations of greater than 15% from baseline were found in 0.9% of tests and noise deviations greater than 5% from baseline were found in 1% of tests. The phantom was shown to be sensitive to changes in AEC setup, including the use of 3D, longitudinal or rotational tube current modulation. This test methodology allows for continuing performance assessment of CT AEC systems, and we recommend that this test should become part of routine CT quality assurance programs. Tolerances of ± 15% for CTDIvol and ± 5% for image noise relative to baseline values should be used. PMID:27455490
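The recommended tolerances (±15% for CTDIvol and ±5% for image noise relative to baseline) lend themselves to a simple automated pass/fail check. The helper below is an illustrative sketch; the function and parameter names are assumptions, not part of the published test.

```python
# Illustrative pass/fail check using the tolerances recommended in the
# abstract: +/-15% for CTDIvol and +/-5% for image noise versus baseline.

def percent_deviation(measured, baseline):
    return 100.0 * (measured - baseline) / baseline

def aec_qa_check(ctdi_vol, ctdi_baseline, noise, noise_baseline,
                 ctdi_tol=15.0, noise_tol=5.0):
    """Return (passed, CTDIvol deviation %, noise deviation %) for one test."""
    d_ctdi = percent_deviation(ctdi_vol, ctdi_baseline)
    d_noise = percent_deviation(noise, noise_baseline)
    passed = abs(d_ctdi) <= ctdi_tol and abs(d_noise) <= noise_tol
    return passed, d_ctdi, d_noise

ok, d_ctdi, d_noise = aec_qa_check(10.8, 10.0, 20.6, 20.0)
print(ok, round(d_ctdi, 1), round(d_noise, 1))
```

In routine use, a failed check would first prompt verification of phantom setup and protocol selection, since the study found those explained most large deviations.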

  3. Data Quality Assessment for Maritime Situation Awareness

    NASA Astrophysics Data System (ADS)

    Iphar, C.; Napoli, A.; Ray, C.

    2015-08-01

    The Automatic Identification System (AIS), initially designed to ensure maritime security through continuous position reports, has progressively been used for many extended objectives. In particular, it supports global monitoring of the maritime domain for various purposes such as safety and security, but also traffic management, logistics and the protection of strategic areas. In this monitoring, data errors, misuse, irregular behaviours at sea, malfeasance mechanisms and bad navigation practices have inevitably emerged, whether through inattentiveness or through voluntary actions intended to circumvent, alter or exploit the system in the interests of offenders. This paper introduces the AIS system and presents vulnerabilities and data quality assessment for decision making in maritime situational awareness cases. The principles of a novel methodological approach for modelling, analysing and detecting these data errors and falsifications are introduced.
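One simple family of data-quality checks on AIS position reports is kinematic consistency: consecutive reports that imply a physically impossible speed indicate an error or falsification. The sketch below illustrates this idea only; it is not the paper's method, and the 50-knot threshold is an arbitrary assumption.

```python
import math

# Minimal kinematic plausibility check for AIS position reports.
# The 50-knot default threshold is an illustrative assumption.

def haversine_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_earth * math.asin(math.sqrt(a))

def implausible_speed(report_a, report_b, max_knots=50.0):
    """Each report is (lat, lon, unix_time). True if the implied speed
    between the two reports exceeds max_knots."""
    (lat1, lon1, t1), (lat2, lon2, t2) = report_a, report_b
    hours = abs(t2 - t1) / 3600.0
    if hours == 0:
        return True  # duplicate timestamp with moved position is suspect
    knots = haversine_km(lat1, lon1, lat2, lon2) / 1.852 / hours
    return knots > max_knots

# One degree of latitude covered in one hour is roughly 60 knots -> flagged.
print(implausible_speed((0.0, 0.0, 0), (1.0, 0.0, 3600)))
```

Real detection systems combine many such rules with behavioural models; this shows only the simplest building block.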

  4. Assessing the performance of a covert automatic target recognition algorithm

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2005-05-01

    Passive radar systems exploit illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. Doing so allows them to operate covertly and inexpensively. Our research seeks to enhance passive radar systems by adding automatic target recognition (ATR) capabilities. In previous papers we proposed conducting ATR by comparing the radar cross section (RCS) of aircraft detected by a passive radar system to the precomputed RCS of aircraft in the target class. To effectively model the low-frequency setting, the comparison is made via a Rician likelihood model. Monte Carlo simulations indicate that the approach is viable. This paper builds on that work by developing a method for quickly assessing the potential performance of the ATR algorithm without using exhaustive Monte Carlo trials. This method exploits the relation between the probability of error in a binary hypothesis test under the Bayesian framework to the Chernoff information. Since the data are well-modeled as Rician, we begin by deriving a closed-form approximation for the Chernoff information between two Rician densities. This leads to an approximation for the probability of error in the classification algorithm that is a function of the number of available measurements. We conclude with an application that would be particularly cumbersome to accomplish via Monte Carlo trials, but that can be quickly addressed using the Chernoff information approach. This application evaluates the length of time that an aircraft must be tracked before the probability of error in the ATR algorithm drops below a desired threshold.

  5. Automatic orbital GTAW welding: Highest quality welds for tomorrow's high-performance systems

    NASA Technical Reports Server (NTRS)

    Henon, B. K.

    1985-01-01

    Automatic orbital gas tungsten arc welding (GTAW), or TIG welding, is certain to play an increasingly prominent role in tomorrow's technology. The welds are of the highest quality, and the repeatability of automatic welding is vastly superior to that of manual welding. Since less heat is applied to the weld during automatic welding than during manual welding, there is less change in the metallurgical properties of the parent material. The possibility of accurate control and the cleanliness of the automatic GTAW process make it highly suitable for welding the more exotic and expensive materials now widely used in the aerospace and hydrospace industries. Titanium, stainless steel, Inconel, and Incoloy, as well as aluminum, can all be welded to the highest quality specifications automatically. Automatic orbital GTAW equipment is available for the fusion butt welding of tube-to-tube joints as well as tube-to-buttweld fittings. The same equipment can also be used for the fusion butt welding of pipe up to 6 inches in diameter with a wall thickness of up to 0.154 inches.

  6. SIMULATING LOCAL DENSE AREAS USING PMMA TO ASSESS AUTOMATIC EXPOSURE CONTROL IN DIGITAL MAMMOGRAPHY.

    PubMed

    Bouwman, R W; Binst, J; Dance, D R; Young, K C; Broeders, M J M; den Heeten, G J; Veldkamp, W J H; Bosmans, H; van Engen, R E

    2016-06-01

    Current digital mammography (DM) X-ray systems are equipped with advanced automatic exposure control (AEC) systems, which determine the exposure factors depending on breast composition. In the supplement of the European guidelines for quality assurance in breast cancer screening and diagnosis, a phantom-based test is included to evaluate the AEC response to local dense areas in terms of signal-to-noise ratio (SNR). This study evaluates the proposed test in terms of SNR and dose for four DM systems. The glandular fraction represented by the local dense area was assessed by analytic calculations. It was found that the proposed test simulates adipose to fully glandular breast compositions in attenuation. The doses associated with the phantoms were found to match well with the patient dose distribution. In conclusion, after some small adaptations, the test is valuable for the assessment of the AEC performance in terms of both SNR and dose. PMID:26977073

  7. Quality assessment for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2014-11-01

    Image quality assessment is an essential value-judgement approach for many applications. Multi- and hyperspectral imaging has more judging essentials than grey-scale or RGB imaging, and its image quality assessment must cover all evaluation factors. This paper presents an integrated spectral imaging quality assessment project, in which spectral-based, radiometric-based and spatial-based statistical behaviour for three hyperspectral imagers is jointly evaluated. The spectral response function is derived from discrete illumination images, and spectral performance is deduced from its FWHM and spectral excursion values. Radiometric response of the different spectral channels, under both on-ground and airborne imaging conditions, is judged by SNR computation based on local RMS extraction and statistics. Spatial response of the spectral imaging instrument is evaluated by MTF computation with the slanted-edge analysis method. This pioneering systematic work in hyperspectral imaging quality assessment, carried out with the help of several leading domestic institutions, is significant for the development of on-ground and in-orbit instrument performance evaluation techniques and also serves as a reference for index demonstration and design optimization in instrument development.
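The abstract mentions SNR computation based on local RMS extraction and statistics. A minimal sketch of that style of estimator follows: split a nominally uniform region into small blocks, compute mean/standard deviation per block, and summarize across blocks. The block size and the use of the median are illustrative assumptions, not the paper's exact procedure.

```python
import math

# Local-statistics SNR estimate: per-block mean over standard deviation on a
# nominally uniform region, summarized by the median block SNR. Block size
# and the median summary are illustrative assumptions.

def block_snr(image, block=4):
    rows, cols = len(image), len(image[0])
    snrs = []
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            vals = [image[r + i][c + j] for i in range(block) for j in range(block)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            if var > 0:
                snrs.append(mean / math.sqrt(var))
    snrs.sort()
    return snrs[len(snrs) // 2]  # median block SNR

# Tiny synthetic 8x8 "flat field": value 100 with a deterministic +/-1 ripple.
image = [[100 + ((r + c) % 2 * 2 - 1) for c in range(8)] for r in range(8)]
print(block_snr(image))
```

Using local blocks rather than the whole region makes the estimate robust to slow illumination gradients across the scene.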

  8. Network design and quality checks in automatic orientation of close-range photogrammetric blocks.

    PubMed

    Dall'Asta, Elisa; Thoeni, Klaus; Santise, Marina; Forlani, Gianfranco; Giacomini, Anna; Roncella, Riccardo

    2015-04-03

    Due to the recent improvements of automatic measurement procedures in photogrammetry, multi-view 3D reconstruction technologies are becoming a favourite survey tool. Rapidly widening structure-from-motion (SfM) software packages offer significantly easier image processing workflows than traditional photogrammetry packages. However, while most orientation and surface reconstruction strategies will almost always succeed in any given task, estimating the quality of the result is, to some extent, still an open issue. An assessment of the precision and reliability of block orientation is necessary and should be included in every processing pipeline. Such a need was clearly felt from the results of close-range photogrammetric surveys of in situ full-scale and laboratory-scale experiments. In order to study the impact of the block control and the camera network design on the block orientation accuracy, a series of Monte Carlo simulations was performed. Two image block configurations were investigated: a single pseudo-normal strip and a circular highly-convergent block. The influence of surveying and data processing choices, such as the number and accuracy of the ground control points, autofocus and camera calibration was investigated. The research highlights the most significant aspects and processes to be taken into account for adequate in situ and laboratory surveys, when modern SfM software packages are used, and evaluates their effect on the quality of the results of the surface reconstruction.

  9. Network Design and Quality Checks in Automatic Orientation of Close-Range Photogrammetric Blocks

    PubMed Central

    Dall’Asta, Elisa; Thoeni, Klaus; Santise, Marina; Forlani, Gianfranco; Giacomini, Anna; Roncella, Riccardo

    2015-01-01

    Due to the recent improvements of automatic measurement procedures in photogrammetry, multi-view 3D reconstruction technologies are becoming a favourite survey tool. Rapidly widening structure-from-motion (SfM) software packages offer significantly easier image processing workflows than traditional photogrammetry packages. However, while most orientation and surface reconstruction strategies will almost always succeed in any given task, estimating the quality of the result is, to some extent, still an open issue. An assessment of the precision and reliability of block orientation is necessary and should be included in every processing pipeline. Such a need was clearly felt from the results of close-range photogrammetric surveys of in situ full-scale and laboratory-scale experiments. In order to study the impact of the block control and the camera network design on the block orientation accuracy, a series of Monte Carlo simulations was performed. Two image block configurations were investigated: a single pseudo-normal strip and a circular highly-convergent block. The influence of surveying and data processing choices, such as the number and accuracy of the ground control points, autofocus and camera calibration was investigated. The research highlights the most significant aspects and processes to be taken into account for adequate in situ and laboratory surveys, when modern SfM software packages are used, and evaluates their effect on the quality of the results of the surface reconstruction. PMID:25855036

  10. Network design and quality checks in automatic orientation of close-range photogrammetric blocks.

    PubMed

    Dall'Asta, Elisa; Thoeni, Klaus; Santise, Marina; Forlani, Gianfranco; Giacomini, Anna; Roncella, Riccardo

    2015-01-01

    Due to the recent improvements of automatic measurement procedures in photogrammetry, multi-view 3D reconstruction technologies are becoming a favourite survey tool. Rapidly widening structure-from-motion (SfM) software packages offer significantly easier image processing workflows than traditional photogrammetry packages. However, while most orientation and surface reconstruction strategies will almost always succeed in any given task, estimating the quality of the result is, to some extent, still an open issue. An assessment of the precision and reliability of block orientation is necessary and should be included in every processing pipeline. Such a need was clearly felt from the results of close-range photogrammetric surveys of in situ full-scale and laboratory-scale experiments. In order to study the impact of the block control and the camera network design on the block orientation accuracy, a series of Monte Carlo simulations was performed. Two image block configurations were investigated: a single pseudo-normal strip and a circular highly-convergent block. The influence of surveying and data processing choices, such as the number and accuracy of the ground control points, autofocus and camera calibration was investigated. The research highlights the most significant aspects and processes to be taken into account for adequate in situ and laboratory surveys, when modern SfM software packages are used, and evaluates their effect on the quality of the results of the surface reconstruction. PMID:25855036

  11. Assessing risks to ecosystem quality

    SciTech Connect

    Barnthouse, L.W.

    1995-12-31

    Ecosystems are not organisms. Because ecosystems do not reproduce, grow old or sick, and die, the term ecosystem health is somewhat misleading and perhaps should not be used. A more useful concept is "ecosystem quality," which denotes a set of desirable ecosystem characteristics defined in terms of species composition, productivity, size/condition of specific populations, or other measurable properties. The desired quality of an ecosystem may be pristine, as in a nature preserve, or highly altered by man, as in a managed forest or navigational waterway. "Sustainable development" implies that human activities that influence ecosystem quality should be managed so that high-quality ecosystems are maintained for future generations. In sustainability-based environmental management, the focus is on maintaining or improving ecosystem quality, not on restricting discharges or requiring particular waste treatment technologies. This approach requires management of chemical impacts to be integrated with management of other sources of stress such as erosion, eutrophication, and direct human exploitation. Environmental scientists must (1) work with decision makers and the public to define ecosystem quality goals, (2) develop corresponding measures of ecosystem quality, (3) diagnose causes for departures from desired states, and (4) recommend appropriate restoration actions, if necessary. Environmental toxicology and chemical risk assessment are necessary for implementing the above framework, but they are clearly not sufficient. This paper reviews the state-of-the science relevant to sustaining the quality of aquatic ecosystems. Using the specific example of a reservoir in eastern Tennessee, the paper attempts to define roles for ecotoxicology and risk assessment in each step of the management process.

  12. Automatic Coding of Short Text Responses via Clustering in Educational Assessment

    ERIC Educational Resources Information Center

    Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank

    2016-01-01

    Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…

  13. The Midwest Stream Quality Assessment

    USGS Publications Warehouse

    ,

    2012-01-01

    In 2013, the U.S. Geological Survey (USGS) National Water-Quality Assessment Program (NAWQA) and USGS Columbia Environmental Research Center (CERC) will be collaborating with the U.S. Environmental Protection Agency (EPA) National Rivers and Streams Assessment (NRSA) to assess stream quality across the Midwestern United States. The sites selected for this study are a subset of the larger NRSA, implemented by the EPA, States and Tribes to sample flowing waters across the United States (http://water.epa.gov/type/rsl/monitoring/riverssurvey/index.cfm). The goals are to characterize water-quality stressors—contaminants, nutrients, and sediment—and ecological conditions in streams throughout the Midwest and to determine the relative effects of these stressors on aquatic organisms in the streams. Findings will contribute useful information for communities and policymakers by identifying which human and environmental factors are the most critical in controlling stream quality. This collaborative study enhances information provided to the public and policymakers and minimizes costs by leveraging and sharing data gathered under existing programs. In the spring and early summer, NAWQA will sample streams weekly for contaminants, nutrients, and sediment. During the same time period, CERC will test sediment and water samples for toxicity, deploy time-integrating samplers, and measure reproductive effects and biomarkers of contaminant exposure in fish or amphibians. NRSA will sample sites once during the summer to assess ecological and habitat conditions in the streams by collecting data on algal, macroinvertebrate, and fish communities and collecting detailed physical-habitat measurements. Study-team members from all three programs will work in collaboration with USGS Water Science Centers and State agencies on study design, execution of sampling and analysis, and reporting.

  14. Assessing Children's Home Language Environments Using Automatic Speech Recognition Technology

    ERIC Educational Resources Information Center

    Greenwood, Charles R.; Thiemann-Bourque, Kathy; Walker, Dale; Buzhardt, Jay; Gilkerson, Jill

    2011-01-01

    The purpose of this research was to replicate and extend some of the findings of Hart and Risley using automatic speech processing instead of human transcription of language samples. The long-term goal of this work is to make the current approach to speech processing possible by researchers and clinicians working on a daily basis with families and…

  15. Automatically Assessing Lexical Sophistication: Indices, Tools, Findings, and Application

    ERIC Educational Resources Information Center

    Kyle, Kristopher; Crossley, Scott A.

    2015-01-01

    This study explores the construct of lexical sophistication and its applications for measuring second language lexical and speaking proficiency. In doing so, the study introduces the Tool for the Automatic Analysis of LExical Sophistication (TAALES), which calculates text scores for 135 classic and newly developed lexical indices related to word…

  16. Assessing the Development of Automaticity in Second Language Word Recognition.

    ERIC Educational Resources Information Center

    Segalowitz, Sidney J.; Segalowitz, Norman S.; Wood, Anthony G.

    1998-01-01

    In a study of development of automaticity in second-language word recognition, 105 English-speakers speaking French performed multiple lexical-decision tasks, and differences in coefficient of variation of lexical decision reaction time were compared cross- sectionally and longitudinally. Results confirm that with extended learning experience, the…

  17. Orion Entry Handling Qualities Assessments

    NASA Technical Reports Server (NTRS)

    Bihari, B.; Tiggers, M.; Strahan, A.; Gonzalez, R.; Sullivan, K.; Stephens, J. P.; Hart, J.; Law, H., III; Bilimoria, K.; Bailey, R.

    2011-01-01

    The Orion Command Module (CM) is a capsule designed to bring crew back from the International Space Station (ISS), the moon and beyond. The atmospheric entry portion of the flight is designed to be flown in autopilot mode in nominal situations. However, the crew may take over manual control in off-nominal situations. In these instances, the spacecraft must meet specific handling qualities criteria. To address these criteria, two separate assessments of the Orion CM's entry Handling Qualities (HQ) were conducted at NASA's Johnson Space Center (JSC) using the Cooper-Harper scale (Cooper & Harper, 1969). These assessments were conducted in the summers of 2008 and 2010 using the Advanced NASA Technology Architecture for Exploration Studies (ANTARES) six-degree-of-freedom, high-fidelity Guidance, Navigation, and Control (GN&C) simulation. This paper addresses the specifics of the handling qualities criteria, the vehicle configuration, the scenarios flown, the simulation background and setup, crew interfaces and displays, piloting techniques, ratings and crew comments, pre- and post-flight briefings, lessons learned, and changes made to improve overall system performance. The data collection tools, methods, data reduction and output reports are also discussed. The objective of the 2008 entry HQ assessment was to evaluate the handling qualities of the CM during a lunar skip return. A lunar skip entry case was selected because it was considered the most demanding of all bank control scenarios. Even though skip entry is not planned to be flown manually, it was hypothesized that if a pilot could fly the harder skip-entry case, they could also fly a simpler loads-managed or ballistic (constant bank rate command) entry scenario. In addition, with the evaluation set up as multiple tasks within the entry case, handling qualities ratings collected in the evaluation could be used to assess other scenarios such as the constant bank angle

  18. Fovea based image quality assessment

    NASA Astrophysics Data System (ADS)

    Guo, Anan; Zhao, Debin; Liu, Shaohui; Cao, Guangyao

    2010-07-01

Humans are the ultimate receivers of the visual information contained in an image, so a reasonable method of image quality assessment (IQA) should follow the properties of the human visual system (HVS). In recent years, IQA methods based on HVS models have been slowly replacing classical schemes such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Structural similarity (SSIM), regarded as one of the most popular HVS-based methods of full-reference IQA, offers clear performance improvements over traditional metrics; however, it does not perform well when the image structure is seriously destroyed or masked by noise. In this paper, a new, efficient fovea-based structural similarity image quality assessment (FSSIM) is proposed. It adaptively enlarges the distortions at attended positions and adjusts the relative importance of the three components in SSIM. FSSIM predicts the quality of an image in three steps. First, it computes the luminance, contrast and structure comparison terms; second, it computes a saliency map by extracting fovea information from the reference image using features of the HVS; third, it pools the three terms according to the processed saliency map. Finally, the widely used LIVE IQA database is used to evaluate the performance of FSSIM. Experimental results indicate that the consistency and relevance between FSSIM and mean opinion score (MOS) are both clearly better than those of SSIM and PSNR.
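The three-step scheme described in this abstract can be sketched as a saliency-weighted pooling of block-wise SSIM terms. This is a minimal illustration, not the authors' implementation; the block size, stabilizing constants, and the plain product of the three terms are assumed values:

```python
import numpy as np

def ssim_terms(x, y, c1=1e-4, c2=9e-4):
    """Luminance, contrast and structure comparison terms over one block."""
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    cov = ((x - mx) * (y - my)).mean()
    l = (2 * mx * my + c1) / (mx**2 + my**2 + c1)
    c = (2 * sx * sy + c2) / (sx**2 + sy**2 + c2)
    s = (cov + c2 / 2) / (sx * sy + c2 / 2)
    return l, c, s

def fssim_like(ref, dist, saliency, block=8):
    """Pool block-wise SSIM terms, weighting each block by its mean saliency."""
    scores, weights = [], []
    h, w = ref.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            l, c, s = ssim_terms(ref[i:i+block, j:j+block],
                                 dist[i:i+block, j:j+block])
            scores.append(l * c * s)
            weights.append(saliency[i:i+block, j:j+block].mean())
    scores, weights = np.array(scores), np.array(weights)
    return float((scores * weights).sum() / (weights.sum() + 1e-12))
```

With a uniform saliency map this reduces to mean pooling; a fovea-weighted map shifts the score toward distortions at attended positions.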

  19. Automatic quality classification of entire electrocardiographic recordings obtained with a novel patch type recorder.

    PubMed

    Saadi, Dorthe B; Hoppe, Karsten; Egstrup, Kenneth; Jennum, Poul; Iversen, Helle K; Jeppesen, Jørgen L; Sorensen, Helge B D

    2014-01-01

    Recently, new patch type electrocardiogram (ECG) recorders have reached the market. These new devices possess a number of advantages compared to the traditional Holter recorders. This forms the basis of questions related to benefits and drawbacks of different ambulatory ECG recording techniques. One of the important questions is the ability to obtain high clinical quality of the recordings during the entire monitoring period. It is thus desirable to be able to obtain an automatic estimate of the global quality of entire ECG recordings. The purpose of this pilot study is therefore to design an algorithm for automatic classification of entire ECG recordings into the groups "noisy" and "clean" recordings. This novel algorithm is based on three features and a simple Bayes classifier. The algorithm was tested on 40 ECG recordings in a five-fold cross validation scheme and it obtained an average accuracy of 90% on the test data.
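The "three features and a simple Bayes classifier" pipeline described above can be sketched with a Gaussian naive Bayes model over per-recording features. The feature values and class-conditional Gaussians here are hypothetical; the paper does not publish its exact feature definitions:

```python
import numpy as np

class SimpleBayesQuality:
    """Gaussian naive Bayes over per-recording quality features (illustrative)."""

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes_ = np.unique(y)
        # Per-class feature means, variances, and prior probabilities.
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.prior_ = np.array([(y == c).mean() for c in self.classes_])
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        # Log-likelihood of each sample under each class's Gaussian, plus log prior.
        ll = (-0.5 * np.log(2 * np.pi * self.var_[:, None, :])
              - (X[None] - self.mu_[:, None, :])**2
              / (2 * self.var_[:, None, :])).sum(-1)
        ll += np.log(self.prior_)[:, None]
        return self.classes_[np.argmax(ll, axis=0)]
```

In the paper's setup, each recording would contribute one three-dimensional feature vector, and the classifier assigns it to "noisy" or "clean".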

  20. Carbon Nanotube Material Quality Assessment

    NASA Technical Reports Server (NTRS)

    Yowell, Leonard; Arepalli, Sivaram; Sosa, Edward; Nikolaev, Pavel; Gorelik, Olga

    2006-01-01

    The nanomaterial activities at NASA Johnson Space Center focus on carbon nanotube production, characterization and their applications for aerospace systems. Single wall carbon nanotubes are produced by arc and laser methods. Characterization of the nanotube material is performed using the NASA JSC protocol, developed by combining the analytical techniques of SEM, TEM, UV-VIS-NIR absorption, Raman, and TGA. A possible addition of other techniques such as XPS and ICP to the existing protocol will be discussed. Changes in the quality of the material collected in different regions of the arc and laser production chambers are assessed using the original JSC protocol. The observed variations indicate different growth conditions in different regions of the production chambers.

  1. Hand radiograph analysis for fully automatic bone age assessment

    NASA Astrophysics Data System (ADS)

    Chassignet, Philippe; Nitescu, Teodor; Hassan, Max; Stanescu, Ruxandra

    1999-05-01

    This paper describes a method for the fully automatic and reliable segmentation of the bones in a radiograph of a child's hand. The problem consists in identifying the contours of the bones, and the difficulty lies in the large variability of the anatomical structures according to age, hand pose or individual. The model must not force any standard interpretation, hence we use a simple hierarchical geometric model that provides only the information required for the identification of the chunks of contours. The resulting phalangeal and metacarpal segmentation has proved robust over a set of many hundreds of images, and measurements of shapes, sizes, areas, etc. are now readily obtained. The next step consists in extending the model for more accurate measurements and also for the localization of the carpal bones.

  2. Students' Feedback Preferences: How Do Students React to Timely and Automatically Generated Assessment Feedback?

    ERIC Educational Resources Information Center

    Bayerlein, Leopold

    2014-01-01

    This study assesses whether or not undergraduate and postgraduate accounting students at an Australian university differentiate between timely feedback and extremely timely feedback, and whether or not the replacement of manually written formal assessment feedback with automatically generated feedback influences students' perception of…

  3. Towards Quality Assessment in an EFL Programme

    ERIC Educational Resources Information Center

    Ali, Holi Ibrahim Holi; Al Ajmi, Ahmed Ali Saleh

    2013-01-01

    Assessment is central in education and the teaching-learning process. This study attempts to explore the perspectives and views about quality assessment among teachers of English as a Foreign Language (EFL), and to find ways of promoting quality assessment. Quantitative methodology was used to collect data. To answer the study questions, a…

  4. Healthcare quality maturity assessment model based on quality drivers.

    PubMed

    Ramadan, Nadia; Arafeh, Mazen

    2016-04-18

    Purpose - Healthcare providers differ in their readiness and maturity levels regarding quality and the application of quality management systems. The purpose of this paper is to serve as a useful quantitative quality maturity-level assessment tool for healthcare organizations. Design/methodology/approach - The model proposes five quality maturity levels (chaotic, primitive, structured, mature and proficient) based on six quality drivers: top management, people, operations, culture, quality focus and accreditation. Findings - Healthcare managers can apply the model to identify the status quo and quality shortcomings, and to evaluate ongoing progress. Practical implications - The model has been incorporated in an interactive Excel worksheet that visually displays the quality maturity-level risk meter. The tool has been applied successfully to local hospitals. Originality/value - The proposed six quality driver scales appear to measure healthcare provider maturity levels on a single quality meter. PMID:27120510

  5. Towards the Real-Time Evaluation of Collaborative Activities: Integration of an Automatic Rater of Collaboration Quality in the Classroom from the Teacher's Perspective

    ERIC Educational Resources Information Center

    Chounta, Irene-Angelica; Avouris, Nikolaos

    2016-01-01

    This paper presents the integration of a real time evaluation method of collaboration quality in a monitoring application that supports teachers in class orchestration. The method is implemented as an automatic rater of collaboration quality and studied in a real time scenario of use. We argue that automatic and semi-automatic methods which…

  6. Automatic quality improvement reports in the intensive care unit: One step closer toward meaningful use

    PubMed Central

    Dziadzko, Mikhail A; Thongprayoon, Charat; Ahmed, Adil; Tiong, Ing C; Li, Man; Brown, Daniel R; Pickering, Brian W; Herasevich, Vitaly

    2016-01-01

    AIM: To examine the feasibility and validity of electronic generation of quality metrics in the intensive care unit (ICU). METHODS: This minimal-risk observational study was performed at an academic tertiary hospital. The Critical Care Independent Multidisciplinary Program at Mayo Clinic identified and defined 11 key quality metrics. These metrics were automatically calculated using ICU DataMart, a near-real-time copy of all ICU electronic medical record (EMR) data. The automatic report was compared with data from a comprehensive EMR review by a trained investigator. Data were collected for 93 randomly selected patients admitted to the ICU during April 2012 (10% of the admitted adult population). This study was approved by the Mayo Clinic Institutional Review Board. RESULTS: All types of variables needed for metric calculations were found to be available for manual and electronic abstraction, except information on the availability of free beds for patient-specific time frames. There was 100% agreement between electronic and manual data abstraction for ICU admission source, admission service, and discharge disposition. The agreement between electronic and manual abstraction of the times of ICU admission and discharge was 99% and 89%, respectively. The times of hospital admission and discharge were similar for both the electronically and manually abstracted datasets. The specificity of the electronically generated report was 93% and 94% for invasive and non-invasive ventilation use in the ICU, with one false-positive result for each type of ventilation. The specificity for ICU and in-hospital mortality was 100%. Sensitivity was 100% for all metrics. CONCLUSION: Our study demonstrates excellent accuracy of electronically generated key ICU quality metrics. This validates the feasibility of automatic metric generation. PMID:27152259
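The validation arithmetic behind the sensitivity and specificity figures above (electronic report vs. manual abstraction as ground truth) reduces to a 2x2 confusion table. A minimal sketch with hypothetical labels:

```python
def validation_metrics(electronic, manual):
    """Compare an electronically generated binary metric against manual abstraction.

    `electronic` and `manual` are parallel lists of 0/1 labels per patient.
    """
    pairs = list(zip(electronic, manual))
    tp = sum(1 for e, m in pairs if e and m)            # both flagged
    tn = sum(1 for e, m in pairs if not e and not m)    # both clear
    fp = sum(1 for e, m in pairs if e and not m)        # electronic false alarm
    fn = sum(1 for e, m in pairs if not e and m)        # electronic miss
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    agree = (tp + tn) / len(pairs)
    return {"sensitivity": sens, "specificity": spec, "agreement": agree}
```

A single false positive among the non-ventilated patients, as reported in the study, lowers specificity while leaving sensitivity at 100%.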

  7. Using full-reference image quality metrics for automatic image sharpening

    NASA Astrophysics Data System (ADS)

    Krasula, Lukas; Fliegel, Karel; Le Callet, Patrick; Klíma, Miloš

    2014-05-01

    Image sharpening is a post-processing technique employed for the artificial enhancement of perceived sharpness by shortening the transitions between luminance levels or increasing the contrast on the edges. The greatest challenge in this area is to determine the level of perceived sharpness that is optimal for human observers. This task is complex because the enhancement is gained only up to a certain threshold; after reaching it, the quality of the resulting image drops due to the presence of annoying artifacts. Despite the effort dedicated to automatic sharpness estimation, none of the existing metrics is designed to localize this threshold, even though doing so is a very important step towards automatic image sharpening. In this work, the possible use of full-reference image quality metrics for finding the optimal amount of sharpening is proposed and investigated. An intentionally over-sharpened "anchor image" was included in the calculation as an "anti-reference", and the final metric score was computed from the differences between the reference, processed, and anchor versions of the scene. Quality scores obtained from a subjective experiment were used to determine the optimal combination of partial metric values. Five popular fidelity metrics - SSIM, MS-SSIM, IW-SSIM, VIF, and FSIM - were tested. The performance of the proposed approach was then verified in a subjective experiment.
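The anchor-image idea can be illustrated with a toy combination of metric distances: a well-sharpened result stays close to the reference while remaining far from the over-sharpened anchor. The MSE placeholder and the simple difference below are assumptions; the paper fits its combination of SSIM-family metric values to subjective scores:

```python
import numpy as np

def mse(a, b):
    """Placeholder fidelity metric; the paper uses SSIM-family metrics instead."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def anchor_score(reference, processed, anchor, metric=mse):
    """Toy anti-reference combination: reward distance from the over-sharpened
    anchor and penalize distance from the reference. Higher is better."""
    return metric(processed, anchor) - metric(processed, reference)
```

Sweeping the sharpening strength and taking the argmax of such a score is one way to localize the over-sharpening threshold the abstract describes.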

  8. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    NASA Astrophysics Data System (ADS)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
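The first method described above, the univariate box-whisker check, corresponds to Tukey's IQR rule for flagging outlying feature values. A minimal sketch (the 1.5 multiplier is the conventional default, not necessarily the paper's choice):

```python
import numpy as np

def box_whisker_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's box-whisker rule)."""
    v = np.asarray(values, float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (v < lo) | (v > hi)
```

Applied to a per-subject feature (e.g. a segmentation volume or overlap statistic), the flagged subjects are the candidate algorithm failures to inspect.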

  9. Operation logic and functionality of automatic dose rate and image quality control of conventional fluoroscopy

    SciTech Connect

    Lin, Pei-Jan Paul

    2009-05-15

    The new generation of fluoroscopic imaging systems is equipped with spectral shaping filters complemented by sophisticated automatic dose rate and image quality control logic called the "fluoroscopy curve" or "trajectory". Such fluoroscopy curves were first implemented on cardiovascular angiographic imaging systems and are now available on conventional fluoroscopy equipment. This study aims to investigate the control logic operations under the fluoroscopy mode and the acquisition mode (equivalent to legacy spot filming) of a conventional fluoroscopy system of the type typically installed for upper and lower gastrointestinal examinations, interventional endoscopy laboratories, gastrointestinal laboratories, and pain clinics.

  10. Automatic Severity Assessment of Dysarthria using State-Specific Vectors.

    PubMed

    Sriranjani, R; Umesh, S; Reddy, M Ramasubba

    2015-01-01

    In this paper, a novel approach to assessing the severity of dysarthria using the state-specific vector (SSV) of the phone-cluster adaptive training (phone-CAT) acoustic modeling technique is proposed. The dominant component of the SSV represents the actual pronunciations of a speaker. By comparing the dominant component for unimpaired speakers and each dysarthric speaker, a phone confusion matrix is formed. The diagonal elements of the matrix capture the number of correct pronunciations for each dysarthric speaker. As the degree of impairment increases, the number of phones correctly pronounced by the speaker decreases. Thus the trace of the confusion matrix can be used as an objective cue to assess different severity levels of dysarthria based on a threshold rule. Our proposed objective measure correlates with the standard Frenchay dysarthria assessment scores at 74% on the Nemours database, and with intelligibility scores at 82% on the Universal Access dysarthric speech database. PMID:25996705
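The trace-based cue can be sketched directly: normalize the trace of the phone confusion matrix by the total count, then apply a threshold rule. The cutoff values and level names below are illustrative, not the ones calibrated in the paper:

```python
import numpy as np

def severity_cue(confusion):
    """Fraction of phones pronounced correctly: trace over total counts."""
    m = np.asarray(confusion, float)
    return float(np.trace(m) / m.sum())

def severity_level(cue, thresholds=(0.8, 0.5)):
    """Map the cue to coarse severity levels via a threshold rule (hypothetical cutoffs)."""
    if cue >= thresholds[0]:
        return "mild"
    if cue >= thresholds[1]:
        return "moderate"
    return "severe"
```

A diagonal-dominant confusion matrix (most phones pronounced correctly) yields a cue near 1 and a mild rating; heavy off-diagonal mass drives the cue down.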

  11. Assessing Mathematics Automatically Using Computer Algebra and the Internet

    ERIC Educational Resources Information Center

    Sangwin, Chris

    2004-01-01

    This paper reports some recent developments in mathematical computer-aided assessment which employs computer algebra to evaluate students' work using the Internet. Technical and educational issues raised by this use of computer algebra are addressed. Working examples from core calculus and algebra which have been used with first year university…

  12. Effects of Multisensory Environments on Stereotyped Behaviours Assessed as Maintained by Automatic Reinforcement

    ERIC Educational Resources Information Center

    Hill, Lindsay; Trusler, Karen; Furniss, Frederick; Lancioni, Giulio

    2012-01-01

    Background: The aim of the present study was to evaluate the effects of the sensory equipment provided in a multi-sensory environment (MSE) and the level of social contact provided on levels of stereotyped behaviours assessed as being maintained by automatic reinforcement. Method: Stereotyped and engaged behaviours of two young people with severe…

  13. Automatic Assessment of Complex Task Performance in Games and Simulations. CRESST Report 775

    ERIC Educational Resources Information Center

    Iseli, Markus R.; Koenig, Alan D.; Lee, John J.; Wainess, Richard

    2010-01-01

    Assessment of complex task performance is crucial to evaluating personnel in critical job functions such as Navy damage control operations aboard ships. Games and simulations can be instrumental in this process, as they can present a broad range of complex scenarios without involving harm to people or property. However, "automatic" performance…

  14. Assessing the Quality of Teachers' Teaching Practices

    ERIC Educational Resources Information Center

    Chen, Weiyun; Mason, Stephen; Staniszewski, Christina; Upton, Ashley; Valley, Megan

    2012-01-01

    This study assessed the extent to which nine elementary physical education teachers implemented quality teaching practices. Thirty physical education lessons taught by the nine teachers to their students in grades K-5 were videotaped. Four investigators coded the taped lessons using the Assessing Quality Teaching Rubric (AQTR) designed and…

  15. Assessing Quality in Home Visiting Programs

    ERIC Educational Resources Information Center

    Korfmacher, Jon; Laszewski, Audrey; Sparr, Mariel; Hammel, Jennifer

    2013-01-01

    Defining quality and designing a quality assessment measure for home visitation programs is a complex and multifaceted undertaking. This article summarizes the process used to create the Home Visitation Program Quality Rating Tool (HVPQRT) and identifies next steps for its development. The HVPQRT measures both structural and dynamic features of…

  16. Automatic limb identification and sleeping parameters assessment for pressure ulcer prevention.

    PubMed

    Baran Pouyan, Maziyar; Birjandtalab, Javad; Nourani, Mehrdad; Matthew Pompeo, M D

    2016-08-01

    Pressure ulcers (PUs) are common among vulnerable patients such as the elderly, the bedridden and diabetics. PUs are very painful for patients and costly for hospitals and nursing homes. Assessment of sleeping parameters on at-risk limbs is critical for ulcer prevention. An effective assessment depends on automatic identification and tracking of at-risk limbs. An accurate limb identification can be used to analyze the pressure distribution and assess risk for each limb. In this paper, we propose a graph-based clustering approach to extract the body limbs from pressure data collected by a commercial pressure map system. A robust signature-based technique is employed to automatically label each limb. Finally, an assessment technique is applied to evaluate the stress experienced by each limb over time. The experimental results indicate high performance, with more than 94% average accuracy for the proposed approach. PMID:27268736

  17. SERI QC Solar Data Quality Assessment Software

    1994-12-31

    SERI QC is a mathematical software package that assesses the quality of solar radiation data. The SERI QC software is a function written in the C programming language; it is not a standalone software application. The user must write the calling application that requires quality assessment of solar data. The C function returns data quality flags to the calling program. A companion program, QCFIT, is a standalone Windows application that provides support files for the SERI QC function (data quality boundaries). The QCFIT software can also be used as an analytical tool for visualizing solar data quality independent of the SERI QC function.

  18. A convolutional neural network approach for objective video quality assessment.

    PubMed

    Le Callet, Patrick; Viard-Gaudin, Christian; Barba, Dominique

    2006-09-01

    This paper describes an application of neural networks to an objective measurement method designed to automatically assess the perceived quality of digital videos. This challenging issue aims to emulate human judgment and to replace very complex and time-consuming subjective quality assessment. Several metrics have been proposed in the literature to tackle this issue. They are based on a general framework that combines different stages, each of them addressing complex problems. The ambition of this paper is not to present a globally perfect quality metric but rather to focus on an original way of using neural networks in such a framework in the context of a reduced-reference (RR) quality metric. In particular, we point out the interest of such a tool for combining features and pooling them in order to compute quality scores. The proposed approach solves some problems inherent to objective metrics that should predict the subjective quality scores obtained using the single stimulus continuous quality evaluation (SSCQE) method. The latter has been adopted by the Video Quality Experts Group (VQEG) in its recently finalized reduced-reference and no-reference (RRNR-TV) test plan. The originality of this approach, compared to previous attempts to use neural networks for quality assessment, relies on the use of a convolutional neural network (CNN) that allows continuous-time scoring of the video. Objective features are extracted on a frame-by-frame basis on both the reference and the distorted sequences; they are derived from a perceptual-based representation and integrated along the temporal axis using a time-delay neural network (TDNN). Experiments conducted on different MPEG-2 videos, with bit rates ranging from 2 to 6 Mb/s, show the effectiveness of the proposed approach in obtaining a plausible model of temporal pooling from the human visual system (HVS) point of view. More specifically, a linear correlation criterion between objective and subjective scoring of up to 0.92 has been obtained on
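The linear correlation figure reported here is the Pearson coefficient between objective and subjective scores, which can be computed directly:

```python
import numpy as np

def pearson_corr(objective, subjective):
    """Pearson linear correlation between two score sequences."""
    o = np.asarray(objective, float)
    s = np.asarray(subjective, float)
    o = o - o.mean()
    s = s - s.mean()
    return float((o * s).sum() / np.sqrt((o**2).sum() * (s**2).sum()))
```

Values near 1 indicate the objective metric tracks subjective (e.g. SSCQE) scores almost linearly, as in the 0.92 result above.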

  19. Automatic Assessment and Reduction of Noise using Edge Pattern Analysis in Non-Linear Image Enhancement

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.; Hines, Glenn D.

    2004-01-01

    Noise is the primary visibility limit in the process of non-linear image enhancement, and it is no longer statistically stable additive noise in the post-enhancement image. Therefore, novel approaches are needed both to assess and to reduce spatially variable noise at this stage in overall image processing. Here we examine the use of edge pattern analysis both for automatic assessment of spatially variable noise and as a foundation for new noise reduction methods.

  20. Towards Automatic Diabetes Case Detection and ABCS Protocol Compliance Assessment

    PubMed Central

    Mishra, Ninad K.; Son, Roderick Y.; Arnzen, James J.

    2012-01-01

    Objective According to the American Diabetes Association, the implementation of the standards of care for diabetes has been suboptimal in most clinical settings. Diabetes is a disease that had a total estimated cost of $174 billion in 2007 for an estimated diabetes-affected population of 17.5 million in the United States. With the advent of electronic medical records (EMR), tools to analyze data residing in the EMR for healthcare surveillance can help reduce the burdens experienced today. This study was primarily designed to evaluate the efficacy of employing clinical natural language processing to analyze discharge summaries for evidence indicating a presence of diabetes, as well as to assess diabetes protocol compliance and high risk factors. Methods Three sets of algorithms were developed to analyze discharge summaries for: (1) identification of diabetes, (2) protocol compliance, and (3) identification of high risk factors. The algorithms utilize a common natural language processing framework that extracts relevant discourse evidence from the medical text. Evidence utilized in one or more of the algorithms include assertion of the disease and associated findings in medical text, as well as numerical clinical measurements and prescribed medications. Results The diabetes classifier was successful at classifying reports for the presence and absence of diabetes. Evaluated against 444 discharge summaries, the classifier’s performance included macro and micro F-scores of 0.9698 and 0.9865, respectively. Furthermore, the protocol compliance and high risk factor classifiers showed promising results, with most F-measures exceeding 0.9. Conclusions The presented approach accurately identified diabetes in medical discharge summaries and showed promise with regards to assessment of protocol compliance and high risk factors. Utilizing free-text analytic techniques on medical text can complement clinical-public health decision support by identifying cases and high risk

  1. Rendered virtual view image objective quality assessment

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Li, Xiangchun; Zhang, Yi; Peng, Kai

    2013-08-01

    Research on rendered virtual view image (RVVI) objective quality assessment is important for integrated imaging systems and image quality assessment (IQA). Traditional IQA algorithms cannot be applied directly on the system receiver side due to inter-view displacement and the absence of an original reference. This study proposes a block-based neighbor-reference (NbR) IQA framework for RVVI quality assessment, in which the neighbor views used for rendering are employed for quality assessment. A symphonious factor handling noise and inter-view displacement is defined and applied to evaluate the contribution of the obtained quality index in each block pair. A three-stage experiment scheme is also presented to test the proposed framework and evaluate its homogeneity performance when compared to full-reference IQA. Experimental results show the proposed framework is useful for RVVI objective quality assessment at the system receiver side and for benchmarking different rendering algorithms.

  2. Automatic microfiber filtration (AMF) of surface water: impact on water quality and biofouling evolution.

    PubMed

    Lakretz, Anat; Elifantz, Hila; Kviatkovski, Igor; Eshel, Gonen; Mamane, Hadas

    2014-01-01

    In the current study we examined the impact of thread filtration using an automatic microfiber filter on Lake Kinneret water quality, and evaluated it as a new application for controlling biofouling over time. We found that automatic microfiber filtration (AMF) reduced total iron and aluminum in the water by over 80%. Particle analysis (>2 μm) revealed a total particle removal efficiency of ≈ 90%, with AMF removal efficiency increasing with increasing particle size and decreasing particle circularity. Regarding microbiological parameters, AMF did not affect bacterial counts or composition in the water. However, it did control biofilm evolution and affected its microbial community composition. AMF controlled biofilm over time by maintaining premature biofilms of less than 10 μm mean thickness, compared to biofilms of up to 60 μm mean thickness in unfiltered water. In addition, biofilms developing in AMF-filtered water contained relatively low levels of extracellular polymeric substances. While biofilms of unfiltered water were dominated by Proteobacteria (≤ 50%) followed by Bacteroidetes (20-30%) during all 4 weeks of the experiment, biofilms of AMF-filtered water were dominated by Proteobacteria (≤ 90%), especially Alphaproteobacteria, after 2 weeks, and by Chloroflexi (≈ 60%) after 4 weeks. The decrease in Bacteroidetes might originate from the removal of transparent exopolymer particles, which are occasionally colonized by Bacteroidetes. The increase in Alphaproteobacteria and Chloroflexi is explained by these robust groups' ability to adjust to different environments.

  3. Statistical quality assessment of a fingerprint

    NASA Astrophysics Data System (ADS)

    Hwang, Kyungtae

    2004-08-01

    The quality of a fingerprint is essential to the performance of an AFIS (Automatic Fingerprint Identification System). Such quality may be characterized by the clarity and regularity of ridge-valley structures.1,2 One may calculate the thickness of ridges and valleys to measure clarity and regularity. However, calculating thickness is not feasible in a poor-quality image, especially in severely damaged images that contain broken ridges (or valleys). In order to overcome this difficulty, the proposed approach employs statistical properties in a local block, namely the mean and spread of the thickness of both ridge and valley. The mean value is used to determine whether a fingerprint is wet or dry: for example, black pixels are dominant if a fingerprint is wet, so the average thickness of the ridges is larger than that of the valleys, and vice versa for a dry fingerprint. In addition, the standard deviation is used to determine the severity of damage. In this study, quality is divided into three categories based on the two statistical properties mentioned above: wet, good, and dry. The number of low-quality blocks is used to measure the global quality of a fingerprint. In addition, the distribution of poor blocks is measured using Euclidean distances between groups of poor blocks; with this scheme, locally condensed poor blocks decrease the overall quality of an image. Experimental results on fingerprint images captured by optical devices as well as by a rolling method show that the wet and dry parts of the image were successfully captured. Enhancing an image by employing morphology techniques that modify the detected poor-quality blocks is illustrated in section 3. However, more work needs to be done on designing a scheme that incorporates the number of poor blocks and their distribution into a global quality measure.
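The block-wise wet/good/dry scheme can be sketched on a binarized image, using the dominance of black (ridge) pixels as a stand-in for the ridge/valley thickness statistics the paper actually computes. The thresholds and block size below are assumptions:

```python
import numpy as np

def classify_block(block, wet_thresh=0.6, dry_thresh=0.4):
    """Label a binarized block wet/good/dry by the fraction of black (0) pixels.

    Thresholds are illustrative; the paper uses ridge/valley thickness statistics.
    """
    black_ratio = float((np.asarray(block) == 0).mean())  # ridges assumed black
    if black_ratio > wet_thresh:
        return "wet"
    if black_ratio < dry_thresh:
        return "dry"
    return "good"

def global_quality(image, bs=16):
    """Count poor (wet or dry) blocks as a simple global quality figure."""
    h, w = image.shape
    labels = [classify_block(image[i:i+bs, j:j+bs])
              for i in range(0, h - bs + 1, bs)
              for j in range(0, w - bs + 1, bs)]
    poor = sum(lab != "good" for lab in labels)
    return poor, len(labels)
```

The paper additionally weights by how clustered the poor blocks are; that distribution term is omitted here.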

  4. Continuous assessment of perceptual image quality

    NASA Astrophysics Data System (ADS)

    Hamberg, Roelof; de Ridder, Huib

    1995-12-01

    The study addresses whether subjects are able to assess the perceived quality of an image sequence continuously. To this end, a new method for assessing time-varying perceptual image quality is presented by which subjects continuously indicate the perceived strength of image quality by moving a slider along a graphical scale. The slider's position on this scale is sampled every second. In this way, temporal variations in quality can be monitored quantitatively, and a means is provided by which differences between, for example, alternative transmission systems can be analyzed in an informative way. The usability of this method is illustrated by an experiment in which, for a period of 815 s, subjects assessed the quality of still pictures comprising time-varying degrees of sharpness. Copyright (c) 1995 Optical Society of America

  5. Tools to assess tissue quality.

    PubMed

    Neumeister, Veronique M

    2014-03-01

    Biospecimen science has recognized the importance of tissue quality for accurate molecular and biomarker analysis and efforts are made to standardize tissue procurement, processing and storage conditions of tissue samples. At the same time the field has emphasized the lack of standardization of processes between different laboratories, the variability inherent in the analytical phase and the lack of control over the pre-analytical phase of tissue processing. The problem extends back into tissue samples in biorepositories, which are often decades old and where documentation about tissue processing might not be available. This review highlights pre-analytical variations in tissue handling, processing, fixation and storage and emphasizes the effects of these variables on nucleic acids and proteins in harvested tissue. Finally current tools for quality control regarding molecular or biomarker analysis are summarized and discussed.

  6. Mobile sailing robot for automatic estimation of fish density and monitoring water quality

    PubMed Central

    2013-01-01

Introduction The paper presents the methodology and the algorithm developed to analyze sonar images focused on fish detection in small water bodies and measurement of their parameters: volume, depth and GPS location. The final results are stored in a table and can be exported to any numerical environment for further analysis. Material and method The measurement method for estimating the number of fish using the automatic robot is based on a sequential count of fish occurrences along a set trajectory. The data analysis from the sonar concerned automatic recognition of fish using image analysis and processing methods. Results An image analysis algorithm, a mobile robot with control in the 2.4 GHz band and fully encrypted communication with the data archiving station were developed as part of this study. For the three model fish ponds where fish catches were verified (548, 171 and 226 individuals), the measurement error of the described method did not exceed 8%. Summary The robot, together with the developed software, is fully automated, can operate in a variety of harsh weather and environmental conditions, and can be controlled remotely over the Internet. The system determines the spatial location of fish (GPS coordinates and depth). The purpose of the robot is non-invasive measurement of the number of fish in water reservoirs and of the quality of drinking water, especially where local sources of pollution could significantly affect the quality of water collected for treatment and where access is difficult. Used systematically and equipped with appropriate sensors, the robot can form part of an early warning system against pollution of water used by humans (drinking water, natural swimming pools) that could endanger their health. PMID:23815984

  7. Assessing quality in Earth Science Education

    NASA Astrophysics Data System (ADS)

    Rollinson, Hugh

    1999-05-01

    Quality is an elusive concept — hard to define, but you recognise it when you come across it. This paper reviews the meaning of quality as applied in Higher Education and shows that there are, of necessity, a number of workable definitions of quality in Higher Education. The assessment of quality in Earth Science Higher Education in England during 1994-1995 is described. A number of general features of quality in Earth Sciences Education are drawn from this case study and the future direction of quality assurance is mapped. Three principles drawn from the definitions of quality and from the English teaching quality assessment exercise are applied to Earth Science Education in Africa. It is argued that different definitions of quality will apply in different societal contexts in Africa and that these may be used to shape the relevance of Geoscience Education. Increasing mobility of labour means that comparability of academic standards between African countries within a region is desirable and should be worked for. Finally, research in the UK shows that teaching quality is not necessarily dependent upon the size or research potential of a department, indicating that Africa can deliver high quality Earth Science Education.

  8. Quality Assessment in the Blog Space

    ERIC Educational Resources Information Center

    Schaal, Markus; Fidan, Guven; Muller, Roland M.; Dagli, Orhan

    2010-01-01

    Purpose: The purpose of this paper is the presentation of a new method for blog quality assessment. The method uses the temporal sequence of link creation events between blogs as an implicit source for the collective tacit knowledge of blog authors about blog quality. Design/methodology/approach: The blog data are processed by the novel method for…

  9. Quality Assessment for a University Curriculum.

    ERIC Educational Resources Information Center

    Hjalmered, Jan-Olof; Lumsden, Kenth

    1994-01-01

    In 1992, a national quality assessment report covering courses in all the Swedish schools of mechanical engineering was presented. This article comments on the general ideas and specific proposals presented, and offers an analysis of the consequences. Presents overall considerations regarding quality issues, the philosophy behind the new…

  10. Automatic humidification system to support the assessment of food drying processes

    NASA Astrophysics Data System (ADS)

    Ortiz Hernández, B. D.; Carreño Olejua, A. R.; Castellanos Olarte, J. M.

    2016-07-01

This work shows the main features of an automatic humidification system that provides drying air matching the environmental conditions of different climate zones. This conditioned air is then used to assess the drying process of different agro-industrial products at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University of Bucaramanga, Colombia. The automatic system allows creating and improving control strategies to supply drying air under specified conditions of temperature and humidity. The development of automatic routines to control and acquire real-time data was made possible by the use of robust control systems and suitable instrumentation. The signals are read and directed to a controller memory, where they are scaled and transferred to a memory unit. Using its IP address, the data can be accessed remotely for supervision tasks. One important characteristic of this automatic system is a Dynamic Data Exchange (DDE) server that allows direct communication between the control unit and the computer used to build experimental curves.

  11. Biosignal Analysis to Assess Mental Stress in Automatic Driving of Trucks: Palmar Perspiration and Masseter Electromyography

    PubMed Central

    Zheng, Rencheng; Yamabe, Shigeyuki; Nakano, Kimihiko; Suda, Yoshihiro

    2015-01-01

With the large-scale development of intelligent vehicles, insight into human-machine interaction has become a critical topic. Biosignal analysis can provide a deeper understanding of driver behaviors and thereby inform the practical use of automatic driving technology. This study therefore concentrates on biosignal analysis to quantitatively evaluate the mental stress of drivers during automatic driving of trucks, with vehicles set at close gap distances to reduce air resistance and save energy. By application of two wearable sensor systems, continuous measurement was realized for palmar perspiration and masseter electromyography, and a biosignal processing method was proposed to assess mental stress levels. In a driving simulator experiment, ten participants completed automatic driving with 4, 8, and 12 m gap distances from the preceding vehicle, and manual driving with about a 25 m gap distance as a reference. It was found that mental stress significantly increased as the gap distance decreased, and an abrupt increase in the mental stress of drivers was also observed accompanying a sudden change of the gap distance during automatic driving, which corresponded to significantly higher ride discomfort according to subjective reports. PMID:25738768

  12. An Approach towards Software Quality Assessment

    NASA Astrophysics Data System (ADS)

    Srivastava, Praveen Ranjan; Kumar, Krishan

The software engineer needs to determine the real purpose of the software, keeping one prime point in mind: the customer's needs come first, and they include particular levels of quality, not just functionality. Thus, the software engineer has a responsibility to elicit quality requirements that may not even be explicit at the outset and to discuss their importance and the difficulty of attaining them. All processes associated with software quality (e.g. building, checking, improving quality) will be designed with these in mind and carry costs based on the design. Therefore, it is important to have in mind some of the possible attributes of quality. We start by identifying the metrics and measurement approaches that can be used to assess the quality of a software product. Most of them can only be measured subjectively, since no solid statistics exist for them. In this paper we propose an approach to measuring software quality statistically.

  13. The Quality Assessment Index (QAI) for measuring nursing home quality.

    PubMed Central

    Gustafson, D H; Sainfort, F C; Van Konigsveld, R; Zimmerman, D R

    1990-01-01

    There have been few detailed evaluations of measures of quality of care in nursing homes. This is unfortunate because it has meant that much research on factors affecting nursing home quality has used measures of questionable reliability and validity. Moreover, some measures currently in use have been developed using methodologies not based on solid conceptual grounds, offering little reason to expect them to have much internal or external validity. In this article we suggest characteristics that should be present in measures of nursing home quality, propose a methodology for the development of such measures, propose a specific nursing home quality measure (the Quality Assessment Index or QAI), and report the results of several tests of its validity and reliability. PMID:2184147

  14. ANSS Backbone Station Quality Assessment

    NASA Astrophysics Data System (ADS)

    Leeds, A.; McNamara, D.; Benz, H.; Gee, L.

    2006-12-01

In this study we assess the ambient noise levels of the broadband seismic stations within the United States Geological Survey's (USGS) Advanced National Seismic System (ANSS) backbone network. The backbone consists of stations operated by the USGS as well as several regional network stations operated by universities. We also assess the improved detection capability of the network due to the installation of 13 additional backbone stations and the upgrade of 26 existing stations funded by the Earthscope initiative. This assessment makes use of probability density functions (PDF) of power spectral densities (PSD) (after McNamara and Buland, 2004) computed by a continuous noise monitoring system developed by the USGS-ANSS and the Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC). We compute the median and mode of the PDF distribution and rank the stations relative to the Peterson Low Noise Model (LNM) (Peterson, 1993) for 11 different period bands. The power of the method lies in the fact that there is no need to screen the data for system transients, earthquakes or general data artifacts, since these map into a background probability level. Previous studies have shown that most regional stations, instrumented with short-period or extended short-period instruments, have higher noise levels in all period bands, while stations in the US network have lower noise levels at short periods (0.0625-8.0 seconds; 16-0.125 Hz). The overall network is evaluated with respect to accomplishing the design goals set for the USArray/ANSS backbone project, which were intended to increase broadband performance for the national monitoring network.
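The PDF-of-PSD bookkeeping described above can be sketched in a few lines. This is a simplified illustration on synthetic white-noise traces, not the USGS/IRIS implementation; the segment length, dB bin spacing and the single period band shown are assumptions for the example.

```python
import numpy as np
from scipy.signal import welch

def psd_pdf(traces, fs, power_bins):
    """Accumulate a probability density function of power spectral
    densities (after McNamara & Buland, 2004): each trace contributes
    one PSD estimate, so transients and earthquakes simply map into
    low-probability regions of the histogram instead of needing to be
    screened out."""
    psds = []
    for tr in traces:
        f, p = welch(tr, fs=fs, nperseg=1024)
        psds.append(10 * np.log10(p[1:]))   # dB, drop the DC bin
    psds = np.array(psds)
    # Histogram the dB power values observed at each frequency bin.
    pdf = np.array([np.histogram(psds[:, i], bins=power_bins, density=True)[0]
                    for i in range(psds.shape[1])])
    return f[1:], psds, pdf

rng = np.random.default_rng(0)
fs = 40.0                                   # Hz, assumed sample rate
traces = [rng.normal(0, 1, 4096) for _ in range(50)]
power_bins = np.arange(-60, 20, 1.0)        # dB bins (assumed range)
freqs, psds, pdf = psd_pdf(traces, fs, power_bins)

# Station ranking for one period band: the median PSD level in the
# 0.0625-8 s band would be compared against the low-noise-model level.
band = (freqs >= 1 / 8.0) & (freqs <= 1 / 0.0625)
median_level = np.median(psds[:, band])
```

In a real deployment the same histograms are accumulated continuously over months of data, one per station and channel.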

  15. Combined Use of Automatic Tube Voltage Selection and Current Modulation with Iterative Reconstruction for CT Evaluation of Small Hypervascular Hepatocellular Carcinomas: Effect on Lesion Conspicuity and Image Quality

    PubMed Central

    Lv, Peijie; Liu, Jie; Zhang, Rui; Jia, Yan

    2015-01-01

    Objective To assess the lesion conspicuity and image quality in CT evaluation of small (≤ 3 cm) hepatocellular carcinomas (HCCs) using automatic tube voltage selection (ATVS) and automatic tube current modulation (ATCM) with or without iterative reconstruction. Materials and Methods One hundred and five patients with 123 HCC lesions were included. Fifty-seven patients were scanned using both ATVS and ATCM and images were reconstructed using either filtered back-projection (FBP) (group A1) or sinogram-affirmed iterative reconstruction (SAFIRE) (group A2). Forty-eight patients were imaged using only ATCM, with a fixed tube potential of 120 kVp and FBP reconstruction (group B). Quantitative parameters (image noise in Hounsfield unit and contrast-to-noise ratio of the aorta, the liver, and the hepatic tumors) and qualitative visual parameters (image noise, overall image quality, and lesion conspicuity as graded on a 5-point scale) were compared among the groups. Results Group A2 scanned with the automatically chosen 80 kVp and 100 kVp tube voltages ranked the best in lesion conspicuity and subjective and objective image quality (p values ranging from < 0.001 to 0.004) among the three groups, except for overall image quality between group A2 and group B (p = 0.022). Group A1 showed higher image noise (p = 0.005) but similar lesion conspicuity and overall image quality as compared with group B. The radiation dose in group A was 19% lower than that in group B (p = 0.022). Conclusion CT scanning with combined use of ATVS and ATCM and image reconstruction with SAFIRE algorithm provides higher lesion conspicuity and better image quality for evaluating small hepatic HCCs with radiation dose reduction. PMID:25995682

  16. Water quality assessment in Ecuador

    SciTech Connect

    Chudy, J.P.; Arniella, E.; Gil, E.

    1993-02-01

    The El Tor cholera pandemic arrived in Ecuador in March 1991, and through the course of the year caused 46,320 cases, of which 692 resulted in death. Most of the cases were confined to cities along Ecuador's coast. The Water and Sanitation for Health Project (WASH), which was asked to participate in the review of this request, suggested that a more comprehensive approach should be taken to cholera control and prevention. The approach was accepted, and a multidisciplinary team consisting of a sanitary engineer, a hygiene education specialist, and an institutional specialist was scheduled to carry out the assessment in late 1992 following the national elections.

  17. Automatic alignment of pre- and post-interventional liver CT images for assessment of radiofrequency ablation

    NASA Astrophysics Data System (ADS)

    Rieder, Christian; Wirtz, Stefan; Strehlow, Jan; Zidowitz, Stephan; Bruners, Philipp; Isfort, Peter; Mahnken, Andreas H.; Peitgen, Heinz-Otto

    2012-02-01

Image-guided radiofrequency ablation (RFA) is becoming a standard procedure for minimally invasive tumor treatment in clinical practice. To verify the treatment success of the therapy, reliable post-interventional assessment of the ablation zone (coagulation) is essential. Typically, pre- and post-interventional CT images have to be aligned to compare the shape, size, and position of the tumor and coagulation zone. In this work, we present an automatic workflow for masking liver tissue, enabling a rigid registration algorithm to perform at least as accurately as experienced medical experts. To minimize the effect of global liver deformations, the registration is computed in a local region of interest around the pre-interventional lesion and the post-interventional coagulation necrosis. A registration mask excluding lesions and neighboring organs is calculated to prevent the registration algorithm from matching both lesion shapes instead of the surrounding liver anatomy. As an initial registration step, the centers of gravity of both lesions are aligned automatically. The subsequent rigid registration method is based on the Local Cross Correlation (LCC) similarity measure and Newton-type optimization. To assess the accuracy of our method, 41 RFA cases are registered and compared with the manually aligned cases from four medical experts. Furthermore, the registration results are compared with ground truth transformations based on averaged anatomical landmark pairs. In the evaluation, we show that our method allows automatic alignment of the data sets with accuracy equal to that of medical experts, but with significantly less time consumption and variability.
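The Local Cross Correlation similarity measure driving such a registration can be illustrated with a minimal sketch. The exhaustive integer-shift search below merely stands in for the paper's Newton-type optimizer, and the images are synthetic; only the normalized-correlation formula itself is the technique named above.

```python
import numpy as np

def lcc(a, b):
    """Normalized cross correlation of two equally sized image
    patches: covariance divided by the standard deviations, so
    intensity offsets and scaling between scans do not matter."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# Synthetic "liver" image and a translated copy standing in for the
# pre- and post-interventional regions of interest.
rng = np.random.default_rng(1)
fixed = rng.normal(size=(64, 64))
moving = np.roll(fixed, shift=(3, -2), axis=(0, 1))

# Search over integer shifts: the best LCC recovers the applied offset.
best = max(((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
           key=lambda s: lcc(fixed, np.roll(moving, shift=(-s[0], -s[1]),
                                            axis=(0, 1))))
```

A gradient-based (e.g. Newton-type) optimizer replaces the grid search when sub-pixel accuracy and speed are needed.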

  18. SU-D-BRF-03: Improvement of TomoTherapy Megavoltage Topogram Image Quality for Automatic Registration During Patient Localization

    SciTech Connect

    Scholey, J; White, B; Qi, S; Low, D

    2014-06-01

Purpose: To improve the quality of mega-voltage orthogonal scout images (MV topograms) as a fast and low-dose alternative technique for patient localization on the TomoTherapy HiART system. Methods: Digitally reconstructed radiographs (DRR) of anthropomorphic head and pelvis phantoms were synthesized from kVCT under TomoTherapy geometry (kV-DRR). Lateral (LAT) and anterior-posterior (AP) aligned topograms were acquired with couch speeds of 1 cm/s, 2 cm/s, and 3 cm/s. The phantoms were rigidly translated in all spatial directions with known offsets in increments of 5 mm, 10 mm, and 15 mm to simulate daily positioning errors. The contrast of the MV topograms was automatically adjusted based on the image intensity characteristics. A low-pass fast Fourier transform filter removed high-frequency noise, and a Wiener filter reduced stochastic noise caused by radiation scattered to the detector array. An intensity-based image registration algorithm was used to register the MV topograms to the corresponding kV-DRR by minimizing the mean square error between corresponding pixel intensities. The registration accuracy was assessed by comparing the normalized cross correlation coefficients (NCC) between the registered topograms and the kV-DRR. The applied phantom offsets were determined by registering the MV topograms with the kV-DRR and recovering the spatial translation of the MV topograms. Results: The automatic registration technique provided millimeter accuracy and was robust for the deformed MV topograms at all three tested couch speeds. The lowest average NCC for all AP and LAT MV topograms was 0.96 for the head phantom and 0.93 for the pelvis phantom. The offsets were recovered to within 1.6 mm and 6.5 mm for the processed and the original MV topograms, respectively. Conclusion: Automatic registration of the processed MV topograms to a corresponding kV-DRR recovered simulated daily positioning errors to within the order of a millimeter.
These results suggest the clinical
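The processing-then-registration chain (noise filtering followed by mean-square-error minimization) can be sketched in one dimension. The profile data, noise level and search range are invented for the example; only the Wiener filtering and the MSE-minimizing translation search mirror the abstract.

```python
import numpy as np
from scipy.signal import wiener

# Reference "kV-DRR" profile and a noisy, shifted "MV topogram".
rng = np.random.default_rng(2)
drr = np.convolve(rng.normal(size=300), np.ones(15) / 15, mode="same")
true_shift = 7                      # pixels, the simulated couch offset
topo = np.roll(drr, true_shift) + rng.normal(scale=0.05, size=300)

# Wiener filtering suppresses the stochastic detector noise before
# registration, as in the processed-topogram pipeline.
topo_f = wiener(topo, mysize=5)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Intensity-based registration: pick the translation minimizing the
# mean square error between corresponding pixel intensities.
recovered = min(range(-15, 16), key=lambda s: mse(np.roll(topo_f, -s), drr))
```

The same search in 2-D over both axes recovers the simulated couch offsets of the phantom study.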

  19. Accuracy assessment of building point clouds automatically generated from iphone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud to a terrestrial laser scanning (TLS) point cloud of the same object to discuss the accuracy, advantages and limitations of iPhone-generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are calculated as (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion, detail enhancement, and quick or real-time change detection. However, further insights should be obtained first on the circumstances that are needed to guarantee a successful point cloud generation from smartphone images.
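The two headline numbers of such an evaluation (outlier fraction and mean point-to-point distance to the TLS cloud) reduce to a nearest-neighbour query. The clouds below are synthetic stand-ins, and the 1 m outlier threshold is an assumption for the example, not a value from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(eval_pts, ref_pts, outlier_dist):
    """Point-to-point accuracy of an evaluated cloud (e.g. iPhone)
    against a reference cloud (e.g. TLS): per-point nearest-neighbour
    distance, the mean distance, and the outlier fraction."""
    d, _ = cKDTree(ref_pts).query(eval_pts)
    return d.mean(), (d > outlier_dist).mean()

# Synthetic stand-ins: a planar facade patch as the "TLS" cloud and a
# noisy copy with a few gross outliers as the "iPhone" cloud.
rng = np.random.default_rng(3)
tls = np.c_[rng.uniform(0, 10, (5000, 2)), np.zeros(5000)]
iphone = tls + rng.normal(scale=0.05, size=tls.shape)
iphone[:50, 2] += 5.0               # 1% gross outliers

mean_d, outlier_frac = cloud_to_cloud(iphone, tls, outlier_dist=1.0)
```

On real data the two clouds must first be co-registered; local roughness statistics can be computed the same way using neighbourhood queries on each cloud separately.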

  20. No-reference quality assessment based on visual perception

    NASA Astrophysics Data System (ADS)

    Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao

    2014-11-01

The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, two main features of the HVS: a typical HVS captures scenes by sparse coding and uses experienced knowledge to perceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of an image is computed with this model; then, the mapping between sparse codes and subjective quality scores is trained with the regression technique of least squares support vector machine (LS-SVM), yielding a regressor that can predict image quality; finally, the visual quality metric of an image is predicted with the trained regressor.
We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the distortion types present in the database are: 227 JPEG2000 images, 233
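The regression stage can be sketched directly: least squares SVM training reduces to solving a single linear system (Suykens' formulation), which fits in a few lines of NumPy. The "sparse code" features and quality scores below are synthetic placeholders for the paper's LIVE-database data, and the RBF kernel width and regularization constant are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, gamma=1.0, C=10.0):
    """Least squares SVM regression: unlike the classical SVM QP,
    training solves one (n+1)x(n+1) linear system for the bias b and
    the dual coefficients alpha."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha + b

# Synthetic stand-in: feature vectors ("sparse codes") and subjective
# scores generated from a smooth hidden function of the features.
rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)
predict = lssvm_fit(X[:150], y[:150])
pred = predict(X[150:])
corr = float(np.corrcoef(pred, y[150:])[0, 1])
```

The trained regressor maps feature vectors of unseen images to predicted quality scores, exactly the role the LS-SVM plays in the pipeline above.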

  1. Assessing product image quality for online shopping

    NASA Astrophysics Data System (ADS)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

Assessing product-image quality is important in the context of online shopping. A high-quality image that conveys more information about a product can boost the buyer's confidence and attract more attention. However, the notion of image quality for product images is not the same as in other domains. The perceived quality of product images depends not only on various photographic quality features but also on high-level features such as clarity of the foreground or quality of the background. In this paper, we define a notion of product-image quality based on various such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair and poor quality based on the guided perceptual notions from the judges. We also conduct experiments with regression using average crowd-sourced human judgments as the target. We compute a pseudo-regression score as the expected average of the predicted classes, and also compute a score from the regression technique. We design many experiments with various sampling and voting schemes on the crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (greater than or equal to 70%) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with the average votes from the crowd-sourced human judgments.
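The pseudo-regression score (expected value of the predicted classes) and its rank correlation with crowd votes can be illustrated as follows; the class probabilities and vote averages are invented for the example.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical classifier outputs: P(poor), P(fair), P(good) per image.
probs = np.array([
    [0.05, 0.15, 0.80],
    [0.70, 0.25, 0.05],
    [0.20, 0.60, 0.20],
    [0.02, 0.08, 0.90],
    [0.55, 0.35, 0.10],
    [0.10, 0.50, 0.40],
])
classes = np.array([1.0, 2.0, 3.0])        # poor=1, fair=2, good=3

# Pseudo-regression score: expected value of the predicted class.
score = probs @ classes

# Hypothetical average crowd votes for the same six images; Spearman's
# rho measures rank agreement between model scores and the crowd.
crowd = np.array([2.9, 1.3, 2.0, 2.8, 1.4, 2.3])
rho, _ = spearmanr(score, crowd)
```

Rank correlation is the natural choice here because the pseudo-regression score and the crowd averages live on different scales.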

  2. SNPflow: A Lightweight Application for the Processing, Storing and Automatic Quality Checking of Genotyping Assays

    PubMed Central

    Schönherr, Sebastian; Neuner, Mathias; Forer, Lukas; Specht, Günther; Kloss-Brandstätter, Anita; Kronenberg, Florian; Coassin, Stefan

    2013-01-01

    Single nucleotide polymorphisms (SNPs) play a prominent role in modern genetics. Current genotyping technologies such as Sequenom iPLEX, ABI TaqMan and KBioscience KASPar made the genotyping of huge SNP sets in large populations straightforward and allow the generation of hundreds of thousands of genotypes even in medium sized labs. While data generation is straightforward, the subsequent data conversion, storage and quality control steps are time-consuming, error-prone and require extensive bioinformatic support. In order to ease this tedious process, we developed SNPflow. SNPflow is a lightweight, intuitive and easily deployable application, which processes genotype data from Sequenom MassARRAY (iPLEX) and ABI 7900HT (TaqMan, KASPar) systems and is extendible to other genotyping methods as well. SNPflow automatically converts the raw output files to ready-to-use genotype lists, calculates all standard quality control values such as call rate, expected and real amount of replicates, minor allele frequency, absolute number of discordant replicates, discordance rate and the p-value of the HWE test, checks the plausibility of the observed genotype frequencies by comparing them to HapMap/1000-Genomes, provides a module for the processing of SNPs, which allow sex determination for DNA quality control purposes and, finally, stores all data in a relational database. SNPflow runs on all common operating systems and comes as both stand-alone version and multi-user version for laboratory-wide use. The software, a user manual, screenshots and a screencast illustrating the main features are available at http://genepi-snpflow.i-med.ac.at. PMID:23527209
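A few of the standard per-SNP QC values listed above (call rate, minor allele frequency, HWE test p-value) can be computed from genotype counts alone. This is a minimal sketch, not SNPflow code; the 0/1/2 minor-allele-count encoding and the one-degree-of-freedom chi-square HWE test are conventional choices assumed for the example.

```python
import numpy as np
from scipy.stats import chi2

def snp_qc(genotypes):
    """Per-SNP QC values: call rate, minor allele frequency (MAF) and
    a chi-square Hardy-Weinberg equilibrium p-value. Genotypes are
    minor-allele counts (0/1/2); None means no call."""
    called = [g for g in genotypes if g is not None]
    call_rate = len(called) / len(genotypes)
    n = len(called)
    aa, ab, bb = called.count(0), called.count(1), called.count(2)
    p = (2 * bb + ab) / (2 * n)          # allele B frequency
    maf = min(p, 1 - p)
    # Hardy-Weinberg expected counts and 1-df chi-square test.
    exp = np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2]) * n
    obs = np.array([aa, ab, bb])
    stat = ((obs - exp) ** 2 / exp).sum()
    hwe_p = float(chi2.sf(stat, df=1))
    return call_rate, maf, hwe_p

# 104 samples: 64 AA, 32 AB, 4 BB (exact HWE for p = 0.2), 4 no-calls.
geno = [0] * 64 + [1] * 32 + [2] * 4 + [None] * 4
call_rate, maf, hwe_p = snp_qc(geno)
```

Replicate concordance and the HapMap/1000-Genomes frequency plausibility check would be layered on top of these per-SNP values.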

  3. SNPflow: a lightweight application for the processing, storing and automatic quality checking of genotyping assays.

    PubMed

    Weissensteiner, Hansi; Haun, Margot; Schönherr, Sebastian; Neuner, Mathias; Forer, Lukas; Specht, Günther; Kloss-Brandstätter, Anita; Kronenberg, Florian; Coassin, Stefan

    2013-01-01

    Single nucleotide polymorphisms (SNPs) play a prominent role in modern genetics. Current genotyping technologies such as Sequenom iPLEX, ABI TaqMan and KBioscience KASPar made the genotyping of huge SNP sets in large populations straightforward and allow the generation of hundreds of thousands of genotypes even in medium sized labs. While data generation is straightforward, the subsequent data conversion, storage and quality control steps are time-consuming, error-prone and require extensive bioinformatic support. In order to ease this tedious process, we developed SNPflow. SNPflow is a lightweight, intuitive and easily deployable application, which processes genotype data from Sequenom MassARRAY (iPLEX) and ABI 7900HT (TaqMan, KASPar) systems and is extendible to other genotyping methods as well. SNPflow automatically converts the raw output files to ready-to-use genotype lists, calculates all standard quality control values such as call rate, expected and real amount of replicates, minor allele frequency, absolute number of discordant replicates, discordance rate and the p-value of the HWE test, checks the plausibility of the observed genotype frequencies by comparing them to HapMap/1000-Genomes, provides a module for the processing of SNPs, which allow sex determination for DNA quality control purposes and, finally, stores all data in a relational database. SNPflow runs on all common operating systems and comes as both stand-alone version and multi-user version for laboratory-wide use. The software, a user manual, screenshots and a screencast illustrating the main features are available at http://genepi-snpflow.i-med.ac.at. PMID:23527209

  4. Automatic and objective assessment of alternating tapping performance in Parkinson's disease.

    PubMed

    Memedi, Mevludin; Khan, Taha; Grenholm, Peter; Nyholm, Dag; Westin, Jerker

    2013-01-01

    This paper presents the development and evaluation of a method for enabling quantitative and automatic scoring of alternating tapping performance of patients with Parkinson's disease (PD). Ten healthy elderly subjects and 95 patients in different clinical stages of PD have utilized a touch-pad handheld computer to perform alternate tapping tests in their home environments. First, a neurologist used a web-based system to visually assess impairments in four tapping dimensions ('speed', 'accuracy', 'fatigue' and 'arrhythmia') and a global tapping severity (GTS). Second, tapping signals were processed with time series analysis and statistical methods to derive 24 quantitative parameters. Third, principal component analysis was used to reduce the dimensions of these parameters and to obtain scores for the four dimensions. Finally, a logistic regression classifier was trained using a 10-fold stratified cross-validation to map the reduced parameters to the corresponding visually assessed GTS scores. Results showed that the computed scores correlated well to visually assessed scores and were significantly different across Unified Parkinson's Disease Rating Scale scores of upper limb motor performance. In addition, they had good internal consistency, had good ability to discriminate between healthy elderly and patients in different disease stages, had good sensitivity to treatment interventions and could reflect the natural disease progression over time. In conclusion, the automatic method can be useful to objectively assess the tapping performance of PD patients and can be included in telemedicine tools for remote monitoring of tapping. PMID:24351667
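The classification stage (dimension reduction followed by a logistic-regression classifier under 10-fold stratified cross-validation) can be sketched with scikit-learn on synthetic data. The 24 invented "tapping parameters" and the class-separation strength are assumptions for the example, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the tapping data: 24 quantitative parameters
# per test, with the class (a GTS-like score 0/1/2) shifting a few of
# them.
rng = np.random.default_rng(5)
gts = rng.integers(0, 3, size=300)
X = rng.normal(size=(300, 24))
X[:, :4] += gts[:, None] * 1.5          # class-informative parameters

# PCA reduces the 24 parameters, and logistic regression maps the
# reduced parameters to the severity classes; stratified 10-fold CV
# estimates out-of-sample accuracy, mirroring the paper's pipeline.
clf = make_pipeline(PCA(n_components=4), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(clf, X, gts, cv=cv).mean()
```

Stratification matters here because the severity classes are unbalanced in clinical cohorts; it keeps each fold's class proportions representative.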

  5. Automatic and Objective Assessment of Alternating Tapping Performance in Parkinson's Disease

    PubMed Central

    Memedi, Mevludin; Khan, Taha; Grenholm, Peter; Nyholm, Dag; Westin, Jerker

    2013-01-01

    This paper presents the development and evaluation of a method for enabling quantitative and automatic scoring of alternating tapping performance of patients with Parkinson's disease (PD). Ten healthy elderly subjects and 95 patients in different clinical stages of PD have utilized a touch-pad handheld computer to perform alternate tapping tests in their home environments. First, a neurologist used a web-based system to visually assess impairments in four tapping dimensions (‘speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’) and a global tapping severity (GTS). Second, tapping signals were processed with time series analysis and statistical methods to derive 24 quantitative parameters. Third, principal component analysis was used to reduce the dimensions of these parameters and to obtain scores for the four dimensions. Finally, a logistic regression classifier was trained using a 10-fold stratified cross-validation to map the reduced parameters to the corresponding visually assessed GTS scores. Results showed that the computed scores correlated well to visually assessed scores and were significantly different across Unified Parkinson's Disease Rating Scale scores of upper limb motor performance. In addition, they had good internal consistency, had good ability to discriminate between healthy elderly and patients in different disease stages, had good sensitivity to treatment interventions and could reflect the natural disease progression over time. In conclusion, the automatic method can be useful to objectively assess the tapping performance of PD patients and can be included in telemedicine tools for remote monitoring of tapping. PMID:24351667

  7. An algorithm used for quality criterion automatic measurement of band-pass filters and its device implementation

    NASA Astrophysics Data System (ADS)

    Liu, Qianshun; Liu, Yan; Yu, Feihong

    2013-08-01

    As a kind of thin-film device, the band-pass filter is widely used in pattern recognition, infrared detection, optical fiber communication, etc. In this paper, an algorithm for the automatic measurement of band-pass filter quality criteria is proposed, based on a proven theoretical calculation of the filter's spectral transmittance. First, a wavelet transform is used to reduce noise in the spectrum data. Second, combining Gaussian curve fitting with the least squares method, the algorithm fits the spectrum curve and searches for the peak. Finally, the parameters for judging band-pass filter quality are computed. Based on the algorithm, a pipeline system for the automatic measurement of band-pass filters has been designed that can scan a filter array automatically and display the spectral transmittance of each filter. At the same time, the system compares the measured results with user-defined standards to determine whether each filter is qualified. Qualified products are marked in green and unqualified products in red. Experiments verified that the automatic measurement system achieves comprehensive, accurate and rapid measurement of band-pass filter quality, meeting the expected results.
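
    The fitting-and-peak-search step can be illustrated with a least-squares Gaussian fit of a transmittance curve. This is only an illustrative sketch: the filter parameters (peak transmittance, centre wavelength, width) and the noise level below are invented, not measured data.

```python
# Least-squares Gaussian fit of a synthetic band-pass transmittance curve,
# then derivation of quality criteria: centre wavelength, peak transmittance
# and FWHM. Parameters and noise are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wl, peak, center, width):
    return peak * np.exp(-((wl - center) ** 2) / (2 * width ** 2))

wavelength = np.linspace(500, 700, 400)                  # nm
true = gaussian(wavelength, 0.92, 632.8, 5.0)            # ideal transmittance
noisy = true + np.random.default_rng(1).normal(0, 0.01, wavelength.size)

# Fit, starting from the raw peak location as an initial guess.
(peak, center, width), _ = curve_fit(
    gaussian, wavelength, noisy,
    p0=[1.0, wavelength[np.argmax(noisy)], 10.0])
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(width)           # full width at half max
print(round(center, 1), round(peak, 2), round(fwhm, 1))
```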

  8. Automated data quality assessment of marine sensors.

    PubMed

    Timms, Greg P; de Souza, Paulo A; Reznik, Leon; Smith, Daniel V

    2011-01-01

    The automated collection of data (e.g., through sensor networks) has led to a massive increase in the quantity of environmental and other data available. The sheer quantity of data and growing need for real-time ingestion of sensor data (e.g., alerts and forecasts from physical models) means that automated Quality Assurance/Quality Control (QA/QC) is necessary to ensure that the data collected is fit for purpose. Current automated QA/QC approaches provide assessments based upon hard classifications of the gathered data; often as a binary decision of good or bad data that fails to quantify our confidence in the data for use in different applications. We propose a novel framework for automated data quality assessments that uses Fuzzy Logic to provide a continuous scale of data quality. This continuous quality scale is then used to compute error bars upon the data, which quantify the data uncertainty and provide a more meaningful measure of the data's fitness for purpose in a particular application compared with hard quality classifications. The design principles of the framework are presented and enable both data statistics and expert knowledge to be incorporated into the uncertainty assessment. We have implemented and tested the framework upon a real time platform of temperature and conductivity sensors that have been deployed to monitor the Derwent Estuary in Hobart, Australia. Results indicate that the error bars generated from the Fuzzy QA/QC implementation are in good agreement with the error bars manually encoded by a domain expert.
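
    The idea of a continuous quality scale driving error bars can be sketched with simple fuzzy membership functions. The thresholds, the climatological range, the spike test and the error-bar mapping below are invented for illustration; they are not the authors' rules.

```python
# Illustrative fuzzy QA/QC sketch: map sensor readings to a continuous [0, 1]
# quality score via trapezoid-like memberships, then widen error bars as
# quality drops. All thresholds are assumptions.
import numpy as np

def membership_in_range(x, lo, hi, soft):
    """1 inside [lo, hi], falling linearly to 0 over a 'soft' margin."""
    below = np.clip((x - (lo - soft)) / soft, 0, 1)
    above = np.clip(((hi + soft) - x) / soft, 0, 1)
    return np.minimum(below, above)

temps = np.array([12.1, 12.3, 12.2, 19.5, 12.4])  # degC, one suspect spike
range_q = membership_in_range(temps, 10.0, 15.0, 5.0)                    # range test
step_q = membership_in_range(np.abs(np.gradient(temps)), 0.0, 1.0, 3.0)  # spike test
quality = np.minimum(range_q, step_q)             # fuzzy AND of the two tests

base_error = 0.1                                  # nominal sensor accuracy, degC
error_bars = base_error / np.maximum(quality, 0.05)   # low quality -> wide bars
print(quality.round(2), error_bars.round(2))
```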

  9. End-to-end image quality assessment

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    2012-05-01

    An innovative computerized benchmarking approach (US patent pending, Sep 2011) is presented, based on extensive application of photometry, geometrical optics and digital media, using a randomized target viewed by a standard observer to assess the image quality of video imaging systems at different daytime and low-light luminance levels. It takes into account the target's contrast and color characteristics, as well as the observer's visual acuity and dynamic response. This includes human vision as part of the "extended video imaging system" (EVIS), and allows image quality assessment by several standard observers simultaneously.

  10. Software quality assessment for health care systems.

    PubMed

    Braccini, G; Fabbrini, F; Fusani, M

    1997-01-01

    The problem of defining a quality model to be used in the evaluation of the software components of a Health Care System (HCS) is addressed. The model, based on the ISO/IEC 9126 standard, has been interpreted to fit the requirements of some classes of applications representative of Health Care Systems, on the basis of the experience gained both in the field of medical Informatics and assessment of software products. The values resulting from weighing the quality characteristics according to their criticality outline a set of quality profiles that can be used both for evaluation and certification.

  11. Fully automatic measurements of axial vertebral rotation for assessment of spinal deformity in idiopathic scoliosis

    NASA Astrophysics Data System (ADS)

    Forsberg, Daniel; Lundström, Claes; Andersson, Mats; Vavruch, Ludvig; Tropp, Hans; Knutsson, Hans

    2013-03-01

    Reliable measurements of spinal deformities in idiopathic scoliosis are vital, since they are used for assessing the degree of scoliosis, deciding upon treatment and monitoring the progression of the disease. However, commonly used two-dimensional methods (e.g. the Cobb angle) do not fully capture the three-dimensional deformity at hand in scoliosis, of which axial vertebral rotation (AVR) is considered to be of great importance. There are manual methods for measuring AVR, but they are often time-consuming and associated with high intra- and inter-observer variability. In this paper, we present a fully automatic method for estimating AVR in images from computed tomography. The proposed method is evaluated on four scoliotic patients with 17 vertebrae each and compared with manual measurements performed by three observers using the standard method by Aaro-Dahlborn. The comparison shows that the difference in measured AVR between automatic and manual measurements is on the same level as the inter-observer difference. This is further supported by a high intraclass correlation coefficient (0.971-0.979), obtained when comparing the automatic measurements with the manual measurements of each observer. Hence, the provided results and the computational performance, requiring only approximately 10 to 15 s to process an entire volume, demonstrate the potential clinical value of the proposed method.
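
    Automatic-versus-manual agreement of the kind reported here can be quantified with an intraclass correlation coefficient. The sketch below implements one common form, the two-way mixed, single-measure consistency ICC(3,1); the AVR values are hypothetical, and the paper may have used a different ICC variant.

```python
# ICC(3,1) from the two-way ANOVA decomposition of a subjects x raters table.
import numpy as np

def icc_3_1(ratings):
    """Two-way mixed, single-measure consistency ICC(3,1).
    ratings: (n_subjects, k_raters) array."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)          # per-subject means
    col_means = ratings.mean(axis=0)          # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                   # between-subjects mean square
    mse = ss_err / ((n - 1) * (k - 1))        # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse)

# Hypothetical AVR measurements (degrees): automatic method vs one observer.
auto = np.array([2.0, 5.1, 8.2, 3.9, 6.5, 10.1])
manual = np.array([2.3, 5.0, 8.6, 4.2, 6.1, 10.4])
icc = icc_3_1(np.column_stack([auto, manual]))
print(round(icc, 3))
```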

  12. Assessing the quality of nursing work life.

    PubMed

    Brooks, Beth A; Storfjell, Judy; Omoike, Osei; Ohlson, Susan; Stemler, Irene; Shaver, Joan; Brown, Amy

    2007-01-01

    Traditionally, nursing has measured job satisfaction by focusing on employees' likes and dislikes. However, job satisfaction is an unsatisfactory construct for assessing either the jobs themselves or employees' feelings about work, since as much as 30% of the variance explained in job satisfaction surveys is a function of personality, something employers can do little to change. Based on socio-technical systems theory, quality of nursing work life (QNWL) assessments focus on identifying opportunities for nurses to improve their work and work environment while achieving the organization's goals. Moreover, some evidence suggests that improvements in work life are needed to improve productivity. Therefore, assessing QNWL reveals areas for improvement where the needs of both the employees and the organization converge. The purpose of this article was to assess the QNWL of staff nurses using Brooks' Quality of Nursing Work Life Survey. PMID:17413509

  13. Quality assurance in the production of pipe fittings by automatic laser-based material identification

    NASA Astrophysics Data System (ADS)

    Moench, Ingo; Peter, Laszlo; Priem, Roland; Sturm, Volker; Noll, Reinhard

    1999-09-01

    In plants of the chemical, nuclear and off-shore industries, application-specific high-alloyed steels are used for pipe fittings. Mixing of different steel grades can lead to corrosion with severe consequential damage. Growing quality requirements and environmental responsibilities demand 100% material control in the production of pipe fittings. Therefore, LIFT, an automatic inspection machine, was developed to guard against any mixing of material grades. LIFT is able to identify more than 30 different steel grades. The inspection method is based on Laser-Induced Breakdown Spectrometry (LIBS). An expert system, which can be easily trained and recalibrated, was developed for the data evaluation. The result of the material inspection is transferred to an external handling system via a PLC interface. The inspection process takes 2 seconds. The graphical user interface was designed with the requirements of an unskilled operator in mind. The software is based on a real-time operating system and provides safe and reliable operation. An interface for remote maintenance by modem enables fast operational support. Logged data are retrieved and evaluated; this is the basis for an adaptive improvement of the configuration of LIFT with respect to changing requirements in the production line. Within the first six months of routine operation, about 50,000 pipe fittings were inspected.

  14. Using statistical analysis and artificial intelligence tools for automatic assessment of video sequences

    NASA Astrophysics Data System (ADS)

    Ekobo Akoa, Brice; Simeu, Emmanuel; Lebowsky, Fritz

    2014-01-01

    This paper proposes two novel approaches to Video Quality Assessment (VQA). Both approaches attempt to develop video evaluation techniques capable of replacing human judgment when rating video quality in subjective experiments. The underlying study consists of selecting fundamental quality metrics based on Human Visual System (HVS) models and using artificial intelligence solutions as well as advanced statistical analysis. This new combination enables suitable video quality ratings while taking multiple quality metrics as input. The first method uses a neural-network-based machine learning process. The second method evaluates video quality using a non-linear regression model. The efficiency of the proposed methods is demonstrated by comparing their results with those of existing work done on synthetic video artifacts. The results obtained by each method are compared with scores from a database resulting from subjective experiments.
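
    The first approach, a neural network mapping several objective quality metrics to a subjective score, might be prototyped as follows. The metric names and the pseudo-MOS relation are synthetic assumptions for the sketch; the paper's actual HVS-based metrics and network architecture are not reproduced.

```python
# Sketch: a small neural network learns to map objective quality metrics to
# a subjective score (MOS). All data below is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
metrics = rng.uniform(size=(300, 3))                 # e.g. blockiness, blur, noise
mos = 5 - 3 * metrics.mean(axis=1) + rng.normal(0, 0.1, 300)  # pseudo-MOS in ~[2, 5]

X_tr, X_te, y_tr, y_te = train_test_split(metrics, mos, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(X_tr, y_tr)
r2 = net.score(X_te, y_te)                           # R^2 on held-out clips
print(round(r2, 2))
```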

  15. An open source automatic quality assurance (OSAQA) tool for the ACR MRI phantom.

    PubMed

    Sun, Jidi; Barnes, Michael; Dowling, Jason; Menk, Fred; Stanwell, Peter; Greer, Peter B

    2015-03-01

    Routine quality assurance (QA) is necessary and essential to ensure MR scanner performance. This includes geometric distortion, slice positioning and thickness accuracy, high-contrast spatial resolution, intensity uniformity, ghosting artefact and low-contrast object detectability. However, this manual process can be very time-consuming. This paper describes the development and validation of an open source tool to automate the MR QA process, which aims to increase physicist efficiency and improve the consistency of QA results by reducing human error. The OSAQA software was developed in Matlab and the source code is available for download from http://jidisun.wix.com/osaqa-project/. During program execution QA results are logged for immediate review and are also exported to a spreadsheet for long-term machine performance reporting. For the automatic contrast QA test, a user-specific contrast evaluation was designed to improve accuracy for individuals on different display monitors. American College of Radiology QA images were acquired over a period of 2 months to compare manual QA with the results from the proposed OSAQA software. OSAQA was found to significantly reduce the QA time from approximately 45 to 2 min. Both the manual and OSAQA results were found to agree with regard to the recommended criteria, and the differences were insignificant compared to the criteria. The intensity homogeneity filter is necessary to obtain an image of acceptable quality while keeping the high-contrast spatial resolution within the recommended criterion. The OSAQA tool has been validated on scanners with different field strengths and manufacturers. A number of suggestions have been made to improve both the phantom design and QA protocol in the future.
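
    One of the phantom checks listed above, intensity uniformity, can be sketched as a percent integral uniformity (PIU) computation over a central region of interest. This is a simplified min/max version for illustration (the formal ACR procedure averages small ROIs at the brightest and darkest locations), and the synthetic "phantom slice" is an assumption, not OSAQA's implementation.

```python
# Simplified percent integral uniformity over a central disc ROI:
# PIU = 100 * (1 - (max - min) / (max + min)).
import numpy as np

def percent_integral_uniformity(image, radius_frac=0.4):
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    roi = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= (radius_frac * min(h, w)) ** 2
    vals = image[roi]
    lo, hi = vals.min(), vals.max()
    return 100.0 * (1 - (hi - lo) / (hi + lo))

# Synthetic phantom slice: uniform signal with mild shading plus noise.
rng = np.random.default_rng(9)
slice_img = (1000
             + 20 * np.linspace(-1, 1, 256)[None, :]   # left-right shading
             + rng.normal(0, 2, (256, 256)))           # acquisition noise
piu = percent_integral_uniformity(slice_img)
print(round(piu, 1))
```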


  17. Objective assessment of MPEG-2 video quality

    NASA Astrophysics Data System (ADS)

    Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano

    2002-07-01

    The increasing use of video compression standards in broadcasting television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams by using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values, and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations for actual scoring curves concerning real test videos.

  18. Water quality issues and energy assessments

    SciTech Connect

    Davis, M.J.; Chiu, S.

    1980-11-01

    This report identifies and evaluates the significant water quality issues related to regional and national energy development. In addition, it recommends improvements in the Office's assessment capability. Handbook-style formatting, which includes a system of cross-references and prioritization, is designed to help the reader use the material.

  19. An assessment model for quality management

    NASA Astrophysics Data System (ADS)

    Völcker, Chr.; Cass, A.; Dorling, A.; Zilioli, P.; Secchi, P.

    2002-07-01

    SYNSPACE together with InterSPICE and Alenia Spazio is developing an assessment method to determine the capability of an organisation in the area of quality management. The method, sponsored by the European Space Agency (ESA), is called S9kS (SPiCE-9000 for SPACE). S9kS is based on ISO 9001:2000 with additions from the quality standards issued by the European Committee for Space Standardization (ECSS) and ISO 15504 - Process Assessments. The result is a reference model that supports the expansion of the generic process assessment framework provided by ISO 15504 to non-software areas. In order to be compliant with ISO 15504, requirements from ISO 9001 and ECSS-Q-20 and Q-20-09 have been turned into process definitions in terms of Purpose and Outcomes, supported by a list of detailed indicators such as Practices, Work Products and Work Product Characteristics. In coordination with this project, the capability dimension of ISO 15504 has been revised to be consistent with ISO 9001. As the contributions from ISO 9001 and the space quality assurance standards are separable, the stripped-down version S9k offers organisations in all industries an assessment model based solely on ISO 9001, and is therefore of interest to all organisations which intend to improve their quality management system based on ISO 9001.

  20. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures

    PubMed Central

    2016-01-01

    Information carried by an image can be distorted by the different image processing steps introduced by different electronic means of storage and communication. Therefore, the development of algorithms which can automatically assess the quality of an image in a way that is consistent with human evaluation is important. In this paper, an approach to image quality assessment (IQA) is proposed in which the quality of a given image is evaluated jointly by several IQA approaches. At first, in order to obtain such joint models, an optimisation problem of IQA measures aggregation is defined, where a weighted sum of their outputs, i.e., objective scores, is used as the aggregation operator. Then, the weight of each measure is considered as a decision variable in a problem of minimisation of the root mean square error between the obtained objective scores and subjective scores. Subjective scores reflect ground truth and involve evaluation of images by human observers. The optimisation problem is solved using a genetic algorithm, which also selects suitable measures used in aggregation. The obtained multimeasures are evaluated on the four largest widely used image benchmarks and compared against state-of-the-art full-reference IQA approaches. The results of the comparison reveal that the proposed approach outperforms other competing measures. PMID:27341493

  1. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures.

    PubMed

    Oszust, Mariusz

    2016-01-01

    Information carried by an image can be distorted by the different image processing steps introduced by different electronic means of storage and communication. Therefore, the development of algorithms which can automatically assess the quality of an image in a way that is consistent with human evaluation is important. In this paper, an approach to image quality assessment (IQA) is proposed in which the quality of a given image is evaluated jointly by several IQA approaches. At first, in order to obtain such joint models, an optimisation problem of IQA measures aggregation is defined, where a weighted sum of their outputs, i.e., objective scores, is used as the aggregation operator. Then, the weight of each measure is considered as a decision variable in a problem of minimisation of the root mean square error between the obtained objective scores and subjective scores. Subjective scores reflect ground truth and involve evaluation of images by human observers. The optimisation problem is solved using a genetic algorithm, which also selects suitable measures used in aggregation. The obtained multimeasures are evaluated on the four largest widely used image benchmarks and compared against state-of-the-art full-reference IQA approaches. The results of the comparison reveal that the proposed approach outperforms other competing measures. PMID:27341493
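
    The aggregation idea, finding weights so that a weighted sum of objective scores best matches subjective scores in the RMSE sense, can be sketched as below. A plain least-squares solve stands in here for the paper's genetic algorithm (which additionally selects which measures participate), and the three "IQA measures" are synthetic.

```python
# Fit aggregation weights minimising RMSE between a weighted sum of
# objective IQA scores and subjective scores. Data is synthetic; least
# squares replaces the paper's genetic algorithm for this sketch.
import numpy as np

rng = np.random.default_rng(7)
subjective = rng.uniform(1, 5, size=50)          # ground-truth MOS
# Three hypothetical IQA measures: noisy, differently scaled views of the MOS.
measures = np.column_stack([
    0.5 * subjective + rng.normal(0, 0.1, 50),
    -0.8 * subjective + rng.normal(0, 0.2, 50),
    0.2 * subjective + rng.normal(0, 0.05, 50),
])

weights, *_ = np.linalg.lstsq(measures, subjective, rcond=None)
predicted = measures @ weights                   # aggregated objective score
rmse = np.sqrt(np.mean((predicted - subjective) ** 2))
print(weights.round(2), round(rmse, 3))
```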

  2. Assessing uncertainty in stormwater quality modelling.

    PubMed

    Wijesiri, Buddhi; Egodawatta, Prasanna; McGree, James; Goonetilleke, Ashantha

    2016-10-15

    Designing effective stormwater pollution mitigation strategies is a challenge in urban stormwater management. This is primarily due to the limited reliability of catchment-scale stormwater quality modelling tools. As such, assessing the uncertainty associated with the information generated by stormwater quality models is important for informed decision making. Quantitative assessment of build-up and wash-off process uncertainty, which arises from the variability associated with these processes, is a major concern, as typical uncertainty assessment approaches do not adequately account for process uncertainty. The research study undertaken found that the variability of build-up and wash-off processes for different particle size ranges leads to process uncertainty. After variability and the resulting process uncertainties are accurately characterised, they can be incorporated into catchment stormwater quality predictions. Accounting for process uncertainty influences the uncertainty limits associated with predicted stormwater quality. The impact of build-up process uncertainty on stormwater quality predictions is greater than that of wash-off process uncertainty. Accordingly, decision making should facilitate the design of mitigation strategies which specifically address variations in the load and composition of pollutants accumulated during dry weather periods. Moreover, the study found that the influence of process uncertainty differs for stormwater quality predictions corresponding to storm events with different intensity, duration and generated runoff volume. These storm events were also found to be significantly different in terms of the Runoff-Catchment Area ratio. As such, the selection of storm events in the context of designing stormwater pollution mitigation strategies needs to take into consideration not only the storm event characteristics, but also the influence of process uncertainty on stormwater quality predictions.


  4. Surface water quality assessment by environmetric methods.

    PubMed

    Boyacioglu, Hülya; Boyacioglu, Hayal

    2007-08-01

    This environmetric study deals with the interpretation of river water monitoring data from the basin of the Buyuk Menderes River and its tributaries in Turkey. Eleven variables were measured to estimate water quality at 17 sampling sites. Factor analysis was applied to explain the correlations between the observations in terms of underlying factors. Results revealed that water quality was strongly affected by agricultural uses. Cluster analysis was used to classify stations with similar properties, and the results distinguished three groups of stations. Water quality downstream of the river was quite different from that of the other parts. It is recommended that environmetric data treatment be involved as a substantial procedure in the assessment of water quality data.
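
    The clustering step of such an environmetric workflow, grouping monitoring stations by their measured water-quality profiles, can be sketched as below. All station data here is invented (17 stations drawn around three pollution profiles); the study's actual variables and clustering settings are not reproduced.

```python
# Hierarchical (Ward) clustering of monitoring stations by water-quality
# profile, cut into three groups. Station data is synthetic.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)
# 17 stations x 11 measured variables, drawn around three distinct profiles.
profiles = rng.uniform(0, 1, size=(3, 11))
stations = np.vstack([profiles[i % 3] + rng.normal(0, 0.05, 11)
                      for i in range(17)])

groups = fcluster(linkage(stations, method="ward"), t=3, criterion="maxclust")
print(groups)   # cluster label per station
```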

  5. Automatic Vertebral Fracture Assessment System (AVFAS) for Spinal Pathologies Diagnosis Based on Radiograph X-Ray Images

    NASA Astrophysics Data System (ADS)

    Mustapha, Aouache; Hussain, Aini; Samad, Salina Abd; Bin Abdul Hamid, Hamzaini; Ariffin, Ahmad Kamal

    Nowadays, medical imaging has become a major tool in many clinical trials, because the technology enables rapid diagnosis with visualization and quantitative assessment that assist health practitioners and professionals. Since the medical and healthcare sector is a vast industry that is closely related to every citizen's quality of life, image-based medical diagnosis has become one of its important service areas. As such, a medical diagnostic imaging (MDI) software tool for assessing vertebral fractures is being developed, which we have named AVFAS, short for Automatic Vertebral Fracture Assessment System. The developed software system is capable of indexing, detecting and classifying vertebral fractures by measuring the shape and appearance of vertebrae in radiograph x-ray images of the spine. This paper describes the MDI software tool, which consists of three main sub-systems, the Medical Image Training & Verification System (MITVS), the Medical Image Measurement & Decision System (MIMDS) and the Medical Image Registration System (MIRS), in terms of its functionality, performance, ongoing research and outstanding technical issues.

  6. Automatic brain tumour detection and neovasculature assessment with multiseries MRI analysis.

    PubMed

    Szwarc, Pawel; Kawa, Jacek; Rudzki, Marcin; Pietka, Ewa

    2015-12-01

    In this paper a novel multi-stage automatic method for brain tumour detection and neovasculature assessment is presented. First, brain symmetry is exploited to register the magnetic resonance (MR) series analysed. Then, the intracranial structures are found and the region of interest (ROI) is constrained within them to the tumour and peritumoural areas using the Fluid-Attenuated Inversion Recovery (FLAIR) series. Next, the contrast-enhanced lesions are detected on the basis of T1-weighted (T1W) differential images acquired before and after contrast medium administration. Finally, their vascularisation is assessed based on the Regional Cerebral Blood Volume (RCBV) perfusion maps. The relative RCBV (rRCBV) map is calculated in relation to healthy white matter, also found automatically, and visualised on the analysed series. Three main types of brain tumours, i.e. HG gliomas, metastases and meningiomas, have been subjected to the analysis. The results of contrast-enhanced lesion detection were compared with manual delineations performed independently by two experts, yielding 64.84% sensitivity, 99.89% specificity and a 71.83% Dice Similarity Coefficient (DSC) for the twenty analysed studies of subjects diagnosed with brain tumours.

  7. Automatic information timeliness assessment of diabetes web sites by evidence based medicine.

    PubMed

    Sağlam, Rahime Belen; Taşkaya Temizel, Tuğba

    2014-11-01

    Studies on health domain have shown that health websites provide imperfect information and give recommendations which are not up to date with the recent literature even when their last modified dates are quite recent. In this paper, we propose a framework which assesses the timeliness of the content of health websites automatically by evidence based medicine. Our aim is to assess the accordance of website contents with the current literature and information timeliness disregarding the update time stated on the websites. The proposed method is based on automatic term recognition, relevance feedback and information retrieval techniques in order to generate time-aware structured queries. We tested the framework on diabetes health web sites which were archived between 2006 and 2013 by Archive-it using American Diabetes Association's (ADA) guidelines. The results showed that the proposed framework achieves 65% and 77% accuracy in detecting the timeliness of the web content according to years and pre-determined time intervals respectively. Information seekers and web site owners may benefit from the proposed framework in finding relevant and up-to-date diabetes web sites.

  8. Water Quality Assessment using Satellite Remote Sensing

    NASA Astrophysics Data System (ADS)

    Haque, Saad Ul

    2016-07-01

    The two main global issues related to water are its declining quality and quantity. Population growth, industrialization, increase in agriculture land and urbanization are the main causes upon which the inland water bodies are confronted with the increasing water demand. The quality of surface water has also been degraded in many countries over the past few decades due to the inputs of nutrients and sediments especially in the lakes and reservoirs. Since water is essential for not only meeting the human needs but also to maintain natural ecosystem health and integrity, there are efforts worldwide to assess and restore quality of surface waters. Remote sensing techniques provide a tool for continuous water quality information in order to identify and minimize sources of pollutants that are harmful for human and aquatic life. The proposed methodology is focused on assessing quality of water at selected lakes in Pakistan (Sindh); namely, HUBDAM, KEENJHAR LAKE, HALEEJI and HADEERO. These lakes are drinking water sources for several major cities of Pakistan including Karachi. Satellite imagery of Landsat 7 (ETM+) is used to identify the variation in water quality of these lakes in terms of their optical properties. All bands of Landsat 7 (ETM+) image are analyzed to select only those that may be correlated with some water quality parameters (e.g. suspended solids, chlorophyll a). The Optimum Index Factor (OIF) developed by Chavez et al. (1982) is used for selection of the optimum combination of bands. The OIF is calculated by dividing the sum of standard deviations of any three bands with the sum of their respective correlation coefficients (absolute values). It is assumed that the band with the higher standard deviation contains the higher amount of 'information' than other bands. Therefore, OIF values are ranked and three bands with the highest OIF are selected for the visual interpretation. A color composite image is created using these three bands. 
The water quality
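    The OIF calculation described in this record can be sketched as follows. This is an illustrative implementation, not the study's actual processing chain; the band names and synthetic data are assumptions.

```python
import itertools
import numpy as np

def optimum_index_factor(bands):
    """Rank all 3-band combinations by OIF: the sum of the three bands'
    standard deviations divided by the sum of the absolute pairwise
    correlation coefficients. Higher OIF = more information, less redundancy.

    bands: dict mapping band name -> 2-D array of pixel values.
    Returns a list of (band_trio, oif) sorted from highest to lowest OIF.
    """
    flat = {name: arr.ravel().astype(float) for name, arr in bands.items()}
    results = []
    for trio in itertools.combinations(flat, 3):
        std_sum = sum(np.std(flat[n]) for n in trio)
        corr_sum = sum(abs(np.corrcoef(flat[a], flat[b])[0, 1])
                       for a, b in itertools.combinations(trio, 2))
        results.append((trio, std_sum / corr_sum))
    return sorted(results, key=lambda t: t[1], reverse=True)
```

    The three bands in the top-ranked trio would then be composited for visual interpretation.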

  9. Automated Data Quality Assessment of Marine Sensors

    PubMed Central

    Timms, Greg P.; de Souza, Paulo A.; Reznik, Leon; Smith, Daniel V.

    2011-01-01

    The automated collection of data (e.g., through sensor networks) has led to a massive increase in the quantity of environmental and other data available. The sheer quantity of data and growing need for real-time ingestion of sensor data (e.g., alerts and forecasts from physical models) means that automated Quality Assurance/Quality Control (QA/QC) is necessary to ensure that the data collected is fit for purpose. Current automated QA/QC approaches provide assessments based upon hard classifications of the gathered data; often as a binary decision of good or bad data that fails to quantify our confidence in the data for use in different applications. We propose a novel framework for automated data quality assessments that uses Fuzzy Logic to provide a continuous scale of data quality. This continuous quality scale is then used to compute error bars upon the data, which quantify the data uncertainty and provide a more meaningful measure of the data’s fitness for purpose in a particular application compared with hard quality classifications. The design principles of the framework are presented and enable both data statistics and expert knowledge to be incorporated into the uncertainty assessment. We have implemented and tested the framework upon a real time platform of temperature and conductivity sensors that have been deployed to monitor the Derwent Estuary in Hobart, Australia. Results indicate that the error bars generated from the Fuzzy QA/QC implementation are in good agreement with the error bars manually encoded by a domain expert. PMID:22163714
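    The core idea of a continuous quality scale driving error bars can be sketched as below. The triangular membership function and the uncertainty-inflation rule are illustrative assumptions, not the authors' Fuzzy Logic implementation.

```python
def fuzzy_quality(value, expected, tolerance):
    """Continuous quality score in [0, 1] instead of a good/bad flag:
    1.0 inside the tolerance band, tapering linearly to 0 at 3x tolerance.
    (A simple triangular membership, chosen here for illustration.)"""
    dev = abs(value - expected)
    if dev <= tolerance:
        return 1.0
    return max(0.0, 1.0 - (dev - tolerance) / (2.0 * tolerance))

def error_bar(base_uncertainty, quality):
    """Inflate the sensor's base uncertainty as the quality score drops,
    so low-quality readings carry wide error bars rather than being discarded."""
    if quality <= 0.0:
        return float("inf")
    return base_uncertainty / quality
```

    A reading well inside its expected range keeps the instrument's nominal error bar; a suspect reading is kept but flagged with a proportionally wider one.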

  10. External quality assessment scheme and laboratory accreditation in Indonesia.

    PubMed

    Timan, Ina S; Aulia, Diana; Santoso, Witono

    2002-02-01

    The National Program on External Quality Assessment Scheme (NEQAS) in Indonesia was first started in 1979, organized by the Indonesian Ministry of Health in collaboration with professional bodies. The first trial was for clinical chemistry tests, with 2 cycles per year, followed by the hematology NEQAS in 1986 in collaboration with the WHO-Royal Postgraduate Medical School, London. After that, schemes for serology, microbiology and parasitology were also organized. Around 500-600 laboratories throughout Indonesia participate each year in these quality control schemes, with 2-4 cycles per year. Samples are sent to participants and results are reported back to each laboratory. Poor performers must participate in a workshop or training course conducted by the Central Health Laboratory to improve their results. Participation in the NEQAS is mandatory for obtaining a laboratory license, and the Ministry of Health uses these schemes as one means of monitoring and coordinating the performance of laboratories throughout Indonesia. There are also other EQAS (External Quality Assessment Scheme) programs conducted by professional bodies, such as those for hemostasis, clinical chemistry and serology. In the course of conducting these schemes, it has been observed that manual methods were gradually replaced by automatic methods, especially in clinical chemistry and hematology laboratories, which also accounts for improvements in their results. Over the last 6 years, the Ministry of Health has also begun to conduct accreditation evaluations for hospitals, including their laboratory departments. Seven standards are evaluated, covering organization, administration and management, staffing, facilities and equipment, standard operating procedures, research and development, and quality control. This accreditation program is still in progress for all public and private hospital laboratories.

  11. Estimating the quality of pasturage in the municipality of Paragominas (PA) by means of automatic analysis of LANDSAT data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Dossantos, A. P.; Novo, E. M. L. D.; Duarte, V.

    1981-01-01

    The use of LANDSAT data to evaluate pasture quality in the Amazon region is demonstrated. Pasture degradation in deforested areas of a traditional tropical forest cattle-raising region was estimated. Automatic analysis using interactive multispectral analysis (IMAGE-100) shows that 24% of the deforested areas were occupied by natural vegetation regrowth, 24% by exposed soil, 15% by degraded pastures, and 46% was suitable grazing land.

  12. External Quality Assessment Schemes in Latin America.

    PubMed

    Migliarino, Gabriel Alejandro

    2015-11-01

    As professionals of the clinical laboratory, we must generate clinically useful results, products and services for patients' health care. Laboratories must participate in one or more proficiency testing (PT) or external quality assessment (EQA) programs as part of routine quality assurance. Nevertheless, participating per se is not enough. There are critical factors to take into consideration when selecting a PT or EQA provider. In most cases the survey providers offer assigned values obtained from a consensus of the results submitted by the participants; it is therefore critical to evaluate the consistency of the comparison group before interpretation and decision-making. PMID:27683496

  13. External Quality Assessment Schemes in Latin America

    PubMed Central

    2015-01-01

    As professionals of the clinical laboratory, we must generate clinically useful results, products and services for patients’ health care. Laboratories must participate in one or more proficiency testing (PT) or external quality assessment (EQA) programs as part of routine quality assurance. Nevertheless, participating per se is not enough. There are critical factors to take into consideration when selecting a PT or EQA provider. In most cases the survey’s providers offer assigned values obtained from a consensus of the results submitted by the participants; it is therefore critical to evaluate the consistency of the comparison group before interpretation and decision-making.

  14. Visual pattern degradation based image quality assessment

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Li, Leida; Shi, Guangming; Lin, Weisi; Wan, Wenfei

    2015-08-01

    In this paper, we introduce a visual pattern degradation based full-reference (FR) image quality assessment (IQA) method. Research on visual recognition indicates that the human visual system (HVS) is highly adaptive in extracting visual structures for scene understanding. Existing structure degradation based IQA methods mainly take local luminance contrast to represent structure, and measure quality as degradation of luminance contrast. In this paper, we suggest that structure includes not only luminance contrast but also orientation information. Therefore, we analyze the orientation characteristic for structure description. Inspired by the orientation selectivity mechanism in the primary visual cortex, we introduce a novel visual pattern to represent the structure of a local region. The quality is then measured as the degradation of both luminance contrast and visual pattern. Experimental results on five benchmark databases demonstrate that the proposed visual pattern can effectively represent visual structure and that the proposed IQA method performs better than existing IQA metrics.

  15. Assessing the quality of cost management

    SciTech Connect

    Fayne, V.; McAllister, A.; Weiner, S.B.

    1995-12-31

    Managing environmental programs can be effective only when good cost and cost-related management practices are developed and implemented. The Department of Energy's Office of Environmental Management (EM), recognizing this key role of cost management, initiated several cost and cost-related management activities including the Cost Quality Management (CQM) Program. The CQM Program includes an assessment activity, Cost Quality Management Assessments (CQMAs), and a technical assistance effort to improve program/project cost effectiveness. CQMAs provide a tool for establishing a baseline of cost-management practices and for measuring improvement in those practices. The result of the CQMA program is an organization that has an increasing cost-consciousness, improved cost-management skills and abilities, and a commitment to respond to the public's concerns for both a safe environment and prudent budget outlays. The CQMA program is part of the foundation of quality management practices in DOE. The CQMA process has contributed to better cost and cost-related management practices by providing measurements and feedback; defining the components of a quality cost-management system; and helping sites develop/improve specific cost-management techniques and methods.

  16. Scene reduction for subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Lewandowska (Tomaszewska), Anna

    2016-01-01

    Evaluation of image quality is important for many image processing systems, such as those used for acquisition, compression, restoration, enhancement, or reproduction. Its measurement is often accompanied by user studies, in which a group of observers rank or rate the results of several algorithms. Such user studies, known as subjective image quality assessment experiments, can be very time consuming and do not guarantee conclusive results. This paper is intended to help design an efficient and rigorous quality assessment experiment. We propose a method of limiting the number of scenes that need to be tested, which can significantly reduce the experimental effort and still capture relevant scene-dependent effects. To achieve this, we employ a clustering technique and evaluate it on the basis of compactness and separation criteria. The correlation between the results obtained from a set of images in an initial database and the results obtained from the reduced experiment is analyzed. Finally, we propose a procedure for reducing the number of scenes in the initial set. Four different assessment techniques were tested: single stimulus, double stimulus, forced choice, and similarity judgments. We conclude that in most cases, 9 to 12 judgments per evaluated algorithm for a large scene collection are sufficient to reduce the initial set of images.
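    Compactness and separation criteria of the kind mentioned in this record can be scored as below. The specific definitions (mean within-cluster distance, minimum between-centroid distance) are generic illustrations, not necessarily the exact criteria used in the paper.

```python
import numpy as np

def compactness_separation(features, labels):
    """Score a scene clustering: mean distance from each point to its
    cluster centroid (compactness, lower is better) and the minimum
    distance between centroids (separation, higher is better)."""
    centroids = {k: features[labels == k].mean(axis=0)
                 for k in np.unique(labels)}
    compactness = np.mean([np.linalg.norm(features[i] - centroids[labels[i]])
                           for i in range(len(features))])
    keys = list(centroids)
    separation = min(np.linalg.norm(centroids[a] - centroids[b])
                     for i, a in enumerate(keys) for b in keys[i + 1:])
    return float(compactness), float(separation)
```

    One representative scene per compact, well-separated cluster would then stand in for the full scene collection.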

  17. Automatic assessment of average diaphragm motion trajectory from 4DCT images through machine learning

    PubMed Central

    Li, Guang; Wei, Jie; Huang, Hailiang; Gaebler, Carl Philipp; Yuan, Amy; Deasy, Joseph O

    2016-01-01

    To automatically estimate the average diaphragm motion trajectory (ADMT) based on four-dimensional computed tomography (4DCT), facilitating clinical assessment of respiratory motion and motion variation and retrospective motion studies, we have developed an effective motion extraction approach and a machine-learning-based algorithm to estimate the ADMT. Eleven patients with 22 sets of 4DCT images (4DCT1 at simulation and 4DCT2 at treatment) were studied. After automatically segmenting the lungs, the differential volume-per-slice (dVPS) curves of the left and right lungs were calculated as a function of slice number for each phase with respect to full exhalation. After a 5-slice moving average was applied, the discrete cosine transform (DCT) was used to analyze the dVPS curves in the frequency domain. The dimensionality of the spectrum data was reduced by using the several lowest-frequency coefficients (fv) that account for most of the spectrum energy (Σfv²). The multiple linear regression (MLR) method was then applied to determine the weights of these frequencies by fitting the ground truth: the measured ADMT, represented by three pivot points of the diaphragm on each side. The 'leave-one-out' cross-validation method was employed to analyze the statistical performance of the prediction results in three image sets: 4DCT1, 4DCT2, and 4DCT1 + 4DCT2. The seven lowest frequencies in the DCT domain were found to be sufficient to approximate the patient dVPS curves (R = 91%-96% in MLR fitting). The mean error in the predicted ADMT using the leave-one-out method was 0.3 ± 1.9 mm for the left-side diaphragm and 0.0 ± 1.4 mm for the right-side diaphragm. The prediction error is lower in 4DCT2 than in 4DCT1, and lowest in 4DCT1 and 4DCT2 combined. This frequency-analysis-based machine learning technique was employed to predict the ADMT automatically with an acceptable error (0.2 ± 1.6 mm). This volumetric approach is not affected by the presence of the lung tumors
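    The feature-extraction and regression steps in this record (moving average, DCT truncation, multiple linear regression) can be sketched as follows. The function names and the pure-NumPy DCT-II are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dvps_features(dvps, n_coeff=7, window=5):
    """Smooth a differential volume-per-slice curve with a moving average,
    then keep the n_coeff lowest DCT-II coefficients as features."""
    kernel = np.ones(window) / window
    smooth = np.convolve(dvps, kernel, mode="same")
    n = len(smooth)
    k = np.arange(n_coeff)[:, None]
    i = np.arange(n)[None, :]
    # DCT-II basis: cos(pi * k * (2i + 1) / (2n))
    basis = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    return basis @ smooth

def fit_admt_weights(feature_rows, admt_values):
    """Multiple linear regression: least-squares weights (plus intercept)
    mapping DCT features to measured ADMT values."""
    X = np.column_stack([feature_rows, np.ones(len(feature_rows))])
    weights, *_ = np.linalg.lstsq(X, admt_values, rcond=None)
    return weights
```

    Leave-one-out validation would refit `fit_admt_weights` with each patient held out in turn and score the held-out prediction.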

  18. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    NASA Astrophysics Data System (ADS)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created with the conventional trial-and-error treatment planning process. Consequently, it is challenging to quantitatively assess and compare the accuracy of different treatment planning QA models. Therefore, we created a gold standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next, it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the gold standard plans for the rectum Dmean, V65 and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted - achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly
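    The overlap volume histogram at the core of this QA model can be sketched as below. It assumes a precomputed voxel-wise distance-to-target map and is a generic illustration, not the authors' implementation.

```python
import numpy as np

def overlap_volume_histogram(oar_mask, dist_to_target, radii):
    """For each expansion radius r, return the fraction of organ-at-risk
    (OAR) voxels lying within distance r of the target surface.

    oar_mask: boolean array marking OAR voxels.
    dist_to_target: array (same grid) of each voxel's distance to the target.
    """
    oar_dists = dist_to_target[oar_mask]
    return np.array([(oar_dists <= r).mean() for r in radii])
```

    Patients with similar OVH curves have geometrically similar OAR-target configurations, which is what lets achieved DVHs of prior patients predict what a new plan should attain.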

  19. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans.

    PubMed

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created with the conventional trial-and-error treatment planning process. Consequently, it is challenging to quantitatively assess and compare the accuracy of different treatment planning QA models. Therefore, we created a gold standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next, it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the gold standard plans for the rectum Dmean, V65 and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted - achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate

  20. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    NASA Astrophysics Data System (ADS)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created with the conventional trial-and-error treatment planning process. Consequently, it is challenging to quantitatively assess and compare the accuracy of different treatment planning QA models. Therefore, we created a gold standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next, it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the gold standard plans for the rectum Dmean, V65 and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted - achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate

  1. Quality Assessment of Domesticated Animal Genome Assemblies

    PubMed Central

    Seemann, Stefan E.; Anthon, Christian; Palasca, Oana; Gorodkin, Jan

    2015-01-01

    The era of high-throughput sequencing has made it relatively simple to sequence the genomes and transcriptomes of individuals from many species. In order to analyze the resulting sequencing data, high-quality reference genome assemblies are required. However, this is still a major challenge, and many domesticated animal genomes still need to be sequenced more deeply in order to produce high-quality assemblies. Meanwhile, ironically, the amount of RNAseq and other next-generation data produced frequently far exceeds that of the genomic sequence. Furthermore, basic comparative analysis is often hampered by the lack of genomic sequence. Herein, we quantify the quality of the genome assemblies of 20 domesticated animals and related species by assessing a range of measurable parameters, and we show that there is a positive correlation between the fraction of mappable reads from RNAseq data and genome assembly quality. We rank the genomes by their assembly quality and discuss the implications for genotype analyses. PMID:27279738

  2. Performance assessment of an RFID system for automatic surgical sponge detection in a surgery room.

    PubMed

    Dinis, H; Zamith, M; Mendes, P M

    2015-01-01

    A retained surgical instrument is a frequent incident in surgery rooms around the world, despite being considered an avoidable mistake. Hence, an automatic detection solution for retained surgical instruments is desirable. In this paper, the use of millimeter waves in the 60 GHz band for surgical material RFID is evaluated. An experimental procedure was performed to assess the suitability of this frequency range for short-distance communications with multiple obstacles. Furthermore, an antenna suitable for incorporation into surgical materials, such as sponges, is presented. The antenna's operating characteristics are evaluated to determine whether it is adequate for the studied application over the given frequency range and under different operating conditions, such as varying sponge water content.

  3. Assessing the Quality of Bioforensic Signatures

    SciTech Connect

    Sego, Landon H.; Holmes, Aimee E.; Gosink, Luke J.; Webb-Robertson, Bobbie-Jo M.; Kreuzer, Helen W.; Anderson, Richard M.; Brothers, Alan J.; Corley, Courtney D.; Tardiff, Mark F.

    2013-06-04

    We present a mathematical framework for assessing the quality of signature systems in terms of fidelity, cost, risk, and utility, a method we refer to as Signature Quality Metrics (SQM). We demonstrate the SQM approach by assessing the quality of a signature system designed to predict the culture medium used to grow a microorganism. The system consists of four chemical assays designed to identify various ingredients that could be used to produce the culture medium. The analytical measurements resulting from any combination of these four assays can be used in a Bayesian network to predict the probabilities that the microorganism was grown using one of eleven culture media. We evaluated fifteen combinations of the signature system by removing one or more of the assays from the Bayes network. We demonstrated that SQM can be used to distinguish between the various combinations in terms of attributes of interest. The approach assisted in clearly identifying the assays that were least informative, in large part because they could discriminate between only a few culture media, and in particular, culture media that are rarely used. There are limitations associated with the data that were used to train and test the signature system. Consequently, our intent is not to draw formal conclusions regarding this particular bioforensic system, but rather to illustrate an analytical approach that could be useful in comparing one signature system to another.

  4. Assessing Assessment Quality: Criteria for Quality Assurance in Design of (Peer) Assessment for Learning--A Review of Research Studies

    ERIC Educational Resources Information Center

    Tillema, Harm; Leenknecht, Martijn; Segers, Mien

    2011-01-01

    The interest in "assessment for learning" (AfL) has resulted in a search for new modes of assessment that are better aligned to students' learning how to learn. However, with the introduction of new assessment tools, also questions arose with respect to the quality of its measurement. On the one hand, the appropriateness of traditional,…

  5. Bacteriological Assessment of Spoon River Water Quality

    PubMed Central

    Lin, Shundar; Evans, Ralph L.; Beuscher, Davis B.

    1974-01-01

    Data from a study of five stations on the Spoon River, Ill., during June 1971 through May 1973 were analyzed for compliance with Illinois Pollution Control Board's water quality standards of a geometric mean limitation of 200 fecal coliforms per 100 ml. This bacterial limit was achieved about 20% of the time during June 1971 through May 1972, and was never achieved during June 1972 through May 1973. Ratios of fecal coliform to total coliform are presented. By using fecal coliform-to-fecal streptococcus ratios to sort out fecal pollution origins, it was evident that a concern must be expressed not only for municipal wastewater effluents to the receiving stream, but also for nonpoint sources of pollution in assessing the bacterial quality of a stream. PMID:4604145
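    The geometric-mean compliance check applied in this study reduces to a few lines. The helper below is an illustrative sketch; the function name is assumed.

```python
import math

def meets_fecal_coliform_standard(counts_per_100ml, limit=200):
    """Geometric mean of fecal coliform counts (per 100 ml) compared with
    the Illinois limit of 200 per 100 ml cited in the study.
    Returns (geometric_mean, compliant)."""
    logs = [math.log(c) for c in counts_per_100ml]
    gmean = math.exp(sum(logs) / len(logs))
    return gmean, gmean <= limit
```

    The geometric mean is used rather than the arithmetic mean because bacterial counts vary over orders of magnitude, and a single spike would otherwise dominate the result.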

  6. Quality assessment of strawberries (Fragaria species).

    PubMed

    Azodanlou, Ramin; Darbellay, Charly; Luisier, Jean-Luc; Villettaz, Jean-Claude; Amadò, Renato

    2003-01-29

    Several cultivars of strawberries (Fragaria sp.), grown under different conditions, were analyzed by both sensory and instrumental methods. The overall appreciation, as expressed by consumers, was mainly reflected by attributes such as sweetness and aroma. No strong correlation was obtained with odor, acidity, juiciness, or firmness. The sensory quality of strawberries can be assessed with a good level of confidence by measuring the total sugar level (°Brix) and the total amount of volatile compounds. Sorting out samples using the score obtained with a hedonic test (called the "hedonic classification method") allowed the correlation between consumers' appreciation and instrumental data to be considerably strengthened. On the basis of the results obtained, a quality model was proposed. Quantitative GC-FID analyses were performed to determine the major aroma components of strawberries. Methyl butanoate, ethyl butanoate, methyl hexanoate, cis-3-hexenyl acetate, and linalool were identified as the most important compounds for the taste and aroma of strawberries. PMID:12537447

  7. Fully automatic measuring system for assessing masticatory performance using β-carotene-containing gummy jelly.

    PubMed

    Nokubi, T; Yasui, S; Yoshimuta, Y; Kida, M; Kusunoki, C; Ono, T; Maeda, Y; Nokubi, F; Yokota, K; Yamamoto, T

    2013-02-01

    Despite the importance of masticatory performance in health promotion, assessment of masticatory performance has not been widely conducted to date because the methods are labour intensive. The purpose of this study is to investigate the accuracy of a novel system for automatically measuring masticatory performance that uses β-carotene-containing gummy jelly. To investigate the influence of rinsing time on comminuted jelly pieces expectorated from the oral cavity, divided jelly pieces were treated with two types of dye solution and then rinsed for various durations. Changes in photodiode (light receiver) voltages from light emitted through a solution of dissolved β-carotene from jelly pieces under each condition were compared with those of unstained jelly. To investigate the influence of dissolving time, changes in light receiver voltage resulting from an increase in division number were compared between three dissolving times. For all forms of divided test jelly and rinsing times, no significant differences in light receiver voltage were observed between any of the stain groups and the control group. Voltages decreased in a similar manner for all forms of divided jelly as dissolving time increased. The highest coefficient of determination (R² = 0.979) between the obtained voltage and the increased surface area of each divided jelly was seen at the 10 s dissolving time. These results suggested that our fully automatic system can estimate the increased surface area of comminuted gummy jelly as a parameter of masticatory performance with high accuracy after rinsing and dissolving operations of 10 s each.

  8. Validation of the automatic image analyser to assess retinal vessel calibre (ALTAIR): a prospective study protocol

    PubMed Central

    Garcia-Ortiz, Luis; Gómez-Marcos, Manuel A; Recio-Rodríguez, Jose I; Maderuelo-Fernández, Jose A; Chamoso-Santos, Pablo; Rodríguez-González, Sara; de Paz-Santana, Juan F; Merchan-Cifuentes, Miguel A; Corchado-Rodríguez, Juan M

    2014-01-01

    Introduction The fundus examination is a non-invasive evaluation of the microcirculation of the retina. The aim of the present study is to develop and validate (reliability and validity) the ALTAIR software platform (Automatic image analyser to assess retinal vessel calibre) in order to analyse its utility in different clinical environments. Methods and analysis A cross-sectional study in the first phase and a prospective observational study in the second with 4 years of follow-up. The study will be performed in a primary care centre and will include 386 participants. The main measurements will include carotid intima-media thickness, pulse wave velocity by Sphygmocor, cardio-ankle vascular index through the VASERA VS-1500, cardiac evaluation by a digital ECG and renal injury by microalbuminuria and glomerular filtration. The retinal vascular evaluation will be performed using a TOPCON TRCNW200 non-mydriatic retinal camera to obtain digital images of the retina, and the developed software (ALTAIR) will be used to automatically calculate the calibre of the retinal vessels, the vascularised area and the branching pattern. For software validation, the intraobserver and interobserver reliability, the concurrent validity of the vascular structure and function, as well as the association between the estimated retinal parameters and the evolution or onset of new lesions in the target organs or cardiovascular diseases will be examined. Ethics and dissemination The study has been approved by the clinical research ethics committee of the healthcare area of Salamanca. All study participants will sign an informed consent to agree to participate in the study in compliance with the Declaration of Helsinki and the WHO standards for observational studies. 
Validation of this tool will provide greater reliability to the analysis of retinal vessels by decreasing the intervention of the observer and will result in increased validity through the use of additional information, especially

  9. Institutional Quality Assessment of Higher Education: Dimensions, Criteria and Indicators

    ERIC Educational Resources Information Center

    Savickiene, Izabela; Pukelis, Kestutis

    2004-01-01

    The article discusses dimensions and criteria, which are used to assess the quality of higher education in different countries. The paper presents dimensions and criteria that could be appropriate for assessment of the quality of higher education at Lithuanian universities. Quality dimensions, assessment criteria and indicators are defined and…

  10. Automatic Assessment of Acquisition and Transmission Losses in Indian Remote Sensing Satellite Data

    NASA Astrophysics Data System (ADS)

    Roy, D.; Purna Kumari, B.; Manju Sarma, M.; Aparna, N.; Gopal Krishna, B.

    2016-06-01

    The quality of remote sensing data is an important parameter that defines the extent of its usability in various applications. The data from remote sensing satellites is received as raw data frames at the ground station. This data may be corrupted by data losses due to interference during data transmission, data acquisition and sensor anomalies. It is therefore important to assess the quality of the raw data before product generation, for early anomaly detection, faster corrective actions and minimization of product rejection. Manual screening of raw images is a time-consuming process and not very accurate. In this paper, an automated process for the identification and quantification of losses in raw data, such as pixel dropout, line loss and data loss due to sensor anomalies, is discussed. Quality assessment of raw scenes based on these losses is also explained. This process is introduced in the data pre-processing stage and gives users crucial data quality information at the time of browsing data for product ordering. It has also improved the product generation workflow by enabling faster and more accurate quality estimation.
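    A minimal sketch of flagging pixel dropout and line loss in a raw frame is shown below. The fill value and line-loss threshold are illustrative assumptions, not the values used in the described system.

```python
import numpy as np

def assess_raw_frame(frame, fill_value=0, line_loss_frac=0.98):
    """Count dropped pixels and mostly-lost scan lines in a raw frame
    (2-D array of detector counts). A line is 'lost' when at least
    line_loss_frac of its pixels carry the fill value."""
    dropped = frame == fill_value
    pixel_dropout = dropped.mean()                       # fraction of frame
    lost_lines = int((dropped.mean(axis=1) >= line_loss_frac).sum())
    return {"pixel_dropout_fraction": float(pixel_dropout),
            "lost_line_count": lost_lines}
```

    Per-frame statistics like these can then be aggregated per scene to produce the browse-time quality score the record describes.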

  11. [Quality assessment of continuing medical education].

    PubMed

    Lipp, M

    1996-04-01

Medical performance is subject to quality control. Continuous advanced training (CAT) and continuous medical education (CME) are essential, and their quality must be checked and assured: structure (contents, organizational form, framework, term, demands on teachers), process (term of the CAT, interaction between teachers and participants) and results (satisfaction and acceptance, increased knowledge, influence on medical treatment, improvement of the success rate of medical treatment). In emergency medicine one must differentiate between the necessity for CAT (e.g., certified proof required for working as an emergency physician) and a desire for CME (the individual task of the physician). The diversity of forms of CAT/CME reflects the different individual requirements. With the new German guidelines for qualification as an emergency physician, the "Fachkundenachweis Rettungsdienst", measures for quality assessment and assurance can be obtained. STRUCTURE QUALITY: The recommendations for obtaining the "Fachkundenachweis Rettungsdienst" that were valid until now date from 1983 and were set forth and interpreted very differently by the individual state medical boards. This led to problems in the comparability of the essential CAT. The quality of the structure has now been improved by establishing new minimum requirements for clinical activity, specification of particular knowledge, the number of supervised calls on the emergency car, and participation in interdisciplinary CAT courses dealing with general and special aspects of emergency medicine. The aim of these measures is not the (senseless) regimentation of CAT training measures, but the qualified transfer of specific medical knowledge and treatment guidelines. PROCESS QUALITY: On qualifying, hardly any physician has had any didactic and/or rhetorical education; the physician must make a personal effort to obtain a qualification of this kind. 
Conventional and commonly practised forms of learning

  12. Using Automatic Item Generation to Meet the Increasing Item Demands of High-Stakes Educational and Occupational Assessment

    ERIC Educational Resources Information Center

    Arendasy, Martin E.; Sommer, Markus

    2012-01-01

    The use of new test administration technologies such as computerized adaptive testing in high-stakes educational and occupational assessments demands large item pools. Classic item construction processes and previous approaches to automatic item generation faced the problems of a considerable loss of items after the item calibration phase. In this…

  13. Assessing the Effects of Automatically Delivered Stimulation on the Use of Simple Exercise Tools by Students with Multiple Disabilities.

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Oliva, Doretta; Campodonico, Francesca; Groeneweg, Jop

    2003-01-01

    This study assessed the effects of automatically delivered stimulation on the activity level and mood of three students with multiple disabilities during their use of a stepper and a stationary bicycle. Stimuli from a pool of favorite stimulus events were delivered electronically while students were actively exercising. Findings indicated the…

  14. Blind image quality assessment via deep learning.

    PubMed

    Hou, Weilong; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-06-01

    This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, lots of learning-based IQA models are proposed by analyzing the mapping from the images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information has been lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness. PMID:25122842
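
The "quality pooling" step above, converting a five-grade posterior into a scalar score, can be sketched as a weighted sum over grade anchors. The anchor values below are invented for illustration; the paper's pooling is more elaborate.

```python
# Hedged sketch of quality pooling: each of the five mental concepts
# (excellent...bad) gets an anchor value (assumed here), and the score is
# the posterior-weighted sum of anchors.

GRADE_ANCHORS = {"excellent": 100, "good": 80, "fair": 60, "poor": 40, "bad": 20}

def pool_quality(posterior):
    """posterior: dict grade -> probability (should sum to 1)."""
    return sum(GRADE_ANCHORS[g] * p for g, p in posterior.items())

score = pool_quality({"excellent": 0.1, "good": 0.6, "fair": 0.2,
                      "poor": 0.1, "bad": 0.0})
```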

  16. Comprehensive automatic assessment of retinal vascular abnormalities for computer-assisted retinopathy grading.

    PubMed

    Joshi, Vinayak; Agurto, Carla; VanNess, Richard; Nemeth, Sheila; Soliz, Peter; Barriga, Simon

    2014-01-01

One of the most important signs of systemic disease presenting on the retina is vascular abnormality, such as in hypertensive retinopathy. Manual analysis of fundus images by human readers is qualitative and lacks accuracy, consistency and repeatability. Present semi-automatic methods for vascular evaluation are reported to increase accuracy and reduce reader variability, but require extensive reader interaction, thus limiting software-aided efficiency. Automation thus holds a twofold promise: first, decreasing variability while increasing accuracy, and second, increasing efficiency. In this paper we propose fully automated software as a second-reader system for comprehensive assessment of retinal vasculature, which aids readers in the quantitative characterization of vessel abnormalities in fundus images. This system provides the reader with objective measures of vascular morphology, such as tortuosity and branching angles, and highlights areas with abnormalities, such as artery-venous nicking, copper and silver wiring, and retinal emboli, so that the reader can make a final screening decision. To test the efficacy of our system, we evaluated the change in performance of a newly certified retinal reader when grading a set of 40 color fundus images with and without the assistance of the software. The results demonstrated an improvement in the reader's performance with software assistance, in terms of accuracy of detection of vessel abnormalities, determination of retinopathy, and reading time. This system enables the reader to make a computer-assisted vasculature assessment with high accuracy and consistency, at a reduced reading time.
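
Tortuosity is one of the morphology measures the abstract mentions. A common definition, assumed here and not necessarily the one the authors used, is arc length divided by chord length along a traced vessel centerline:

```python
# Hypothetical arc-to-chord tortuosity of a sampled vessel centerline.
# A perfectly straight vessel scores 1.0; bends push the score above 1.

import math

def tortuosity(points):
    """points: list of (x, y) centerline samples, ordered along the vessel."""
    arc = sum(math.dist(points[i], points[i + 1])
              for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0)]
wavy = [(0, 0), (1, 1), (2, 0)]
```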

  17. Set Up of an Automatic Water Quality Sampling System in Irrigation Agriculture

    PubMed Central

    Heinz, Emanuel; Kraft, Philipp; Buchen, Caroline; Frede, Hans-Georg; Aquino, Eugenio; Breuer, Lutz

    2014-01-01

    We have developed a high-resolution automatic sampling system for continuous in situ measurements of stable water isotopic composition and nitrogen solutes along with hydrological information. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring Down Spectrometry System (WS-CRDS) for stable water isotope analysis (δ2H and δ18O), a reagentless hyperspectral UV photometer (ProPS) for monitoring nitrate content and various water level sensors for hydrometric information. The automatic sampling system consists of different sampling stations equipped with pumps, a switch cabinet for valve and pump control and a computer operating the system. The complete system is operated via internet-based control software, allowing supervision from nearly anywhere. The system is currently set up at the International Rice Research Institute (Los Baños, The Philippines) in a diversified rice growing system to continuously monitor water and nutrient fluxes. Here we present the system's technical set-up and provide initial proof-of-concept with results for the isotopic composition of different water sources and nitrate values from the 2012 dry season. PMID:24366178

  18. Assessing Negative Automatic Thoughts: Psychometric Properties of the Turkish Version of the Cognition Checklist

    PubMed Central

    Batmaz, Sedat; Ahmet Yuncu, Ozgur; Kocbiyik, Sibel

    2015-01-01

    Background: Beck’s theory of emotional disorder suggests that negative automatic thoughts (NATs) and the underlying schemata affect one’s way of interpreting situations and result in maladaptive coping strategies. Depending on their content and meaning, NATs are associated with specific emotions, and since they are usually quite brief, patients are often more aware of the emotion they feel. This relationship between cognition and emotion, therefore, is thought to form the background of the cognitive content specificity hypothesis. Researchers focusing on this hypothesis have suggested that instruments like the cognition checklist (CCL) might be an alternative to make a diagnostic distinction between depression and anxiety. Objectives: The aim of the present study was to assess the psychometric properties of the Turkish version of the CCL in a psychiatric outpatient sample. Patients and Methods: A total of 425 psychiatric outpatients 18 years of age and older were recruited. After a structured diagnostic interview, the participants completed the hospital anxiety depression scale (HADS), the automatic thoughts questionnaire (ATQ), and the CCL. An exploratory factor analysis was performed, followed by an oblique rotation. The internal consistency, test-retest reliability, and concurrent and discriminant validity analyses were undertaken. Results: The internal consistency of the CCL was excellent (Cronbach’s α = 0.95). The test-retest correlation coefficients were satisfactory (r = 0.80, P < 0.001 for CCL-D, and r = 0.79, P < 0.001 for CCL-A). The exploratory factor analysis revealed that a two-factor solution best fit the data. This bidimensional factor structure explained 51.27 % of the variance of the scale. The first factor consisted of items related to anxious cognitions, and the second factor of depressive cognitions. The CCL subscales significantly correlated with the ATQ (rs 0.44 for the CCL-D, and 0.32 for the CCL-A) as well as the other measures of
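
The internal-consistency figure reported above (Cronbach's α = 0.95) follows the standard formula, computed here from the definition on toy data; the item scores below are invented for illustration.

```python
# Cronbach's alpha from its textbook definition:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: list of per-item score lists, one list per scale item,
    all of equal length (one entry per respondent)."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # per-respondent totals
    item_var = sum(variance(it) for it in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

items = [[2, 4, 4, 5], [3, 4, 5, 5], [2, 5, 4, 4]]  # 3 items, 4 respondents
alpha = cronbach_alpha(items)
```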

  19. Quality assessment of Landsat surface reflectance products using MODIS data

    NASA Astrophysics Data System (ADS)

    Feng, Min; Huang, Chengquan; Channan, Saurabh; Vermote, Eric F.; Masek, Jeffrey G.; Townshend, John R.

    2012-01-01

    Surface reflectance adjusted for atmospheric effects is a primary input for land cover change detection and for developing many higher level surface geophysical parameters. With the development of automated atmospheric correction algorithms, it is now feasible to produce large quantities of surface reflectance products using Landsat images. Validation of these products requires in situ measurements, which either do not exist or are difficult to obtain for most Landsat images. The surface reflectance products derived using data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), however, have been validated more comprehensively. Because the MODIS on the Terra platform and the Landsat 7 are only half an hour apart following the same orbit, and each of the 6 Landsat spectral bands overlaps with a MODIS band, good agreements between MODIS and Landsat surface reflectance values can be considered indicators of the reliability of the Landsat products, while disagreements may suggest potential quality problems that need to be further investigated. Here we develop a system called Landsat-MODIS Consistency Checking System (LMCCS). This system automatically matches Landsat data with MODIS observations acquired on the same date over the same locations and uses them to calculate a set of agreement metrics. To maximize its portability, Java and open-source libraries were used in developing this system, and object-oriented programming (OOP) principles were followed to make it more flexible for future expansion. As a highly automated system designed to run as a stand-alone package or as a component of other Landsat data processing systems, this system can be used to assess the quality of essentially every Landsat surface reflectance image where spatially and temporally matching MODIS data are available. 
The effectiveness of this system was demonstrated using it to assess preliminary surface reflectance products derived using the Global Land Survey (GLS) Landsat
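
The agreement metrics LMCCS computes over matched Landsat/MODIS samples are not itemized in the abstract; the sketch below shows the kind of statistics such a checker would produce (bias, RMSD, fraction within tolerance). Metric names and the tolerance value are assumptions.

```python
# Illustrative agreement metrics between spatially/temporally matched
# surface reflectance samples from two sensors. Not LMCCS's actual code.

import math

def agreement_metrics(landsat, modis, tolerance=0.01):
    """Both inputs: equal-length lists of matched reflectance values."""
    diffs = [l - m for l, m in zip(landsat, modis)]
    n = len(diffs)
    bias = sum(diffs) / n                               # mean difference
    rmsd = math.sqrt(sum(d * d for d in diffs) / n)     # root-mean-square diff
    within = sum(1 for d in diffs if abs(d) <= tolerance) / n
    return {"bias": bias, "rmsd": rmsd, "fraction_within": within}

m = agreement_metrics([0.10, 0.22, 0.35], [0.105, 0.22, 0.30])
```

A low bias with a high fraction-within-tolerance would indicate the Landsat product is consistent with the better-validated MODIS product; large disagreements flag scenes for further investigation, as the abstract describes.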

  1. Quality assessment of clinical computed tomography

    NASA Astrophysics Data System (ADS)

    Berndt, Dorothea; Luckow, Marlen; Lambrecht, J. Thomas; Beckmann, Felix; Müller, Bert

    2008-08-01

    Three-dimensional images are vital for the diagnosis in dentistry and cranio-maxillofacial surgery. Artifacts caused by highly absorbing components such as metallic implants, however, limit the value of the tomograms. The dominant artifacts observed are blowout and streaks. Investigating the artifacts generated by metallic implants in a pig jaw, the data acquisition for the patients in dentistry should be optimized in a quantitative manner. A freshly explanted pig jaw including related soft-tissues served as a model system. Images were recorded varying the accelerating voltage and the beam current. The comparison with multi-slice and micro computed tomography (CT) helps to validate the approach with the dental CT system (3D-Accuitomo, Morita, Japan). The data are rigidly registered to comparatively quantify their quality. The micro CT data provide a reasonable standard for quantitative data assessment of clinical CT.

  2. Surgical quality assessment. A simplified approach.

    PubMed

    DeLong, D L

    1991-10-01

The current approach to QA primarily involves taking action when problems are discovered and designing a documentation system that records the delivery of quality care. Involving the entire staff helps eliminate problems before they occur. By keeping abreast of current problems and soliciting input from staff members, the QA at our hospital has improved dramatically. The cross-referencing of JCAHO and AORN standards on the assessment form and the single-sheet reporting form expedite the evaluation process and simplify record keeping. The bulletin board increases staff members' understanding of QA and boosts morale and participation. A sound and effective QA program does not require reorganizing an entire department, nor should it invoke negative connotations. Developing an effective QA program merely requires rethinking current processes. The program must meet the department's specific needs, and although many departments concentrate on documentation, auditing charts does not give a complete picture of the quality of care delivered. The QA committee must employ a variety of data collection methods on multiple indicators to ensure an accurate representation of the care delivered, and they must not overlook any issues that directly affect patient outcomes. PMID:1952907

  3. Automatic roof plane detection and analysis in airborne lidar point clouds for solar potential assessment.

    PubMed

    Jochem, Andreas; Höfle, Bernhard; Rutzinger, Martin; Pfeifer, Norbert

    2009-01-01

A relative height threshold is defined to separate potential roof points from the point cloud, followed by a segmentation of these points into homogeneous areas fulfilling the defined constraints of roof planes. The normal vector of each laser point is an excellent feature to decompose the point cloud into segments describing planar patches. An object-based error assessment is performed to determine the accuracy of the presented classification. It results in 94.4% completeness and 88.4% correctness. Once all roof planes are detected in the 3D point cloud, solar potential analysis is performed for each point. Shadowing effects of nearby objects are taken into account by calculating the horizon of each point within the point cloud. Effects of cloud cover are also considered by using data from a nearby meteorological station. As a result the annual sum of the direct and diffuse radiation for each roof plane is derived. The presented method uses the full 3D information for both feature extraction and solar potential analysis, which offers a number of new applications in fields where natural processes are influenced by the incoming solar radiation (e.g., evapotranspiration, distribution of permafrost). The presented method detected fully automatically a subset of 809 out of 1,071 roof planes where the arithmetic mean of the annual incoming solar radiation is more than 700 kWh/m².
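
The normal-vector decomposition mentioned above can be sketched as a greedy grouping of unit normals: points whose normals lie within an angular threshold of a segment's representative normal are merged into that planar patch. This is a simplification of the paper's segmentation, with an assumed 10° threshold.

```python
# Hypothetical greedy grouping of per-point unit normals into planar
# segments; a new segment is opened when no existing representative is
# within the angular threshold.

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def group_by_normal(normals, max_angle_deg=10.0):
    """normals: list of (nx, ny, nz) unit vectors; returns index groups."""
    cos_thr = math.cos(math.radians(max_angle_deg))
    segments = []  # list of (representative normal, member indices)
    for i, n in enumerate(normals):
        for rep, members in segments:
            if dot(n, rep) >= cos_thr:  # within the angular threshold
                members.append(i)
                break
        else:
            segments.append((n, [i]))
    return [members for _, members in segments]

# two near-vertical normals (a flat-ish roof patch) and one horizontal
normals = [(0, 0, 1), (0.0, 0.17, 0.985), (1, 0, 0)]
segs = group_by_normal(normals)
```

A production segmentation would use region growing over spatial neighbours rather than a global greedy pass, but the normal-similarity test is the same idea.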

  4. Improving Automatic English Writing Assessment Using Regression Trees and Error-Weighting

    NASA Astrophysics Data System (ADS)

    Lee, Kong-Joo; Kim, Jee-Eun

The proposed automated scoring system for English writing tests provides an assessment result, including a score and diagnostic feedback, to test-takers without human effort. The system analyzes an input sentence and detects errors related to spelling, syntax and content similarity. The scoring model adopts a statistical approach, a regression tree. A scoring model in general calculates a score based on the count and the types of automatically detected errors. Accordingly, a system with higher accuracy in detecting errors scores a test more accurately. The accuracy of the system, however, cannot be fully guaranteed for several reasons, such as parsing failure, incompleteness of knowledge bases, and the ambiguous nature of natural language. In this paper, we introduce an error-weighting technique, which is similar to the term-weighting widely used in information retrieval. The error-weighting technique is applied to judge the reliability of the errors detected by the system. The score calculated with the technique is shown to be more accurate than the score without it.
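
Since the paper describes its error-weighting only as analogous to term-weighting in information retrieval, the sketch below uses an idf-style weight as a stand-in: error types the detector flags almost everywhere (and hence may be unreliable) contribute less to the penalty. The formula and names are assumptions, not the paper's.

```python
# Hedged idf-style error weighting (an assumed analogue of the paper's
# technique): weight(e) = log(N / (1 + df_e)), where df_e is the number
# of sentences in which error type e was flagged.

import math

def error_weights(detections, n_docs):
    """detections: dict error_type -> number of sentences it was flagged in."""
    return {e: math.log(n_docs / (1 + df)) for e, df in detections.items()}

def weighted_penalty(errors_in_sentence, weights):
    """Sum the weights of the error types flagged in one sentence."""
    return sum(weights[e] for e in errors_in_sentence)

weights = error_weights({"spelling": 4, "agreement": 1}, n_docs=10)
penalty = weighted_penalty(["spelling", "agreement"], weights)
```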

  5. Groundwater quality data from the National Water-Quality Assessment Project, May 2012 through December 2013

    USGS Publications Warehouse

    Arnold, Terri L.; DeSimone, Leslie A.; Bexfield, Laura M.; Lindsey, Bruce D.; Barlow, Jeannie R.; Kulongoski, Justin T.; Musgrove, Marylynn; Kingsbury, James A.; Belitz, Kenneth

    2016-06-20

    Groundwater-quality data were collected from 748 wells as part of the National Water-Quality Assessment Project of the U.S. Geological Survey National Water-Quality Program from May 2012 through December 2013. The data were collected from four types of well networks: principal aquifer study networks, which assess the quality of groundwater used for public water supply; land-use study networks, which assess land-use effects on shallow groundwater quality; major aquifer study networks, which assess the quality of groundwater used for domestic supply; and enhanced trends networks, which evaluate the time scales during which groundwater quality changes. Groundwater samples were analyzed for a large number of water-quality indicators and constituents, including major ions, nutrients, trace elements, volatile organic compounds, pesticides, and radionuclides. These groundwater quality data are tabulated in this report. Quality-control samples also were collected; data from blank and replicate quality-control samples are included in this report.

  6. Using Psychometric Technology in Educational Assessment: The Case of a Schema-Based Isomorphic Approach to the Automatic Generation of Quantitative Reasoning Items

    ERIC Educational Resources Information Center

    Arendasy, Martin; Sommer, Markus

    2007-01-01

    This article deals with the investigation of the psychometric quality and constructs validity of algebra word problems generated by means of a schema-based version of the automatic min-max approach. Based on review of the research literature in algebra word problem solving and automatic item generation this new approach is introduced as a…

  7. Quality assessment of digital annotated ECG data from clinical trials by the FDA ECG Warehouse.

    PubMed

    Sarapa, Nenad

    2007-09-01

The FDA mandates that digital electrocardiograms (ECGs) from 'thorough' QTc trials be submitted into the ECG Warehouse in Health Level 7 (HL7) Extensible Markup Language (XML) format with annotated onset and offset points of waveforms. The FDA did not disclose the exact Warehouse metrics and minimal acceptable quality standards. The author describes the Warehouse scoring algorithms and metrics used by FDA, points out ways to improve FDA review and suggests Warehouse benefits for pharmaceutical sponsors. The Warehouse ranks individual ECGs according to their score for each quality metric and produces histogram distributions with Warehouse-specific thresholds that identify ECGs of questionable quality. Automatic Warehouse algorithms assess the quality of QT annotation and duration of manual QT measurement by the central ECG laboratory.
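
The ranking-and-thresholding idea above can be sketched as follows. The percentile cutoff is an invented placeholder, since the abstract notes the actual Warehouse thresholds are undisclosed.

```python
# Illustrative only: rank ECGs by one quality metric and flag those below
# a warehouse-style cutoff (here the bottom 25%, an assumed value).

def flag_questionable(scores, pct=0.25):
    """scores: dict ecg_id -> metric score (higher is better);
    returns the set of ids flagged as questionable quality."""
    ranked = sorted(scores, key=scores.get)   # worst first
    cutoff = int(len(ranked) * pct)
    return set(ranked[:cutoff])

flags = flag_questionable({"e1": 0.9, "e2": 0.4, "e3": 0.8, "e4": 0.7})
```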

  8. 42 CFR 493.1299 - Standard: Postanalytic systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Postanalytic systems quality assessment... AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing Postanalytic Systems § 493.1299 Standard: Postanalytic systems quality assessment. (a)...

  9. 42 CFR 493.1249 - Standard: Preanalytic systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Preanalytic systems quality assessment... AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing Preanalytic Systems § 493.1249 Standard: Preanalytic systems quality assessment. (a)...

  10. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis.

    PubMed

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text "The North Wind and the Sun" were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis.
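
The human-machine agreement reported above is a Pearson correlation between predicted and perceptual ratings. Implemented from the definition (the study itself fed Laryngograph and prosodic features into Support Vector Regression; the ratings below are invented):

```python
# Pearson's r from its definition: covariance of the two rating series
# divided by the product of their standard deviations.

import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

human = [1.0, 2.0, 2.5, 3.0, 4.0]     # hypothetical perceptual ratings
machine = [1.2, 1.9, 2.7, 2.8, 3.9]   # hypothetical model predictions
r = pearson_r(human, machine)
```

The ρ the study also reports is Spearman's rank correlation, i.e. the same formula applied to the ranks of the ratings rather than their raw values.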

  12. Beef quality assessed at European research centres.

    PubMed

    Dransfield, E; Nute, G R; Roberts, T A; Boccard, R; Touraille, C; Buchter, L; Casteels, M; Cosentino, E; Hood, D E; Joseph, R L; Schon, I; Paardekooper, E J

    1984-01-01

Loin steaks and cubes of M. semimembranosus from eight (12 month old) Galloway steers and eight (16-18 month old) Charolais cross steers raised in England and from which the meat was conditioned for 2 or 10 days, were assessed in research centres in Belgium, Denmark, England, France, the Federal Republic of Germany, Ireland, Italy and the Netherlands. Laboratory panels assessed meat by grilling the steaks and cooking the cubes in casseroles according to local custom, using scales developed locally as well as scales used frequently at other research centres. The meat was mostly of good quality but with sufficient variation to obtain meaningful comparisons. Tenderness and juiciness were assessed most, and flavour least, consistently. Over the 32 meats, acceptability of steaks and casseroles was in general compounded from tenderness, juiciness and flavour. However, when the meat was tough, toughness dominated the overall judgement; but when tender, flavour played an important rôle. Irish and English panels tended to weight more on flavour and Italian panels on tenderness and juiciness. Juiciness and tenderness were well correlated among all panels except in Italy and Germany. With flavour, however, Belgian, Irish, German and Dutch panels ranked the meats similarly and formed a group distinct from the others, which did not. The panels showed a similar grouping for judgements of acceptability. French and Belgian panels judged the steaks from the older Charolais cross steers to have more flavour and be more juicy than average and tended to prefer them. Casseroles from younger steers were invariably preferred, although the French and Belgian panels judged aged meat from older animals equally acceptable. These regional biases were thought to derive mainly from differences in cooking, but variations in the experience and perception of assessors also contributed. PMID:22055992

  13. QUALITY: A program to assess basis set quality

    NASA Astrophysics Data System (ADS)

    Sordo, J. A.

    1998-09-01

    A program to analyze in detail the quality of basis sets is presented. The information provided by the application of a wide variety of (atomic and/or molecular) quality criteria is processed by using a methodology that allows one to determine the most appropriate quality test to select a basis set to compute a given (atomic or molecular) property. Fuzzy set theory is used to choose the most adequate basis set to compute simultaneously a set of properties.
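
Since the abstract does not specify its fuzzy machinery, the sketch below uses the standard fuzzy intersection (minimum) as a stand-in: each basis set gets a membership value in [0, 1] per quality criterion, the minimum aggregates them, and the basis set with the highest aggregate wins. The basis-set names are real, but the membership scores are invented.

```python
# Hedged sketch of fuzzy basis-set selection: aggregate per-criterion
# memberships with the fuzzy intersection (min), then pick the argmax.

def best_basis_set(memberships):
    """memberships: dict name -> dict criterion -> membership in [0, 1]."""
    aggregate = {name: min(crit.values()) for name, crit in memberships.items()}
    return max(aggregate, key=aggregate.get), aggregate

choice, agg = best_basis_set({
    "6-31G*":  {"energy": 0.7, "dipole": 0.5, "geometry": 0.8},
    "cc-pVTZ": {"energy": 0.9, "dipole": 0.8, "geometry": 0.9},
})
```

Using min means a basis set is only as good as its weakest criterion, which matches the goal of computing several properties simultaneously.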

  14. 42 CFR 493.1289 - Standard: Analytic systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Analytic systems quality assessment. 493... Nonwaived Testing Analytic Systems § 493.1289 Standard: Analytic systems quality assessment. (a) The..., assess, and when indicated, correct problems identified in the analytic systems specified in §§...

  15. Assessment of sleep quality in powernapping.

    PubMed

    Takhtsabzy, Bashaer K; Thomsen, Carsten E

    2011-01-01

    The purpose of this study is to assess the Sleep Quality (SQ) in powernapping. The contributing factors for SQ assessment are time of Sleep Onset (SO), Sleep Length (SL), Sleep Depth (SD), and detection of sleep events (K-complex (KC) and Sleep Spindle (SS)). Data from daytime naps of 10 subjects, 2 days each, including EEG and ECG, were recorded. The SD and sleep events were analyzed by applying spectral analysis. The SO time was detected by a combination of signal spectral analysis, Slow Rolling Eye Movement (SREM) detection, Heart Rate Variability (HRV) analysis and EEG segmentation using both Autocorrelation Function (ACF) and Crosscorrelation Function (CCF) methods. The EEG derivation FP1-FP2 was filtered in a narrow band and used as an alternative to EOG for SREM detection. The ACF and CCF segmentation methods were also applied for detection of sleep events. The ACF method detects segment boundaries based on single-channel analysis, while the CCF includes spatial variation from multiple EEG derivations. The results indicate that SREM detection using EEG is possible and can be used as input together with power spectral analysis to enhance SO detection. Both segmentation methods could detect SO as a segment boundary. Additionally, they were able to contribute to detection of KC and SS events. The CCF method was more sensitive to spatial EEG changes, and the exact segment boundaries varied slightly between the two methods. The HRV analysis revealed that low and very low frequency variations in the heart rate were highly correlated with the EEG changes during both SO and variations in SD. Analyzing the relationship between the sleep events and SD showed a negative correlation between the Delta and Sigma activity. Analyzing the subjective measurement (SM) showed that there was a positive correlation between the SL and rated SQ. 
This preliminary study showed that the factors contributing to the overall SQ during powernapping can be assessed markedly better using a fusion
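The ACF segmentation idea described above can be sketched as follows; the window length, lag range, threshold, and toy signal are invented for illustration and are not the authors' implementation:

```python
# Minimal sketch (not the paper's code) of ACF-based EEG segmentation:
# declare a segment boundary where the short-lag autocorrelation profile
# of consecutive windows changes abruptly, e.g. when the dominant rhythm
# shifts from a slow to a fast oscillation.
import math

def acf(x, max_lag):
    """Normalised autocorrelation of x at lags 1..max_lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) or 1e-12
    return [sum((x[i] - mean) * (x[i + k] - mean) for i in range(n - k)) / var
            for k in range(1, max_lag + 1)]

def segment_boundaries(signal, win=64, max_lag=8, thresh=0.5):
    """Return sample indices where the ACF profile jumps by more than thresh."""
    profiles = [acf(signal[i:i + win], max_lag)
                for i in range(0, len(signal) - win + 1, win)]
    bounds = []
    for j in range(1, len(profiles)):
        if math.dist(profiles[j - 1], profiles[j]) > thresh:
            bounds.append(j * win)   # sample index of the boundary
    return bounds

# Toy signal: slow oscillation followed by a fast one -> one boundary at 256.
sig = [math.sin(2 * math.pi * i / 32) for i in range(256)] + \
      [math.sin(2 * math.pi * i / 4) for i in range(256)]
bounds = segment_boundaries(sig)
```

A CCF variant would compare profiles computed across multiple channels rather than within a single one, which is what makes it sensitive to spatial changes.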

  16. Set Up of an Automatic Water Quality Sampling System in Irrigation Agriculture

    NASA Astrophysics Data System (ADS)

    Heinz, Emanuel; Kraft, Philipp; Buchen, Caroline; Frede, Hans-Georg; Aquino, Eugenio; Breuer, Lutz

    2014-05-01

    Climate change already has a large impact on the availability of water resources. Many regions in South-East Asia are expected to receive less water in the future, dramatically impacting the production of the most important staple food: rice (Oryza sativa L.). Rice is the primary food source for nearly half of the world's population, and is the only cereal that can grow under wetland conditions. Anaerobic (flooded) rice fields in particular require high amounts of water but also have higher yields than aerobically produced rice. In the past, different methods were developed to reduce water use in rice paddies, such as alternate wetting and drying or the use of mixed cropping systems with aerobic (non-flooded) rice and alternative crops such as maize. A more detailed understanding of water and nutrient cycling in rice-based cropping systems is needed to reduce water use, and requires the investigation of hydrological and biochemical processes as well as transport dynamics at the field scale. New developments in analytical devices permit monitoring parameters at high temporal resolution and at acceptable cost, without much maintenance or analysis effort over longer periods. Here we present a new type of automatic sampling set-up that facilitates in situ analysis of hydrometric information, stable water isotopes and nitrate concentrations in spatially differentiated agricultural fields. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring-Down Spectrometry system (WS-CRDS) for stable water isotope analysis (δ2H and δ18O), a reagentless hyperspectral UV photometer for monitoring nitrate content, and various water level sensors for hydrometric information. The whole system is maintained with specially developed software for remote control via the internet. We

  17. Food quality assessment by NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Whitworth, Martin B.; Millar, Samuel J.; Chau, Astor

    2010-04-01

    Near infrared reflectance (NIR) spectroscopy is well established in the food industry for rapid compositional analysis of bulk samples. NIR hyperspectral imaging provides new opportunities to measure the spatial distribution of components such as moisture and fat, and to identify and measure specific regions of composite samples. An NIR hyperspectral imaging system has been constructed for food research applications, incorporating a SWIR camera with a cooled 14 bit HgCdTe detector and N25E spectrograph (Specim Ltd, Finland). Samples are scanned in a pushbroom mode using a motorised stage. The system has a spectral resolution of 256 pixels covering a range of 970-2500 nm and a spatial resolution of 320 pixels covering a swathe adjustable from 8 to 300 mm. Images are acquired at a rate of up to 100 lines s-1, enabling samples to be scanned within a few seconds. Data are captured using SpectralCube software (Specim) and analysed using ENVI and IDL (ITT Visual Information Solutions). Several food applications are presented. The strength of individual absorbance bands enables the distribution of particular components to be assessed. Examples are shown for detection of added gluten in wheat flour and to study the effect of processing conditions on fat distribution in chips/French fries. More detailed quantitative calibrations have been developed to study evolution of the moisture distribution in baguettes during storage at different humidities, to assess freshness of fish using measurements of whole cod and fillets, and for prediction of beef quality by identification and separate measurement of lean and fat regions.
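As an illustration of how such a hypercube is typically analysed (the band choice, array shapes, and reflectance values below are assumptions, not the paper's data), the distribution of a component can be mapped from the absorbance, log10(1/R), at its characteristic wavelength:

```python
# Illustrative sketch: pick the band nearest a target wavelength in a
# pushbroom hypercube cube[line][pixel][band] and map its absorbance,
# log10(1/R), across the image to visualise one component (e.g. moisture
# via the ~1940 nm water absorption band). Toy data, not from the paper.
import math

def nearest_band(wavelengths, target_nm):
    """Index of the band whose centre wavelength is closest to target_nm."""
    return min(range(len(wavelengths)),
               key=lambda i: abs(wavelengths[i] - target_nm))

def absorbance_map(cube, wavelengths, target_nm):
    """cube[line][pixel][band] holds reflectance R in (0, 1]."""
    b = nearest_band(wavelengths, target_nm)
    return [[math.log10(1.0 / px[b]) for px in line] for line in cube]

# Toy cube: 2 lines x 2 pixels x 4 bands (wavelengths in nm, invented).
wl = [970.0, 1450.0, 1940.0, 2500.0]
cube = [[[0.8, 0.6, 0.1, 0.7], [0.8, 0.7, 0.5, 0.7]],
        [[0.9, 0.6, 0.1, 0.7], [0.8, 0.7, 0.6, 0.7]]]
wet = absorbance_map(cube, wl, 1940.0)   # high values = strong absorber
```

Quantitative calibrations like those described in the abstract would then regress such band (or full-spectrum) absorbances against reference composition measurements.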

  18. Comprehensive assessment of automatic structural alignment against a manual standard, the scop classification of proteins.

    PubMed Central

    Gerstein, M.; Levitt, M.

    1998-01-01

    We apply a simple method for aligning protein sequences on the basis of a 3D structure, on a large scale, to the proteins in the scop classification of fold families. This allows us to assess, understand, and improve our automatic method against an objective, manually derived standard, a type of comprehensive evaluation that has not yet been possible for other structural alignment algorithms. Our basic approach directly matches the backbones of two structures, using repeated cycles of dynamic programming and least-squares fitting to determine an alignment minimizing coordinate difference. Because of its simplicity, our method can be readily modified to take into account additional features of protein structure such as the orientation of side chains or the location-dependent cost of opening a gap. Our basic method, augmented by such modifications, can find reasonable alignments for all but 1.5% of the known structural similarities in scop, i.e., all but 32 of the 2,107 superfamily pairs. We discuss the specific protein structural features that make these 32 pairs so difficult to align and show how our procedure effectively partitions the relationships in scop into different categories, depending on what aspects of protein structure are involved (e.g., depending on whether or not consideration of side-chain orientation is necessary for proper alignment). We also show how our pairwise alignment procedure can be extended to generate a multiple alignment for a group of related structures. We have compared these alignments in detail with corresponding manual ones culled from the literature. We find good agreement (to within 95% for the core regions), and detailed comparison highlights how particular protein structural features (such as certain strands) are problematical to align, giving somewhat ambiguous results. With these improvements and systematic tests, our procedure should be useful for the development of scop and the future classification of protein folds. PMID
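The dynamic-programming half of the alignment cycle can be sketched as below; the similarity function, gap penalty, and toy coordinates are illustrative assumptions, and the least-squares superposition step of the authors' method is omitted:

```python
# Schematic sketch (not the paper's implementation): align two C-alpha
# traces by Needleman-Wunsch dynamic programming on a distance-derived
# similarity score. In the full method this alternates with least-squares
# superposition until the alignment converges.
import math

def align(a, b, gap=-0.5):
    """a, b: lists of 3D coordinates (tuples). Returns aligned index pairs."""
    sim = [[1.0 / (1.0 + math.dist(p, q)) for q in b] for p in a]
    n, m = len(a), len(b)
    F = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i - 1][j - 1] + sim[i - 1][j - 1],
                          F[i - 1][j] + gap, F[i][j - 1] + gap)
    # traceback from the bottom-right corner
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if F[i][j] == F[i - 1][j - 1] + sim[i - 1][j - 1]:
            pairs.append((i - 1, j - 1)); i, j = i - 1, j - 1
        elif F[i][j] == F[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

# Toy traces: b is a copy of a with one extra residue inserted at the front,
# so the optimal alignment gaps b[0] and matches the rest residue-by-residue.
a = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
b = [(9, 9, 9), (0, 0, 0), (1, 0, 0), (2, 0, 0)]
pairs = align(a, b)
```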

  19. Comparative assessment of several automatic CPAP devices' responses: a bench test study

    PubMed Central

    Isetta, Valentina; Navajas, Daniel; Montserrat, Josep M.

    2015-01-01

    Automatic continuous positive airway pressure (APAP) devices adjust the delivered pressure based on the breathing patterns of the patient and, accordingly, they may be more suitable for patients who have a variety of pressure demands during sleep based on factors such as body posture, sleep stage or variability between nights. Devices from different manufacturers incorporate distinct algorithms and may therefore respond differently when subjected to the same disturbed breathing pattern. Our objective was to assess the response of several currently available APAP devices in a bench test. A computer-controlled model mimicking the breathing pattern of a patient with obstructive sleep apnoea (OSA) was connected to different APAP devices for 2-h tests during which flow and pressure readings were recorded. Devices tested were AirSense 10 (ResMed), Dreamstar (Sefam), Icon (Fisher & Paykel), Resmart (BMC), Somnobalance (Weinmann), System One (Respironics) and XT-Auto (Apex). Each device was tested twice. The response of each device was considerably different. Whereas some devices were able to normalise breathing, in some cases exceeding the required pressure, other devices did not eliminate disturbed breathing events (mainly prolonged flow limitation). Mean and maximum pressures ranged 7.3–14.6 cmH2O and 10.4–17.9 cmH2O, respectively, and the time to reach maximum pressure varied from 4.4 to 96.0 min. Each APAP device uses a proprietary algorithm and, therefore, the response to a bench simulation of OSA varied significantly. This must be taken into account for nasal pressure treatment of OSA patients and when comparing results from clinical trials. PMID:27730142

  20. Semi-automatic assessment of pediatric hydronephrosis severity in 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Cerrolaza, Juan J.; Otero, Hansel; Yao, Peter; Biggs, Elijah; Mansoor, Awais; Ardon, Roberto; Jago, James; Peters, Craig A.; Linguraru, Marius George

    2016-03-01

    Hydronephrosis is the most common abnormal finding in pediatric urology. Thanks to its non-ionizing nature, ultrasound (US) imaging is the preferred diagnostic modality for evaluation of the kidney and the urinary tract. However, due to the lack of correlation of US with renal function, further invasive and/or ionizing studies may be required (e.g., diuretic renograms). This paper presents a computer-aided diagnosis (CAD) tool for the accurate and objective assessment of pediatric hydronephrosis based on morphological analysis of the kidney from 3DUS scans. The integration of specific segmentation tools in the system allows delineation of the relevant renal structures from 3DUS scans of the patients with minimal user interaction, and the automatic computation of 90 anatomical features. Using the washout half time (T1/2) as an indicator of renal obstruction, an optimal subset of predictive features is selected to differentiate, with maximum sensitivity, those severe cases where further attention is required (e.g., in the form of diuretic renograms) from the non-critical ones. The performance of this new 3DUS-based CAD system is studied for two clinically relevant T1/2 thresholds, 20 and 30 min. Using a dataset of 20 hydronephrotic cases, pilot experiments show how the system outperforms previous 2D implementations by successfully identifying all the critical cases (100% sensitivity) and detecting 100% and 67% of the non-critical ones for T1/2 thresholds of 20 and 30 min, respectively.
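The maximum-sensitivity thresholding idea can be sketched as follows; the score values are invented, and this rule is a simplification of the paper's feature-based classifier:

```python
# Hedged sketch (toy data): pick the cutoff on a severity score that keeps
# sensitivity for critical cases at 100% while rejecting as many
# non-critical cases as possible (i.e. maximising specificity subject to
# full sensitivity). The scores below are invented for illustration.
def max_specificity_at_full_sensitivity(critical_scores, noncritical_scores):
    """Higher score = more severe. A threshold at the lowest critical score
    guarantees 100% sensitivity; specificity is whatever falls below it."""
    thr = min(critical_scores)
    rejected = sum(1 for s in noncritical_scores if s < thr)
    specificity = rejected / len(noncritical_scores)
    return thr, specificity

crit = [0.71, 0.84, 0.92]             # invented scores for critical cases
noncrit = [0.10, 0.35, 0.55, 0.80]    # invented scores for non-critical ones
thr, spec = max_specificity_at_full_sensitivity(crit, noncrit)
```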

  1. Automatic data-quality monitoring for continuous GPS tracking stations in Taiwan

    NASA Astrophysics Data System (ADS)

    Yeh, T. K.; Wang, C. S.; Chao, B. F.; Chen, C. S.; Lee, C. W.

    2007-10-01

    Taiwan has more than 300 Global Positioning System (GPS) tracking stations maintained by the Ministry of the Interior (MOI), Academia Sinica, the Central Weather Bureau and the Central Geological Survey. In the future, GPS tracking stations may replace the GPS control points after being given a legal status. Hence, the data quality of the tracking stations is an increasingly significant factor. This study considers the feasibility of establishing a system for monitoring GPS receivers. This investigation employs many data-quality indices and examines the relationship of these indices and the positioning precision. The frequency stability of the GPS receiver is the most important index; the cycle slip is the second index and the multipath is the third index. An auto-analytical system for analysing GPS data quality and monitoring the MOI's tracking stations can quickly find and resolve problems, or changes in station environment, to maintain high data quality for the tracking stations.
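A minimal sketch of combining the three indices into a single station score is shown below; the weights (stability heaviest, then cycle slips, then multipath) follow the ranking in the abstract, but the numeric weights and nominal scales are assumptions, not values from the study:

```python
# Hedged sketch: fold the three data-quality indices named above into one
# 0..1 station score. Weights and "acceptable" scales are invented; only
# their ordering (stability > cycle slips > multipath) comes from the text.
def gps_quality_score(freq_stability, cycle_slip_rate, multipath_rms,
                      weights=(0.5, 0.3, 0.2),
                      scales=(1e-11, 5.0, 0.5)):
    """Each index is normalised by a nominal acceptable scale, clipped to
    [0, 1] as a penalty, then combined; 1.0 = best, 0.0 = worst."""
    values = (freq_stability, cycle_slip_rate, multipath_rms)
    penalties = [min(v / s, 1.0) for v, s in zip(values, scales)]
    return 1.0 - sum(w * p for w, p in zip(weights, penalties))

# A stable receiver with few slips vs. a degraded one (toy numbers).
good = gps_quality_score(2e-12, 0.5, 0.1)
bad = gps_quality_score(5e-11, 20.0, 1.2)
```

An automated monitor could compute such a score daily per station and flag any station whose score drops below a chosen limit.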

  2. Microbial quality assessment of household greywater.

    PubMed

    O'Toole, Joanne; Sinclair, Martha; Malawaraarachchi, Manori; Hamilton, Andrew; Barker, S Fiona; Leder, Karin

    2012-09-01

    A monitoring program was undertaken to assess the microbial quality of greywater collected from 93 typical households in Melbourne, Australia. A total of 185 samples, comprising 75 washing machine wash, 74 washing machine rinse and 36 bathroom samples, were analysed for the faecal indicator Escherichia coli. Of these, 104 were also analysed for genetic markers of pathogenic E. coli and 111 for norovirus (genogroups GI and GII), enterovirus and rotavirus using RT-PCR. Enteric viruses were detected in 20 out of the 111 (18%) samples, comprising 16 washing machine wash water and 4 bathroom samples. Eight (7%) samples were positive for enterovirus, twelve (11%) for norovirus genogroup GI, one (1%) for norovirus genogroup GII and another (1%) for rotavirus. Two washing machine samples contained more than one virus. Typical pathogenic E. coli were detected in 3 out of 104 (3%) samples and atypical enteropathogenic E. coli in 11 (11%) of samples. Levels of indicator E. coli were highly variable and the presence of E. coli was not associated with the presence of human enteric viruses in greywater. There was also little correlation between reported gastrointestinal illness in households and detection of pathogens in greywater.

  3. a Multi-Sensor Micro Uav Based Automatic Rapid Mapping System for Damage Assessment in Disaster Areas

    NASA Astrophysics Data System (ADS)

    Jeon, E.; Choi, K.; Lee, I.; Kim, H.

    2013-08-01

    Damage assessment is an important step toward the restoration of areas severely affected by natural disasters or accidents. For more accurate and rapid assessment, one should utilize geospatial data such as ortho-images acquired from the damaged areas. Change detection based on geospatial data before and after the damage can enable fast and automatic assessment with reasonable accuracy. Accordingly, there has been significant demand for a rapid mapping system that can provide orthoimages of the damaged areas to the specialists and decision makers in disaster management agencies. In this study, we are developing a UAV-based rapid mapping system that can acquire multi-sensory data in the air and generate orthoimages from the data on the ground in a rapid and automatic way. The proposed system consists of two main segments, aerial and ground. The aerial segment acquires sensory data through autonomous flight over the specified target area. It consists of a micro UAV platform, a mirror-less camera, a GPS, a MEMS IMU, and a sensor integration and synchronization module. The ground segment receives and processes the multi-sensory data to produce orthoimages in a rapid and automatic way. It consists of a computer with appropriate software for flight planning, data reception, georeferencing, and orthoimage generation. In the middle of this ongoing project, we introduce its overview, describe the main components of each segment and provide intermediate results from preliminary test flights.

  4. Informatics: essential infrastructure for quality assessment and improvement in nursing.

    PubMed Central

    Henry, S B

    1995-01-01

    In recent decades there have been major advances in the creation and implementation of information technologies and in the development of measures of health care quality. The premise of this article is that informatics provides essential infrastructure for quality assessment and improvement in nursing. In this context, the term quality assessment and improvement comprises both short-term processes such as continuous quality improvement (CQI) and long-term outcomes management. This premise is supported by 1) presentation of a historical perspective on quality assessment and improvement; 2) delineation of the types of data required for quality assessment and improvement; and 3) description of the current and potential uses of information technology in the acquisition, storage, transformation, and presentation of quality data, information, and knowledge. PMID:7614118

  5. High-Quality Optical Light Curves of Supernovae with the Katzman Automatic Imaging Telescope

    NASA Astrophysics Data System (ADS)

    Modjaz, M.; Li, W. D.; Filippenko, A. V.; Treffers, R. R.

    2000-05-01

    We present some results from the Lick Observatory Supernova Search (LOSS), which is being conducted with the 0.75-m Katzman Automatic Imaging Telescope (KAIT) and achieved successful operation in mid-1998. Since the commissioning of KAIT in 1996, significant improvements have been made to the software and hardware (such as installation of an AP7 CCD camera). A list is given of the roughly 60 nearby (z < 0.1) supernovae (SNe) discovered with LOSS, together with their spectral types and other relevant properties. We present well-sampled multi-color light curves of a selection of SNe out of a pool of about 30 SNe monitored within the past 2 years. We emphasize the importance of light-curve analysis of Type Ia and Type II SNe, since the detailed study of nearby SNe is crucial to establishing SNe as cosmological distance indicators. We also discuss preliminary conclusions derived from the surprisingly high rate of peculiar Type Ia SNe and from comparisons of their light curves. Projects which attempt to use our data for statistical analysis and for studies of bulk flows are underway. KAIT and its associated science have been made possible with funding or donations from NSF, NASA, the Sylvia and Jim Katzman Foundation, Sun Microsystems Inc., the Hewlett-Packard Company, Photometrics Ltd., AutoScope Corporation, and the University of California.

  6. In Search of Quality Criteria in Peer Assessment Practices

    ERIC Educational Resources Information Center

    Ploegh, Karin; Tillema, Harm H.; Segers, Mien S. R.

    2009-01-01

    With the increasing popularity of peer assessment as an assessment tool, questions may arise about its measurement quality. Among such questions, the extent peer assessment practices adhere to standards of measurement. It has been claimed that new forms of assessment, require new criteria to judge their validity and reliability, since they aim for…

  7. Comparison of water-quality samples collected by siphon samplers and automatic samplers in Wisconsin

    USGS Publications Warehouse

    Graczyk, David J.; Robertson, Dale M.; Rose, William J.; Steur, Jeffrey J.

    2000-01-01

    In small streams, flow and water-quality concentrations often change quickly in response to meteorological events. Hydrologists, field technicians, or locally hired stream observers involved in water-data collection are often unable to reach streams quickly enough to observe or measure these rapid changes. Therefore, in hydrologic studies designed to describe changes in water quality, a combination of manual and automated sampling methods has commonly been used: manual methods when flow is relatively stable and automated methods when flow is rapidly changing. Automated sampling, which makes use of equipment programmed to collect samples in response to changes in stage and flow of a stream, has been shown to be an effective method of sampling to describe rapid changes in water quality (Graczyk and others, 1993). Because of the high cost of automated sampling, however, especially for studies examining a large number of sites, alternative methods have been considered for collecting samples during rapidly changing stream conditions. One such method employs the siphon sampler (fig. 1), also referred to as the "single-stage sampler." Siphon samplers are inexpensive to build (about $25-$50 per sampler), operate, and maintain, so they are cost-effective to use at a large number of sites. Their ability to collect samples representing the average quality of water passing through the entire cross section of a stream, however, has not been fully demonstrated for many types of stream sites.

  8. An Approach to Automatic Detection and Hazard Risk Assessment of Large Protruding Rocks in Densely Forested Hilly Region

    NASA Astrophysics Data System (ADS)

    Chhatkuli, S.; Kawamura, K.; Manno, K.; Satoh, T.; Tachibana, K.

    2016-06-01

    Rock-fall along highways or railways presents one of the major threats to transportation and human safety. So far, the only feasible way to detect protruding rocks located in densely forested hilly regions has been to physically visit the site and assess the situation. Highways and railways stretch for hundreds of kilometres; hence, this traditional approach of determining rock-fall risk zones is not practical for assessing safety throughout the network. In this research, we have utilized state-of-the-art airborne LiDAR technology and derived a workflow to automatically detect protruding rocks in densely forested hilly regions and analysed the level of hazard risk they pose. Moreover, we also performed a 3D dynamic simulation of rock-fall to envisage the event. We validated that our proposed technique could automatically detect most of the large protruding rocks in the densely forested hilly region. Automatic extraction of protruding rocks and proper risk zoning could be used to identify the most crucial places that need proper protection measures. Hence, the proposed technique would provide invaluable support for the management and planning of highway and railway safety, especially in forested hilly regions.

  9. Higher Education Quality Assessment in China: An Impact Study

    ERIC Educational Resources Information Center

    Liu, Shuiyun

    2015-01-01

    This research analyses an external higher education quality assessment scheme in China, namely, the Quality Assessment of Undergraduate Education (QAUE) scheme. Case studies were conducted in three Chinese universities with different statuses. Analysis shows that the evaluated institutions responded to the external requirements of the QAUE…

  10. Different Academics' Characteristics, Different Perceptions on Quality Assessment?

    ERIC Educational Resources Information Center

    Cardoso, Sonia; Rosa, Maria Joao; Santos, Cristina S.

    2013-01-01

    Purpose: The purpose of this paper is to explore Portuguese academics' perceptions on higher education quality assessment objectives and purposes, in general, and on the recently implemented system for higher education quality assessment and accreditation, in particular. It aims to discuss the differences of those perceptions dependent on some…

  11. Capturing the Magic: Assessing the Quality of Youth Mentoring Relationships

    ERIC Educational Resources Information Center

    Deutsch, Nancy L.; Spencer, Renee

    2009-01-01

    Mentoring programs pose some special challenges for quality assessment because they operate at two levels: that of the dyadic relationship and that of the program. Fully assessing the quality of youth mentoring relationships requires understanding the characteristics and processes of individual relationships, which are the point of service for…

  12. Academics' Perceptions on the Purposes of Quality Assessment

    ERIC Educational Resources Information Center

    Rosa, Maria J.; Sarrico, Claudia S.; Amaral, Alberto

    2012-01-01

    The accountability versus improvement debate is an old one. Although being traditionally considered dichotomous purposes of higher education quality assessment, some authors defend the need of balancing both in quality assessment systems. This article goes a step further and contends that not only they should be balanced but also that other…

  13. Development and Validation of Assessing Quality Teaching Rubrics

    ERIC Educational Resources Information Center

    Chen, Weiyun; Mason, Steve; Hammond-Bennett, Austin; Zlamout, Sandy

    2014-01-01

    Purpose: This study aimed at examining the psychometric properties of the Assessing Quality Teaching Rubric (AQTR) that was designed to assess in-service teachers' quality levels of teaching practices in daily lessons. Methods: 45 physical education lessons taught by nine physical education teachers to students in grades K-5 were videotaped. They…

  14. Educational Quality Assessment: Manual for Interpreting School Reports.

    ERIC Educational Resources Information Center

    Pennsylvania State Dept. of Education, Harrisburg. Bureau of Educational Quality Assessment.

    The results of the Pennsylvania Educational Quality Assessment program, Phase II, are interpreted. The first section of the manual presents a statement of each of the Ten Goals of Quality Education which served as the basis of the assessment. Also included are the key items on the questionnaires administered to 5th and 11th grade students. The…

  15. Quality Assessment of Internationalised Studies: Theory and Practice

    ERIC Educational Resources Information Center

    Juknyte-Petreikiene, Inga

    2013-01-01

    The article reviews forms of higher education internationalisation at an institutional level. The relevance of theoretical background of internationalised study quality assessment is highlighted and definitions of internationalised studies quality are presented. Existing methods of assessment of higher education internationalisation are criticised…

  16. Assessing the Quality of a Student-Generated Question Repository

    ERIC Educational Resources Information Center

    Bates, Simon P.; Galloway, Ross K.; Riise, Jonathan; Homer, Danny

    2014-01-01

    We present results from a study that categorizes and assesses the quality of questions and explanations authored by students in question repositories produced as part of the summative assessment in introductory physics courses over two academic sessions. Mapping question quality onto the levels in the cognitive domain of Bloom's taxonomy, we…

  17. Assessing quality of care for African Americans with hypertension.

    PubMed

    Peters, Rosalind M; Benkert, Ramona; Dinardo, Ellen; Templin, Thomas

    2007-01-01

    African Americans bear a disproportionate burden of hypertension. A causal-modeling design, using Donabedian's Quality Framework, tested hypothesized relationships among structure, process, and outcome variables to assess quality of care provided to this population. Structural assessment revealed that administrative and staff organization affected patients' trust in their provider and satisfaction with their care. Interpersonal process factors of racism, cultural mistrust, and trust in providers had a significant effect on satisfaction, and perceived racism had a negative effect on blood pressure (BP). Poorer quality in technical processes of care was associated with higher BP. Findings support the utility of Donabedian's framework for assessing quality of care in a disease-specific population.

  18. Quality Assurance--Best Practices for Assessing Online Programs

    ERIC Educational Resources Information Center

    Wang, Qi

    2006-01-01

    Educators have long sought to define quality in education. With the proliferation of distance education and online learning powered by the Internet, the tasks required to assess the quality of online programs become even more challenging. To assist educators and institutions in search of quality assurance methods to continuously improve their…

  19. Quality Assurance of Assessment and Moderation Discourses Involving Sessional Staff

    ERIC Educational Resources Information Center

    Grainger, Peter; Adie, Lenore; Weir, Katie

    2016-01-01

    Quality assurance is a major agenda in tertiary education. The casualisation of academic work, especially in teaching, is also a quality assurance issue. Casual or sessional staff members teach and assess more than 50% of all university courses in Australia, and yet the research in relation to the role sessional staff play in quality assurance of…

  20. Service Quality and Customer Satisfaction: An Assessment and Future Directions.

    ERIC Educational Resources Information Center

    Hernon, Peter; Nitecki, Danuta A.; Altman, Ellen

    1999-01-01

    Reviews the literature of library and information science to examine issues related to service quality and customer satisfaction in academic libraries. Discusses assessment, the application of a business model to higher education, a multiple constituency approach, decision areas regarding service quality, resistance to service quality, and future…

  1. Assessment of the Quality Management Models in Higher Education

    ERIC Educational Resources Information Center

    Basar, Gulsun; Altinay, Zehra; Dagli, Gokmen; Altinay, Fahriye

    2016-01-01

    This study involves the assessment of the quality management models in Higher Education by explaining the importance of quality in higher education and by examining the higher education quality assurance system practices in other countries. The qualitative study was carried out with the members of the Higher Education Planning, Evaluation,…

  2. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    PubMed

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

    The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. PMID:27313114
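The pipeline's own algorithm is not given in the abstract; as a hedged illustration, one common way to remove intraplate row/column systematic errors in HTS data is a B-score-style median polish:

```python
# Illustrative sketch (a standard B-score-style approach, not necessarily
# the paper's algorithm): iteratively subtract row and column medians from
# a plate so that positional drift is removed while genuine hits survive
# as large residuals.
import statistics

def median_polish(plate, iters=10):
    """Remove additive row/column effects; returns the residual matrix."""
    r = [row[:] for row in plate]
    nrow, ncol = len(r), len(r[0])
    for _ in range(iters):
        for i in range(nrow):                      # subtract row medians
            med = statistics.median(r[i])
            r[i] = [v - med for v in r[i]]
        for j in range(ncol):                      # subtract column medians
            med = statistics.median(r[i2][j] for i2 in range(nrow))
            for i2 in range(nrow):
                r[i2][j] -= med
    return r

# Toy 3x4 plate: constant background + row/column drift + one real hit (+9).
plate = [[10.0 + i + j for j in range(4)] for i in range(3)]
plate[1][2] += 9.0
resid = median_polish(plate)   # drift removed; the hit remains at [1][2]
```

Residuals are often further divided by the plate's median absolute deviation to yield B-scores comparable across plates.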

  3. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    PubMed

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

    The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. PMID:27313114
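
The interplate/intraplate correction described above can be illustrated with the classic B-score recipe (Tukey median polish to strip additive row and column effects from each plate, then MAD scaling of the residuals). This is a standard, generic stand-in, not a reproduction of the authors' pipeline:

```python
import numpy as np

def median_polish(plate, n_iter=10, tol=1e-6):
    """Tukey median polish: iteratively subtract row and column medians.

    `plate` is a 2-D array of raw well signals; the returned residual
    matrix has additive row/column systematic effects removed, leaving
    genuine hits standing out. Generic B-score preprocessing, used here
    only as an illustration of plate-level QC.
    """
    resid = np.asarray(plate, dtype=float).copy()
    for _ in range(n_iter):
        row_med = np.median(resid, axis=1, keepdims=True)
        resid -= row_med
        col_med = np.median(resid, axis=0, keepdims=True)
        resid -= col_med
        if abs(row_med).max() < tol and abs(col_med).max() < tol:
            break
    return resid

def b_score(plate):
    """Scale polished residuals by their median absolute deviation (MAD)."""
    resid = median_polish(plate)
    mad = np.median(np.abs(resid - np.median(resid)))
    return resid / (1.4826 * mad + 1e-12)
```

On a plate with additive row/column drift, the residuals sit near zero everywhere except at true hits, which survive the polish because medians are robust to a few outlying wells.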

  4. SU-D-BRD-07: Automatic Patient Data Audit and Plan Quality Check to Support ARIA and Eclipse

    SciTech Connect

    Li, X; Li, H; Wu, Y; Mutic, S; Yang, D

    2014-06-01

    Purpose: To ensure patient safety and treatment quality in RT departments that use Varian ARIA and Eclipse, we developed a computer software system and interface functions that allow previously developed electronic chart checking (EcCk) methodologies to support these Varian systems. Methods: ARIA and Eclipse store most patient information in an MSSQL database. We studied the contents of the hundreds of database tables and identified the data elements used for patient treatment management and treatment planning. Interface functions were developed in both C# and MATLAB to support data access from ARIA and Eclipse servers using SQL queries. These functions, together with additional data-processing functions, allowed the existing rules and logic from EcCk to support ARIA and Eclipse. Dose and structure information are important for plan quality checks; however, they are not stored in the MSSQL database but as files in Varian private formats, and cannot be processed by external programs. We therefore implemented a service program, which uses the DB Daemon and File Daemon services on the ARIA server to automatically and seamlessly retrieve dose and structure data as DICOM files. This service was designed to 1) continuously monitor data access requests from EcCk programs, 2) translate the requests for ARIA daemon services to obtain dose and structure DICOM files, and 3) monitor the process and return the obtained DICOM files to EcCk programs for plan quality checks. Results: EcCk, which previously supported only MOSAIQ TMS and Pinnacle TPS, can now support Varian ARIA and Eclipse. The new EcCk software has been tested and worked well in physics new-start plan checks and in IMRT plan integrity and plan quality checks. Conclusion: Methods and computer programs have been implemented to allow EcCk to support the Varian ARIA and Eclipse systems. This project was supported by a research grant from Varian Medical Systems.

  5. Comparison of High and Low Density Airborne LIDAR Data for Forest Road Quality Assessment

    NASA Astrophysics Data System (ADS)

    Kiss, K.; Malinen, J.; Tokola, T.

    2016-06-01

    Good-quality forest roads are important for forest management. Airborne laser scanning data can enable automated road-quality detection, thus avoiding field visits. Two datasets of different pulse density were used to assess road quality: high-density airborne laser scanning data from Kiihtelysvaara and low-density data from Tuusniemi, Finland. The field inventory focused mainly on surface wear condition, structural condition, flatness, roadside vegetation, and drying of the road. Observations were divided into poor, satisfactory, and good categories based on the current Finnish quality standards for forest roads. Digital Elevation Models were derived from the laser point cloud, and indices were calculated to determine road quality. The calculated indices assess topographic differences on the road surface and road sides. The topographic position index works well in flat terrain only, while the standardized elevation index described the road surface better when the differences are larger. Both indices require a resolution of at least 1 metre. High-density data are necessary for analysis of the road surface, and the indices relate mostly to surface wear and flatness. Classification of the high-density data was more accurate (31-92%) than of the low-density data (25-40%). However, ditch detection and classification can be carried out using the sparse dataset as well (with a success rate of 69%). The use of airborne laser scanning data can thus provide quality information on forest roads.
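
The topographic position index mentioned above has a standard DEM formulation: each cell's elevation minus the mean elevation of its neighborhood. A minimal sketch (the window size is illustrative; the study's exact window and thresholds are not given here):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def tpi(dem, radius=3):
    """Topographic Position Index for a gridded DEM.

    TPI = cell elevation minus the mean elevation of its square
    neighborhood (side 2*radius+1). Positive values mark local highs
    (e.g. a crowned road surface); negative values mark local lows
    (ruts, ditches). Generic formulation, not the paper's settings.
    """
    dem = np.asarray(dem, dtype=float)
    neighborhood_mean = uniform_filter(dem, size=2 * radius + 1, mode="nearest")
    return dem - neighborhood_mean
```

Thresholding the TPI map (and a similarly windowed standardized elevation index) then yields per-cell road-condition classes.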

  6. AN ASSESSMENT OF AUTOMATIC SEWER FLOW SAMPLERS (EPA/600/2-75/065)

    EPA Science Inventory

    A brief review of the characteristics of storm and combined sewer flows is given followed by a general discussion of the purposes for and requirements of a sampling program. The desirable characteristics of automatic sampling equipment are set forth and problem areas are outlined...

  7. Physical and Chemical Water-Quality Data from Automatic Profiling Systems, Boulder Basin, Lake Mead, Arizona and Nevada, Water Years 2001-04

    USGS Publications Warehouse

    Rowland, Ryan C.; Westenburg, Craig L.; Veley, Ronald J.; Nylund, Walter E.

    2006-01-01

    Water-quality profile data were collected in Las Vegas Bay and near Sentinel Island in Lake Mead, Arizona and Nevada, from October 2000 to September 2004. The majority of the profiles were completed with automatic variable-buoyancy systems equipped with multiparameter water-quality sondes. Profile data near Sentinel Island were collected in August 2004 with an automatic variable-depth-winch system also equipped with a multiparameter water-quality sonde. Physical and chemical water properties collected and recorded by the profiling systems, including depth, water temperature, specific conductance, pH, dissolved-oxygen concentration, and turbidity are listed in tables and selected water-quality profile data are shown in graphs.

  8. Research iris serial images quality assessment method based on HVS

    NASA Astrophysics Data System (ADS)

    Li, Zhi-hui; Zhang, Chang-hai; Ming, Xing; Zhao, Yong-hua

    2006-01-01

    Iris recognition is widely used in security and customs applications, and it offers greater reliability than other biometric modalities such as fingerprint and face recognition. Iris image quality is crucial to recognition performance, so reliable quality assessment is necessary for evaluating iris images; however, no uniform criterion for image quality assessment exists. Assessment methods are either subjective or objective; in practice, subjective evaluation is laborious and ineffective for iris recognition, so objective methods should be used. Exploiting the multi-scale and selectivity characteristics of the human visual system (HVS) model, this paper presents a new iris image quality assessment method: a region of interest (ROI) is located, wavelet-transform zero-crossings are used to find multi-scale edges, and a multi-scale fusion measure assesses iris image quality. In experiments, both objective and subjective evaluations were applied to iris images, and the results show the method is effective for iris image quality assessment.
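
The paper's wavelet zero-crossing fusion is not reproduced here, but the underlying idea (edge strength aggregated across scales as a focus/quality proxy) can be illustrated with a crude multi-scale gradient-energy score:

```python
import numpy as np

def multiscale_sharpness(img, scales=(1, 2, 4)):
    """Toy multi-scale sharpness score in the spirit of an HVS-motivated
    quality measure: average gradient energy of the image downsampled by
    each scale factor. Higher scores suggest a sharper, better-focused
    iris region. Illustrative proxy only; not the paper's wavelet
    zero-crossing fusion.
    """
    img = np.asarray(img, dtype=float)
    score = 0.0
    for s in scales:
        sub = img[::s, ::s]                 # crude downsampling
        gy, gx = np.gradient(sub)           # finite-difference gradients
        score += np.mean(gx ** 2 + gy ** 2) # gradient energy at this scale
    return score / len(scales)
```

A defocused or motion-blurred capture scores lower than a sharp one, so such a score can gate frames before they reach the recognizer.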

  9. Dosimetry and image quality assessment in a direct radiography system

    PubMed Central

    Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Paixão, Lucas; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2014-01-01

    Objective To evaluate the mean glandular dose with a solid state detector and the image quality in a direct radiography system, utilizing phantoms. Materials and Methods Irradiations were performed with automatic exposure control and polymethyl methacrylate slabs with different thicknesses to calculate glandular dose values. The image quality was evaluated by means of the structures visualized on the images of the phantoms. Results Considering the uncertainty of the measurements, the mean glandular dose results are in agreement with the values provided by the equipment and with internationally adopted reference levels. Results obtained from images of the phantoms were in agreement with the reference values. Conclusion The present study contributes to verify the equipment conformity as regards dose values and image quality. PMID:25741119
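
Mean glandular dose in mammography is conventionally estimated from the incident air kerma with tabulated conversion factors (the Dance formalism, MGD = K·g·c·s). Whether this study used exactly this formalism is not stated; the sketch below assumes it, and any real use must take g, c, s from the published tables:

```python
def mean_glandular_dose(incident_air_kerma_mGy, g, c, s):
    """Mean glandular dose via the Dance formalism: MGD = K * g * c * s.

    K is the incident air kerma at the breast surface (mGy); g converts
    kerma to glandular dose for a 50% glandular breast, c corrects for
    actual glandularity, and s for the X-ray spectrum. The factor values
    used in any example are placeholders, not tabulated data.
    """
    return incident_air_kerma_mGy * g * c * s
```

Comparing the computed MGD against diagnostic reference levels is how "dose conformity" checks like the one in this study are typically expressed.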

  10. Automatic bone age assessment for young children from newborn to 7-year-old using carpal bones.

    PubMed

    Zhang, Aifeng; Gertych, Arkadiusz; Liu, Brent J

    2007-01-01

    A computer-aided diagnosis (CAD) method based on features extracted from phalangeal regions of interest (ROI) in a digital hand atlas has previously been developed, and it can accurately assess the bone age of children aged 7 to 18. To assess the bone age of younger children, however, the inclusion of carpal bones is necessary. Due to various factors, including the uncertain number of bones present, the non-uniformity of soft tissue, and the low contrast between bony structures and soft tissue, automatic segmentation and identification of carpal bone boundaries is an extremely challenging task. Past research on carpal bone segmentation relied on dynamic thresholding; owing to the limitations of that segmentation algorithm, carpal bones had not been taken into consideration in the bone age assessment procedure. In this paper, we developed and implemented a knowledge-based method for fully automatic carpal bone segmentation and morphological feature analysis. Fuzzy classification was then used to assess bone age based on the selected features. This method has been successfully applied to all cases in which the carpal bones do not overlap. CAD results for about 205 cases from the digital hand atlas were evaluated against subject chronological age as well as the readings of two radiologists. It was found that the carpal ROI provides reliable information for determining the bone age of young children from newborn to 7 years old.
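
The fuzzy classification step can be illustrated with triangular membership functions over a single morphological feature. The feature name and stage breakpoints below are invented for the sketch; the actual CAD system fuses several carpal features:

```python
import numpy as np

def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, peaks at 1 at b, back to 0 at c."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a + 1e-12)
    right = (c - x) / (c - b + 1e-12)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def fuzzy_bone_age(carpal_area_ratio, stages):
    """Return the age stage with the highest membership for one feature
    (here a hypothetical carpal-to-hand area ratio). `stages` maps each
    stage name to its (a, b, c) breakpoints. Illustrative only.
    """
    return max(stages,
               key=lambda s: triangular_membership(carpal_area_ratio, *stages[s]))
```

In a full system the per-feature memberships would be combined (e.g. by a fuzzy AND) before picking the winning stage.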

  11. Space shuttle flying qualities and criteria assessment

    NASA Technical Reports Server (NTRS)

    Myers, T. T.; Johnston, D. E.; Mcruer, Duane T.

    1987-01-01

    Work accomplished under a series of study tasks for the Flying Qualities and Flight Control Systems Design Criteria Experiment (OFQ) of the Shuttle Orbiter Experiments Program (OEX) is summarized. The tasks involved review of the applicability of existing flying-quality and flight control system specifications and criteria to the Shuttle; identification of potentially crucial flying-quality deficiencies; dynamic modeling of the Shuttle Orbiter pilot/vehicle system in the terminal flight phases; devising a nonintrusive experimental program for extraction and identification of vehicle dynamics, pilot control strategy, and approach and landing performance metrics; and preparation of an OEX approach to produce a data archive and optimize use of the data to develop flying qualities for future space shuttle craft in general. Also covered are analytic modeling of the Orbiter's unconventional closed-loop dynamics in landing; modeling of pilot control strategies; verification of vehicle dynamics and pilot control strategy from flight data; a review of various existing or proposed aircraft flying-quality parameters and criteria in comparison with the unique dynamic characteristics and control aspects of the Shuttle in landing; and, finally, a summary of conclusions and recommendations for developing flying-quality criteria and design guides for future Shuttle craft.

  12. Teacher Quality and Quality Teaching: Examining the Relationship of a Teacher Assessment to Practice

    ERIC Educational Resources Information Center

    Hill, Heather C.; Umland, Kristin; Litke, Erica; Kapitula, Laura R.

    2012-01-01

    Multiple-choice assessments are frequently used for gauging teacher quality. However, research seldom examines whether results from such assessments generalize to practice. To illuminate this issue, we compare teacher performance on a mathematics assessment, during mathematics instruction, and by student performance on a state assessment. Poor…

  13. Key Elements for Judging the Quality of a Risk Assessment

    PubMed Central

    Fenner-Crisp, Penelope A.; Dellarco, Vicki L.

    2016-01-01

    Background: Many reports have been published that contain recommendations for improving the quality, transparency, and usefulness of decision making for risk assessments prepared by agencies of the U.S. federal government. A substantial measure of consensus has emerged regarding the characteristics that high-quality assessments should possess. Objective: The goal was to summarize the key characteristics of a high-quality assessment as identified in the consensus-building process and to integrate them into a guide for use by decision makers, risk assessors, peer reviewers and other interested stakeholders to determine if an assessment meets the criteria for high quality. Discussion: Most of the features cited in the guide are applicable to any type of assessment, whether it encompasses one, two, or all four phases of the risk-assessment paradigm; whether it is qualitative or quantitative; and whether it is screening level or highly sophisticated and complex. Other features are tailored to specific elements of an assessment. Just as agencies at all levels of government are responsible for determining the effectiveness of their programs, so too should they determine the effectiveness of their assessments used in support of their regulatory decisions. Furthermore, if a nongovernmental entity wishes to have its assessments considered in the governmental regulatory decision-making process, then these assessments should be judged in the same rigorous manner and be held to similar standards. Conclusions: The key characteristics of a high-quality assessment can be summarized and integrated into a guide for judging whether an assessment possesses the desired features of high quality, transparency, and usefulness. Citation: Fenner-Crisp PA, Dellarco VL. 2016. Key elements for judging the quality of a risk assessment. Environ Health Perspect 124:1127–1135; http://dx.doi.org/10.1289/ehp.1510483 PMID:26862984

  14. Assessment of Severe Apnoea through Voice Analysis, Automatic Speech, and Speaker Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Fernández Pozo, Rubén; Blanco Murillo, Jose Luis; Hernández Gómez, Luis; López Gonzalo, Eduardo; Alcázar Ramírez, José; Toledano, Doroteo T.

    2009-12-01

    This study is part of an ongoing collaborative effort between the medical and the signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment. Effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and nonnasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We have achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry.
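
The GMM pattern-recognition setup the abstract describes (one model per class, decide by likelihood) can be sketched with scikit-learn. The features and component count here are placeholders, not the study's acoustic front end:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmm_classifier(features_by_class, n_components=2, seed=0):
    """Fit one GMM per class (e.g. 'apnoea' vs 'control') on acoustic
    feature vectors of shape (n_samples, n_features). Classification
    picks the class whose GMM assigns the higher log-likelihood.
    Generic GMM-classifier recipe; feature extraction (e.g. spectra of
    nasal/non-nasal vowel contexts) is assumed to happen upstream.
    """
    return {
        label: GaussianMixture(n_components=n_components, random_state=seed).fit(X)
        for label, X in features_by_class.items()
    }

def classify(models, x):
    """Return the label whose fitted GMM best explains sample x."""
    x = np.atleast_2d(x)
    return max(models, key=lambda label: models[label].score(x))
```

With well-separated class distributions the likelihood decision is clear-cut; the study's 81% rate reflects how much real apnoea and control voices overlap.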

  15. Automatic milking systems in the Protected Designation of Origin Montasio cheese production chain: effects on milk and cheese quality.

    PubMed

    Innocente, N; Biasutti, M

    2013-02-01

    Montasio cheese is a typical Italian semi-hard, semi-cooked cheese produced in northeastern Italy from unpasteurized (raw or thermised) cow milk. The Protected Designation of Origin label regulations for Montasio cheese require that local milk be used from twice-daily milking. The number of farms milking with automatic milking systems (AMS) has increased rapidly in the last few years in the Montasio production area. The objective of this study was to evaluate the effects of a variation in milking frequency, associated with the adoption of an automatic milking system, on milk quality and on the specific characteristics of Montasio cheese. Fourteen farms were chosen, all located in the Montasio production area, with an average herd size of 60 (Simmental, Holstein-Friesian, and Brown Swiss breeds). In 7 experimental farms, the cows were milked 3 times per day with an AMS, whereas in the other 7 control farms, cows were milked twice daily in conventional milking parlors (CMP). The study showed that the main components, the hygienic quality, and the cheese-making features of milk were not affected by the milking system adopted. In fact, the control and experimental milks did not reveal a statistically significant difference in fat, protein, and lactose contents; in the casein index; or in the HPLC profiles of casein and whey protein fractions. Milk from farms that used an AMS always showed somatic cell counts and total bacterial counts below the legal limits imposed by European Union regulations for raw milk. Finally, bulk milk clotting characteristics (clotting time, curd firmness, and time to curd firmness of 20mm) did not differ between milk from AMS and milk from CMP. Montasio cheese was made from milk collected from the 2 groups of farms milking either with AMS or with CMP. Three different cheese-making trials were performed during the year at different times. As expected, considering the results of the milk analysis, the moisture, fat, and protein contents of the…

  17. Evolving from Quantity to Quality: A New Yardstick for Assessment

    ERIC Educational Resources Information Center

    Fulcher, Keston H.; Orem, Chris D.

    2010-01-01

    Higher education experts tout learning outcomes assessment as a vehicle for program improvement. To this end the authors share a rubric designed explicitly to evaluate the quality of assessment and how it leads to program improvement. The rubric contains six general assessment areas, which are further broken down into 14 elements. Embedded within…

  18. Factors Influencing Assessment Quality in Higher Vocational Education

    ERIC Educational Resources Information Center

    Baartman, Liesbeth; Gulikers, Judith; Dijkstra, Asha

    2013-01-01

    The development of assessments that are fit to assess professional competence in higher vocational education requires a reconsideration of assessment methods, quality criteria and (self)evaluation. This article examines the self-evaluations of nine courses of a large higher vocational education institute. Per course, 4-11 teachers and 3-10…

  19. Doctors or technicians: assessing quality of medical education

    PubMed Central

    Hasan, Tayyab

    2010-01-01

    Medical education institutions usually adapt industrial quality management models that measure the quality of the process of a program but not the quality of the product. The purpose of this paper is to analyze the impact of industrial quality management models on medical education and students, and to highlight the importance of introducing a proper educational quality management model. Industrial quality management models can measure the training component in terms of competencies, but they lack the educational component measurement. These models use performance indicators to assess their process improvement efforts. Researchers suggest that the performance indicators used in educational institutions may only measure their fiscal efficiency without measuring the quality of the educational experience of the students. In most of the institutions, where industrial models are used for quality assurance, students are considered as customers and are provided with the maximum services and facilities possible. Institutions are required to fulfill a list of recommendations from the quality control agencies in order to enhance student satisfaction and to guarantee standard services. Quality of medical education should be assessed by measuring the impact of the educational program and quality improvement procedures in terms of knowledge base development, behavioral change, and patient care. Industrial quality models may focus on academic support services and processes, but educational quality models should be introduced in parallel to focus on educational standards and products. PMID:23745059

  20. Dried fruits quality assessment by hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Serranti, Silvia; Gargiulo, Aldo; Bonifazi, Giuseppe

    2012-05-01

    Dried fruit products have different market values according to their quality. Such quality is usually quantified in terms of the freshness of the products, as well as the presence of contaminants (pieces of shell, husk, and small stones), defects, mould, and decay. The combination of these parameters, in terms of their relative presence, represents a fundamental set of attributes conditioning the characteristics of dried fruits detectable by the human senses (visual appearance, organoleptic properties, etc.) and their overall quality as marketable products. Sorting and selection strategies exist, but they sometimes fail when a higher degree of detection is required, especially when discriminating between dried fruits of relatively small dimensions or when aiming at "early detection" of the pathogen agents responsible for future mould and decay development. Surface characteristics of dried fruits can be investigated by hyperspectral imaging (HSI). In this paper, specific ad hoc applications proposing quality-detection logics based on an HSI approach are described, compared, and critically evaluated. Reflectance spectra of selected dried fruits (hazelnuts) of different quality, characterized by the presence of different contaminants and defects, were acquired by a laboratory device equipped with two HSI systems working in two different spectral ranges: the visible-near-infrared field (400-1000 nm) and the near-infrared field (1000-1700 nm). The spectra were processed and the results evaluated adopting both a simple and fast wavelength band-ratio approach and a more sophisticated classification logic based on principal component analysis (PCA).
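
The "simple and fast wavelength band ratio" step can be sketched as follows for a hyperspectral cube; the two wavelengths here are placeholders, not the bands the paper selected:

```python
import numpy as np

def band_ratio_map(cube, wavelengths, band_a, band_b):
    """Band-ratio index for a hyperspectral cube (rows x cols x bands):
    per-pixel ratio of reflectance at two wavelengths, a fast first-pass
    discriminator of surface condition before a heavier PCA-based
    classifier. Wavelength choices are assumptions for illustration.
    """
    wavelengths = np.asarray(wavelengths)
    ia = int(np.argmin(np.abs(wavelengths - band_a)))   # nearest-band lookup
    ib = int(np.argmin(np.abs(wavelengths - band_b)))
    cube = np.asarray(cube, dtype=float)
    return cube[..., ia] / (cube[..., ib] + 1e-12)      # avoid divide-by-zero
```

Thresholding the resulting 2-D map flags pixels whose spectral shape departs from sound product (e.g. shell fragments or incipient mould), pixel by pixel and in a single pass.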

  1. Transition Assessment: Wise Practices for Quality Lives.

    ERIC Educational Resources Information Center

    Sax, Caren L.; Thoma, Colleen A.

    The 10 papers in this book attempt to provide some creative approaches to assessment of individuals with disabilities as they transition from the school experience to the adult world. The papers are: (1) "For Whom the Test Is Scored: Assessments, the School Experience, and More" (Douglas Fisher and Caren L. Sax); (2) "Person-Centered Planning:…

  2. Exploring the Notion of Quality in Quality Higher Education Assessment in a Collaborative Future

    ERIC Educational Resources Information Center

    Maguire, Kate; Gibbs, Paul

    2013-01-01

    The purpose of this article is to contribute to the debate on the notion of quality in higher education with particular focus on "objectifying through articulation" the assessment of quality by professional experts. The article gives an overview of the differentiations of quality as used in higher education. It explores a substantial piece of…

  3. New Hampshire Quality Rating System: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of New Hampshire's Quality Rating System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4)…

  4. Iowa Child Care Quality Rating System: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Iowa's Child Care Quality Rating System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile is divided into the following categories: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family Child Care Programs;…

  5. Illinois Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Illinois' Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family…

  6. Indiana Paths to Quality: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Indiana's Paths to Quality prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…

  7. Maine Quality for ME: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Maine's Quality for ME prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family…

  8. Mississippi Quality Step System: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Mississippi's Quality Step System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Application…

  9. Palm Beach Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Palm Beach's Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…

  10. Missouri Quality Rating System: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Missouri's Quality Rating System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…

  11. Miami-Dade Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Miami-Dade's Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…

  12. Ohio Step Up to Quality: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Ohio's Step Up to Quality prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family…

  13. Oregon Child Care Quality Indicators Program: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Oregon's Child Care Quality Indicators Program prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4)…

  14. Virginia Star Quality Initiative: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Virginia's Star Quality Initiative prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators…

  15. Air quality risk assessment and management.

    PubMed

    Chen, Yue; Craig, Lorraine; Krewski, Daniel

    2008-01-01

    This article provides (1) a synthesis of the literature on the linkages between air pollution and human health, (2) an overview of air quality management approaches in Canada, the United States, and the European Union (EU), and (3) future directions for air quality research. Numerous studies examining short-term effects of air pollution show significant associations between ambient levels of particulate matter (PM) and other air pollutants and increases in premature mortality and hospitalizations for cardiovascular and respiratory illnesses. Several well-designed epidemiological studies confirmed the adverse long-term effects of PM on both mortality and morbidity. Epidemiological studies also document significant associations between ozone (O3), sulfur dioxide (SO2), and nitrogen oxides (NO(x)) and adverse health outcomes; however, the effects of gaseous pollutants are less well documented. Subpopulations that are more susceptible to air pollution include children, the elderly, those with cardiorespiratory disease, and socioeconomically deprived individuals. Canada-wide standards for ambient air concentrations of PM2.5 and O3 were set in 2000, providing air quality targets to be achieved by 2010. In the United States, the Clean Air Act provides the framework for the establishment and review of National Ambient Air Quality Standards for criteria air pollutants and the establishment of emissions standards for hazardous air pollutants. The European Union's 1996 enactment of the Framework Directive for Air Quality established the process for setting Europe-wide limit values for a series of pollutants. The Clean Air for Europe program was established by the European Union to review existing limit values, emission ceilings, and abatement protocols, as set out in the current legislation. These initiatives serve as the legislative framework for air quality management in North America and Europe.

  16. Maximising Confidence in Assessment Decision-Making: A Springboard to Quality in Assessment.

    ERIC Educational Resources Information Center

    Clayton, Berwyn; Booth, Robin; Roy, Sue

    The introduction of training packages has focused attention on the quality of assessment in the Australian vocational education and training (VET) sector. For the process of mutual recognition under the Australian Recognition Framework (ARF) to work effectively, there needs to be confidence in assessment decisions made…

  17. Image quality assessment by preprocessing and full reference model combination

    NASA Astrophysics Data System (ADS)

    Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.

    2009-01-01

    This paper focuses on full-reference image quality assessment and presents different computational strategies aimed at improving the robustness and accuracy of some well-known and widely used state-of-the-art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model could allow a better fit to the psycho-visual data of the LIVE Image Quality Assessment Database Release 2. We show that the proposed quality assessment metric correlates better with the experimental data.
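    The SSIM index mentioned above has a standard closed form. As a rough illustration, here is a minimal single-window (global) SSIM in Python; note that Wang and Bovik's actual formulation averages local windowed statistics, and the helper name and constants below follow the common convention rather than this paper's implementation:

    ```python
    import numpy as np

    def global_ssim(x, y, data_range=255.0):
        """Single-window SSIM between two grayscale images of equal shape."""
        c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM literature
        c2 = (0.03 * data_range) ** 2
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return ((2 * mx * my + c1) * (2 * cov + c2)) / \
               ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

    img = np.tile(np.arange(64, dtype=float), (64, 1))  # simple gradient test image
    print(global_ssim(img, img))  # identical images -> exactly 1.0
    ```

    A distorted copy of the image scores below 1.0, which is what full-reference metrics exploit.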

  18. Online aptitude automatic surface quality inspection system for hot rolled strips steel

    NASA Astrophysics Data System (ADS)

    Lin, Jin; Xie, Zhi-jiang; Wang, Xue; Sun, Nan-Nan

    2005-12-01

    Defects on the surface of hot rolled steel strips are a main factor in evaluating strip quality. An improved image recognition algorithm is used to extract features of surface defects, and a defect recognition method based on machine vision and artificial neural networks is established to identify them. On this basis, a surface inspection system with advanced image processing algorithms for hot rolled strips was developed. Two different lighting arrangements are prepared, and line-scan CCD cameras acquire images of the strip surface on line. The system diagnoses and grades the strip surface on line, analyzing and classifying defects such as iron oxide scale, damage, and stamp marks on both the top and bottom surfaces of the strip. The rates of missed and false classifications do not exceed 5%. The system has been applied at several large domestic steel enterprises. Experiments proved that this approach is feasible and effective.

  19. An automatic system for the assessment of complex medium additives under cultivation conditions.

    PubMed

    Iding, K; Büntemeyer, H; Gudermann, F; Deutschmann, S M; Kionka, C; Lehmann, J

    2001-06-20

    Complex medium additives such as yeast extract or peptone are often used in industrial cell culture processes to prolong cell growth and/or to improve product formation. The quality of those supplements is dependent on the preparation method and can differ from lot to lot. To guarantee consistent production these different lots have to be tested prior to use in fermentation processes. Because a detailed qualitative and quantitative analysis of all components of such a complex mixture is a very difficult task, another assessment method has to be chosen. The best way to evaluate the effect of such supplements is to monitor cell activity during real cultivation conditions with and without the added supplement lot. A bioreactor-based test system has been developed to determine the oxygen requirement of the cells as a response to the addition of a supplement to be tested under standardized conditions. Investigations were performed with a mouse-mouse hybridoma cell line and yeast extracts as an example for complex medium additives. The results showed differences in the impact between different extract lots and between different concentrations of an extract.

  20. Quality assessment of medical research and education.

    PubMed

    Eriksson, H

    1992-01-01

    Different aspects of the process of evaluating research and education are discussed, using the discipline of medicine as a model. The focus is primarily on potential problems in the design of an evaluation. The most important aspects of an assessment are: to create confidence in the evaluation among scientists and/or teachers who are being assessed before beginning; to find experts for whom the scientists and/or teachers have professional respect; to choose assessment methods in relation to the focus, level, and objectives of the evaluation; and to make the report of the evaluation's findings short and explicit.

  1. ASSESSING WATER QUALITY: AN ENERGETICS PERSPECTIVE

    EPA Science Inventory

    Integrated measures of food web dynamics could serve as important supplemental indicators of water quality that are well related with ecological integrity and environmental well-being. When the concern is a well-characterized pollutant (posing an established risk to human health...

  2. Time-averaging water quality assessment

    SciTech Connect

    Reddy, L.S.; Ormsbee, L.E.; Wood, D.J.

    1995-07-01

    While reauthorization of the Safe Drinking Water Act is pending, many water utilities are preparing to monitor and regulate levels of distribution system constituents that affect water quality. Most frequently, utilities are concerned about average concentrations rather than about tracing a particular constituent's path. Mathematical and computer models, which provide a quick estimate of average concentrations, could play an important role in this effort. Most water quality models deal primarily with isolated events, such as tracing a particular constituent through a distribution system. This article proposes a simple, time-averaging model that obtains average, maximum, and minimum constituent concentrations and ages throughout the network. It also computes percentage flow contribution and percentage constituent concentration. The model is illustrated using two water distribution systems, and results are compared with those obtained using a dynamic water quality model. Both models predict average water quality parameters with no significant deviations; the time-averaging approach is a simple and efficient alternative to the dynamic model.
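    The core idea of the time-averaging model (report average, maximum, and minimum constituent values per network node instead of a full dynamic trace) can be sketched in a few lines; the node name and chlorine values below are hypothetical, not from the article:

    ```python
    def summarize_concentrations(series):
        """Collapse a node's concentration time series (mg/L) into the
        time-averaged statistics a time-averaging model reports."""
        return {
            "average": sum(series) / len(series),
            "maximum": max(series),
            "minimum": min(series),
        }

    # Hypothetical chlorine residuals sampled hourly at one network node
    node_12 = [0.8, 0.75, 0.6, 0.9, 0.85]
    stats = summarize_concentrations(node_12)
    print(stats)  # average ~0.78, maximum 0.9, minimum 0.6
    ```

    A real implementation would weight samples by hydraulic time steps and flow volumes, but the reported quantities are exactly these summary statistics.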

  3. SAMPLING DESIGN FOR ASSESSING RECREATIONAL WATER QUALITY

    EPA Science Inventory

    Current U.S. EPA guidelines for monitoring recreatoinal water quality refer to the geometric mean density of indicator organisms, enterococci and E. coli in marine and fresh water, respectively, from at least five samples collected over a four-week period. In order to expand thi...

  4. A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment

    NASA Technical Reports Server (NTRS)

    Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott

    1995-01-01

    The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaptation of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Imbedded grid singular points are permitted, but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of and alignment with the solution. A mesh quality assessment criterion is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criterion is used to evaluate the grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criterion shows promise as a means of evaluating resolution.

  5. E-Services quality assessment framework for collaborative networks

    NASA Astrophysics Data System (ADS)

    Stegaru, Georgiana; Danila, Cristian; Sacala, Ioan Stefan; Moisescu, Mihnea; Mihai Stanescu, Aurelian

    2015-08-01

    In a globalised networked economy, collaborative networks (CNs) are formed to take advantage of new business opportunities. Collaboration involves shared resources and capabilities, such as e-Services that can be dynamically composed to automate CN participants' business processes. Quality is essential for the success of business process automation. Current approaches mostly focus on quality of service (QoS)-based service selection and ranking algorithms, overlooking the process of service composition, which requires interoperable, adaptable and secure e-Services to ensure seamless collaboration, data confidentiality and integrity. Lack of assessment of these quality attributes can result in e-Service composition failure. The quality of e-Service composition relies on the quality of each e-Service and on the quality of the composition process. Therefore, there is a need for a framework that addresses quality from both views: product and process. We propose a quality of e-Service composition (QoESC) framework for quality assessment of e-Service composition for CNs, which comprises a quality model for e-Service evaluation and guidelines for the quality of the e-Service composition process. We implemented a prototype considering a simplified telemedicine use case which involves a CN in the e-Healthcare domain. To validate the proposed quality-driven framework, we analysed service composition reliability with and without using the proposed framework.

  6. A preliminary study into performing routine tube output and automatic exposure control quality assurance using radiology information system data.

    PubMed

    Charnock, P; Jones, R; Fazakerley, J; Wilde, R; Dunn, A F

    2011-09-01

    Data are currently being collected from hospital radiology information systems in the North West of the UK for the purposes of both clinical audit and patient dose audit. Could these data also be used to satisfy quality assurance (QA) requirements according to UK guidance? From 2008 to 2009, 731 653 records were submitted from 8 hospitals in North West England. For automatic exposure control (AEC) QA, the protocol from Institute of Physics and Engineering in Medicine (IPEM) report 91 recommends that milliamperes per second can be monitored for repeatability and reproducibility using a suitable phantom, at 70-81 kV. Abdomen AP and chest PA examinations were analysed to find the most common kilovoltage used; these records were then used to plot average monthly milliamperes per second over time. IPEM report 91 also recommends that a range of commonly used clinical settings is used to check output reproducibility and repeatability. For each tube, the dose area product values were plotted over time for the two most common exposure factor sets. Results show that it is possible to perform performance checks of AEC systems; however, more work is required to be able to monitor tube output performance. Procedurally, the management system requires work, and the benefits to the workflow would need to be demonstrated.
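    The repeatability monitoring described above amounts to tracking a monthly mean mAs per tube and flagging months that drift beyond a tolerance from a baseline. A hedged sketch of that bookkeeping follows; the 10% tolerance and the readings are illustrative assumptions, not values from the study or from IPEM report 91:

    ```python
    from statistics import mean

    def flag_drift(monthly_mas, baseline, tolerance=0.10):
        """Return 1-based month indices whose mean mAs deviates from the
        baseline by more than the fractional tolerance (assumed 10% here)."""
        return [i + 1 for i, month in enumerate(monthly_mas)
                if abs(mean(month) - baseline) / baseline > tolerance]

    # Hypothetical monthly mAs readings for one AEC chamber at a fixed kV
    months = [[20.1, 19.8, 20.3], [20.0, 20.2], [23.5, 23.9, 24.1]]
    print(flag_drift(months, baseline=20.0))  # only the third month drifts high
    ```

    On RIS-derived data the same check runs per tube and per exposure-factor set, which is why the authors grouped records by the most common kilovoltage first.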

  7. [Research on Assessment Methods of Spectrum Data Quality of Core Scan].

    PubMed

    Xiu, Lian-cun; Zheng, Zhi-zhong; Yin, Liang; Chen, Chun-xia; Yu, Zheng-kui; Huang, Bin; Zhang, Qiu-ning; Xiu, Xiao-xu; Gao, Yang

    2015-08-01

    The core scanner is an instrument, developed in recent years, for detecting core spectra and images. With this equipment cores can be digitized and automatic core cataloguing achieved, providing a basis for geological research, mineral deposit study and peripheral deposit prospecting. Meanwhile, an online core database can be established through core digitization, solving the cost problem of core preservation and enabling resource sharing. The quality of core measurement data directly affects mineral identification and the reliability of parameter inversion results; it is therefore very important to assess data quality rigorously before the instrument is used for core spectrum testing services. In connection with the independently developed CSD350A core scanner, and on the basis of basic spectroscopy theory, spectrum analysis methods and the requirements of core spectrum analysis, key issues such as data quality assessment methods, evaluation criteria and target parameters are discussed in depth, and a comprehensive assessment of the independently developed core scanner is conducted, indicating the reliability and validity of the instrument's spectrum measurements. Experimental tests show that the proposed methods, including test parameters and items, can fully characterize the instrument's reflectance spectrum measurements, wavelength accuracy, repeatability and signal-to-noise ratio. Thus the quality of core scan data can be evaluated correctly, providing a data-quality foundation for commercial services. Given the current lack of relevant assessment criteria, the assessment method proposed in this study is of great value for core spectrum measurement work.
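    One of the checks listed above, signal-to-noise ratio, reduces to a simple statistic over repeated scans of a stable target: per wavelength band, mean reflectance divided by the standard deviation across repeats. This is a generic sketch of that idea, not the CSD350A's actual procedure; the scan values are invented:

    ```python
    from statistics import mean, pstdev

    def spectral_snr(repeats):
        """Per-band SNR from repeated reflectance scans of a stable target:
        mean reflectance over the std dev across repeats, for each band."""
        bands = zip(*repeats)  # transpose: one tuple of repeat values per band
        return [mean(b) / pstdev(b) for b in bands]

    # Three hypothetical repeat scans, four wavelength bands each
    scans = [[0.50, 0.62, 0.70, 0.55],
             [0.51, 0.60, 0.71, 0.56],
             [0.49, 0.61, 0.69, 0.54]]
    snr = spectral_snr(scans)
    print(min(snr) > 50)  # all bands are well above the noise floor here
    ```

    Wavelength accuracy and repeatability would be assessed analogously against a reference target with known absorption features.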

  8. Assessment of Groundwater Quality by Chemometrics.

    PubMed

    Papaioannou, Agelos; Rigas, George; Kella, Sotiria; Lokkas, Filotheos; Dinouli, Dimitra; Papakonstantinou, Argiris; Spiliotis, Xenofon; Plageras, Panagiotis

    2016-07-01

    Chemometric methods were used to analyze large data sets of groundwater quality from 18 wells supplying the central drinking water system of Larissa city (Greece) during the period 2001 to 2007 (8,064 observations) to determine temporal and spatial variations in groundwater quality and to identify pollution sources. Cluster analysis grouped each year into three temporal periods: January-April (first), May-August (second) and September-December (third). Furthermore, spatial cluster analysis was conducted for each period and for all samples, grouping the 28 monitoring units HJI (where HJI represents the observations of monitoring site H, year J and period I) into three groups (A, B and C). Discriminant analysis used only 16 of the 24 parameters to correctly assign 97.3% of the cases. In addition, factor analysis identified 7, 9 and 8 latent factors for groups A, B and C, respectively.
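    The cluster-analysis step described above (grouping observations into temporal periods by water-quality similarity) can be illustrated with a toy one-dimensional k-means; the monthly values below are invented for illustration, not the study's data, and the study itself used hierarchical cluster analysis on many parameters:

    ```python
    def kmeans_1d(values, centers, iters=20):
        """Tiny 1-D k-means: assign each value to its nearest center,
        then recompute centers, for a fixed number of rounds."""
        for _ in range(iters):
            groups = [[] for _ in centers]
            for v in values:
                idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
                groups[idx].append(v)
            centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
        return centers, groups

    # Hypothetical monthly nitrate means (mg/L) showing three seasonal regimes
    monthly = [1.1, 1.2, 1.0, 4.8, 5.1, 5.0, 9.9, 10.2, 10.0, 1.3, 4.9, 10.1]
    centers, groups = kmeans_1d(monthly, centers=[1.0, 5.0, 10.0])
    print(sorted(len(g) for g in groups))  # three equal seasonal clusters
    ```

    The study's subsequent discriminant and factor analyses then operate within each recovered group.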

  9. [Microbial indicators and fresh water quality assessment].

    PubMed

    Briancesco, Rossella

    2005-01-01

    Traditionally, the microbiological quality of waters has been measured by the analysis of indicator microorganisms. The article reviews the sanitary significance of traditional indicators of faecal contamination (total coliforms, faecal coliforms and faecal streptococci) and points out their limits. For some characteristics, Escherichia coli may be considered a more useful indicator than faecal coliforms, and it has been included in all recent legislation regarding fresh, marine and drinking water. A clearer taxonomic definition of faecal streptococci highlighted the difficulty of defining a specific standard enumeration methodology and suggested that enterococci are more suitable indicator microorganisms. Several current laws require the detection of enterococci. The resistance of Clostridium perfringens spores may mean that they would serve as a useful indicator of the sanitary quality of sea sediments.

  10. Method and apparatus for assessing weld quality

    DOEpatents

    Smartt, Herschel B.; Kenney, Kevin L.; Johnson, John A.; Carlson, Nancy M.; Clark, Denis E.; Taylor, Paul L.; Reutzel, Edward W.

    2001-01-01

    Apparatus for determining a quality of a weld produced by a welding device according to the present invention includes a sensor operatively associated with the welding device. The sensor is responsive to at least one welding process parameter during a welding process and produces a welding process parameter signal that relates to the at least one welding process parameter. A computer connected to the sensor is responsive to the welding process parameter signal produced by the sensor. A user interface operatively associated with the computer allows a user to select a desired welding process. The computer processes the welding process parameter signal produced by the sensor in accordance with one of a constant voltage algorithm, a short duration weld algorithm or a pulsed current analysis module depending on the desired welding process selected by the user. The computer produces output data indicative of the quality of the weld.
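    The patent's dispatch logic (the computer selects a constant voltage, short duration, or pulsed current analysis according to the welding process the user chooses) can be sketched as a simple lookup table. The process names, nominal values, and the check itself below are placeholders for illustration, not the patented algorithms:

    ```python
    from statistics import mean, pstdev

    def constant_voltage_check(voltages, nominal, tolerance):
        """Toy check: a weld passes if mean voltage stays near nominal
        and the ripple (population std dev) stays within tolerance."""
        return abs(mean(voltages) - nominal) <= tolerance and pstdev(voltages) <= tolerance

    ANALYSES = {
        "constant_voltage": constant_voltage_check,
        # "short_duration" and "pulsed_current" analyses would be registered here
    }

    def assess_weld(process, samples, nominal=24.0, tolerance=1.5):
        try:
            return ANALYSES[process](samples, nominal, tolerance)
        except KeyError:
            raise ValueError(f"no analysis registered for process {process!r}")

    print(assess_weld("constant_voltage", [23.8, 24.1, 24.0, 23.9]))  # True
    ```

    Registering analyses in a dictionary mirrors the user-interface selection the patent describes: adding a new welding process touches only the table, not the dispatch code.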

  12. ASSESSING BIOACCUMULATION FOR DERIVING NATIONAL HUMAN HEALTH WATER QUALITY CRITERIA

    EPA Science Inventory

    The United States Environmental Protection Agency is revising its methodology for deriving national ambient water quality criteria (AWQC) to protect human health. A component of this guidance involves assessing the potential for chemical bioaccumulation in commonly consumed fish ...

  13. Data sources for environmental assessment: determining availability, quality and utility

    EPA Science Inventory

    Objectives: An environmental quality index (EQI) for all counties in the United States is being developed to explore the relationship between environmental insults and human health. The EQI will be particularly useful to assess how environmental disamenities contribute to health...

  14. US Department of Energy Quality Assessment Program data evaluation report

    SciTech Connect

    Jaquish, R.E.; Kinnison, R.R.

    1984-04-01

    The results of radiochemical analysis performed on the Quality Assessment Program (QAP) test samples are presented. This report reviews the results submitted by 26 participating laboratories for 49 different radionuclide-media combinations. 5 tables. (ACR)

  15. Assessing Quality across Health Care Subsystems in Mexico

    PubMed Central

    Puig, Andrea; Pagán, José A.; Wong, Rebeca

    2012-01-01

    Recent healthcare reform efforts in Mexico have focused on the need to improve the efficiency and equity of a fragmented healthcare system. In light of these reform initiatives, there is a need to assess whether healthcare subsystems are effective at providing high-quality healthcare to all Mexicans. Nationally representative household survey data from the 2006 Encuesta Nacional de Salud y Nutrición (National Health and Nutrition Survey) were used to assess perceived healthcare quality across different subsystems. Using a sample of 7234 survey respondents, we found evidence of substantial heterogeneity in healthcare quality assessments across healthcare subsystems favoring private providers over social security institutions. These differences across subsystems remained even after adjusting for socioeconomic, demographic, and health factors. Our analysis suggests that improvements in efficiency and equity can be achieved by assessing the factors that contribute to heterogeneity in quality across subsystems. PMID:19305224

  16. 42 CFR 460.140 - Additional quality assessment activities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.140 Additional...

  17. 42 CFR 460.140 - Additional quality assessment activities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.140 Additional...

  18. 42 CFR 460.140 - Additional quality assessment activities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.140 Additional...

  19. 42 CFR 460.140 - Additional quality assessment activities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.140 Additional...

  20. Assessing Truck Ride Quality for Design

    SciTech Connect

    Field, R.V., Jr.

    1998-09-30

    This report summarizes a three-year project to characterize and improve the ride quality of the Department of Energy (DOE) tractor/trailer. A high-fidelity computer model was used to simulate the vibrational response in the passenger compartment of the truck due to a common roadway environment. It is the intensity of this response that is indicative of the ride quality of the vehicle. The computational model was then validated with experimental tests using a novel technique employing both lab-based modal tests and modal data derived using the Natural Excitation Technique (NExT). The validated model proved invaluable as a design tool. Utilizing the model in a predictive manner, modifications to improve ride quality were made to both the existing vehicle and the next-generation design concept. As a result, the next-generation fleet of tractors (procurement process begins in FY98) will incorporate elements of a successful model-based design for improved truck ride.

  1. Using big data for quality assessment in oncology.

    PubMed

    Broughman, James R; Chen, Ronald C

    2016-05-01

    There is increasing attention in the US healthcare system on the delivery of high-quality care, an issue central to oncology. In the report 'Crossing the Quality Chasm', the Institute of Medicine identified six aims for improving healthcare quality: safe, effective, patient-centered, timely, efficient and equitable. This article describes how current big data resources can be used to assess these six dimensions, and provides examples of published studies in oncology. Strengths and limitations of current big data resources for the evaluation of quality of care are also discussed. Finally, this article outlines a vision where big data can be used not only to retrospectively assess the quality of oncologic care, but also to help physicians deliver high-quality care in real time.

  2. Computer aided diagnosis system for retinal analysis: automatic assessment of the vascular tortuosity.

    PubMed

    Sánchez, L; Barreira, N; Penedo, M G; Coll De Tuero, G

    2014-01-01

    The tortuosity of a vessel, that is, how many times it curves and how sharp those turns are, is an important value for the diagnosis of certain diseases. Clinicians analyze fundus images manually in order to estimate it, but there are many drawbacks, as this is tedious, time-consuming and subjective work. Thus, automatic image processing methods become a necessity, as they make possible the efficient computation of objective parameters. In this paper we discuss Sirius (System for the Integration of Retinal Images Understanding Service), a web-based application that enables the storage and treatment of various types of diagnostic tests and, more specifically, its tortuosity calculation module.
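    A common objective tortuosity measure for a traced vessel centerline is the arc-to-chord ratio: total path length divided by the straight-line distance between endpoints, which equals 1.0 for a perfectly straight vessel. This is one of several metrics such systems may compute, so the sketch below is generic rather than Sirius's actual module:

    ```python
    from math import dist

    def arc_chord_tortuosity(points):
        """Tortuosity of a polyline of (x, y) centerline points:
        arc length divided by the endpoint-to-endpoint chord length."""
        arc = sum(dist(a, b) for a, b in zip(points, points[1:]))
        chord = dist(points[0], points[-1])
        return arc / chord

    straight = [(0, 0), (1, 0), (2, 0)]
    curved = [(0, 0), (1, 1), (2, 0)]
    print(arc_chord_tortuosity(straight))  # 1.0
    print(arc_chord_tortuosity(curved))    # sqrt(2), about 1.414
    ```

    More elaborate indices additionally count inflection points or integrate curvature, precisely to capture "how sharp the turns are" rather than just path elongation.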

  3. Technical assessment for quality control of resins

    NASA Technical Reports Server (NTRS)

    Gosnell, R. B.

    1977-01-01

    Survey visits to companies involved in the manufacture and use of graphite-epoxy prepregs were conducted to assess the factors which may contribute to variability in the mechanical properties of graphite-epoxy composites. In particular, the purpose was to assess the contributions of the epoxy resins to variability. Companies represented three segments of the composites industry - aircraft manufacturers, prepreg manufacturers, and epoxy resin manufacturers. Several important sources of performance variability were identified from among the complete spectrum of potential sources which ranged from raw materials to composite test data interpretation.

  4. Guidance on Data Quality Assessment for Life Cycle Inventory Data

    EPA Science Inventory

    Data quality within Life Cycle Assessment (LCA) is a significant issue for the future support and development of LCA as a decision support tool and its wider adoption within industry. In response to current data quality standards such as the ISO 14000 series, various entities wit...

  5. Assessing Educational Processes Using Total-Quality-Management Measurement Tools.

    ERIC Educational Resources Information Center

    Macchia, Peter, Jr.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) assessment tools in educational settings highlights and gives examples of fishbone diagrams, or cause and effect charts; Pareto diagrams; control charts; histograms and check sheets; scatter diagrams; and flowcharts. Variation and quality are discussed in terms of continuous process…

  6. Goals of Peer Assessment and Their Associated Quality Concepts

    ERIC Educational Resources Information Center

    Gielen, Sarah; Dochy, Filip; Onghena, Patrick; Struyven, Katrien; Smeets, Stijn

    2011-01-01

    The output of peer assessment in higher education has been investigated increasingly in recent decades. However, this output is evaluated against a variety of quality criteria, resulting in a cluttered picture. This article analyses the different conceptualisations of quality that appear in the literature. Discussions about the most appropriate…

  7. Quality of Religious Education in Croatia Assessed from Teachers' Perspective

    ERIC Educational Resources Information Center

    Baric, Denis; Burušic, Josip

    2015-01-01

    The main aim of the present study was to examine the quality of religious education in Croatian primary schools when assessed from teachers' perspective. Religious education teachers (N = 226) rated the impact of certain factors on the existing quality of religious education in primary schools and expressed their expectations about the future…

  8. Measuring the Impact of Student Assessment on Institutional Quality.

    ERIC Educational Resources Information Center

    Losak, John

    Assessment programs, which have recently been implemented in colleges around the country, have indirectly affected the quality of education in ways that are both researchable and measurable. Admissions and placement testing affect educational quality by separating high and low achievers, and making it possible for high-level texts to be used and…

  9. River Pollution: Part II. Biological Methods for Assessing Water Quality.

    ERIC Educational Resources Information Center

    Openshaw, Peter

    1984-01-01

    Discusses methods used in the biological assessment of river quality and such indicators of clean and polluted waters as the Trent Biotic Index, Chandler Score System, and species diversity indexes. Includes a summary of a river classification scheme based on quality criteria related to water use. (JN)

  10. Poster — Thur Eve — 70: Automatic lung bronchial and vessel bifurcations detection algorithm for deformable image registration assessment

    SciTech Connect

    Labine, Alexandre; Carrier, Jean-François; Bedwani, Stéphane; Chav, Ramnada; De Guise, Jacques

    2014-08-15

    Purpose: To investigate an automatic bronchial and vessel bifurcations detection algorithm for deformable image registration (DIR) assessment to improve lung cancer radiation treatment. Methods: 4DCT datasets were acquired and exported to the Varian treatment planning system (TPS) EclipseTM for contouring. The lung TPS contour was used as the prior shape for a segmentation algorithm based on hierarchical surface deformation that identifies the deformed lung volumes of the 10 breathing phases. A Hounsfield unit (HU) threshold filter was applied within the segmented lung volumes to identify blood vessels and airways. Segmented blood vessels and airways were skeletonised using a hierarchical curve-skeleton algorithm based on a generalized potential field approach. A graph representation of the computed skeleton was generated to assign one of three labels to each node: termination node, continuation node or branching node. Results: 320 ± 51 bifurcations were detected in the right lung of a patient across the 10 breathing phases. The bifurcations were visually analyzed: 92 ± 10 were found in the upper half of the lung and 228 ± 45 in the lower half. Discrepancies between the ten vessel trees were mainly ascribed to large deformations and to regions where the HU varies. Conclusions: We established an automatic method for DIR assessment using the morphological information of the patient anatomy. This approach allows a description of the lung's internal structure movement, which is needed to validate DIR deformation fields for accurate 4D cancer treatment planning.
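    The node-labeling step on the skeleton graph follows directly from node degree: one neighbour marks a termination, two a continuation, three or more a branching (bifurcation). A sketch on a toy adjacency list (the node IDs are hypothetical):

    ```python
    def label_skeleton_nodes(adjacency):
        """Label each skeleton-graph node by its degree:
        1 neighbour -> termination, 2 -> continuation, >=3 -> branching."""
        labels = {}
        for node, neighbours in adjacency.items():
            degree = len(neighbours)
            if degree == 1:
                labels[node] = "termination"
            elif degree == 2:
                labels[node] = "continuation"
            else:
                labels[node] = "branching"
        return labels

    # Toy airway skeleton: a trunk that splits into two branch tips
    graph = {
        "a": ["b"],            # tip
        "b": ["a", "c"],       # continuation along the trunk
        "c": ["b", "d", "e"],  # the bifurcation
        "d": ["c"],            # tip
        "e": ["c"],            # tip
    }
    print(label_skeleton_nodes(graph)["c"])  # branching
    ```

    Counting "branching" nodes across breathing phases gives per-phase bifurcation tallies like those reported above, which then serve as anatomical landmarks for checking the DIR deformation field.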

  11. Automatic Blocked Roads Assessment after Earthquake Using High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Rastiveis, H.; Hosseini-Zirdoo, E.; Eslamizade, F.

    2015-12-01

In 2010, an earthquake struck the city of Port-au-Prince, Haiti, killing over 300,000 people. According to historical data, such an earthquake had not previously occurred in the area. The unpredictability of earthquakes necessitates comprehensive mitigation efforts to minimize deaths and injuries. Blocked roads, caused by debris from destroyed buildings, may increase the difficulty of rescue activities. In this case, a damage map, which specifies blocked and unblocked roads, can be extremely helpful for a rescue team. In this paper, a novel method for producing a destruction map based on a pre-event vector map and high-resolution WorldView-2 satellite images acquired after the earthquake is presented. For this purpose, firstly, in a pre-processing step, image quality improvement and co-registration of the image and map are performed. Then, after extraction of texture descriptors from the post-event image and SVM classification, different terrain types are detected in the image. Finally, considering the classification results, specifically objects belonging to the "debris" class, damage analysis is performed to estimate the damage percentage. In this case, in addition to the area of objects in the "debris" class, their shape is also considered. This process is performed on all the roads in the road layer. In this research, a pre-event digital vector map and a post-event high-resolution satellite image, acquired by WorldView-2, of the city of Port-au-Prince, Haiti's capital, were used to evaluate the proposed method. The algorithm was executed on a 1200×800 m² area of the data set, including 60 roads, and all the roads were labelled correctly. Visual examination confirmed the suitability of this method for damage assessment of urban road networks after an earthquake.
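As an illustration of the classification step, the sketch below extracts a crude texture descriptor (patch mean and standard deviation) and uses a nearest-centroid classifier as a stand-in for the paper's SVM; all data, class names, and feature choices here are synthetic, not the authors' pipeline:

```python
import numpy as np

def patch_features(patch):
    # Toy texture descriptor: intact pavement is smooth (low variance),
    # rubble is rough (high variance).
    return np.array([patch.mean(), patch.std()])

def train_centroids(patches, labels):
    feats = {}
    for p, lab in zip(patches, labels):
        feats.setdefault(lab, []).append(patch_features(p))
    return {lab: np.mean(f, axis=0) for lab, f in feats.items()}

def classify(patch, centroids):
    f = patch_features(patch)
    return min(centroids, key=lambda lab: np.linalg.norm(f - centroids[lab]))

rng = np.random.default_rng(0)
road = [rng.normal(100, 2, (8, 8)) for _ in range(10)]     # smooth patches
debris = [rng.normal(100, 25, (8, 8)) for _ in range(10)]  # rough patches
centroids = train_centroids(road + debris, ["road"] * 10 + ["debris"] * 10)
```

A per-road damage percentage could then be estimated from the fraction of "debris" patches falling on each road polygon.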

  12. Assumptions Commonly Underlying Government Quality Assessment Practices

    ERIC Educational Resources Information Center

    Schmidtlein, Frank A.

    2004-01-01

The current interest in governmental assessment and accountability practices appears to result from: (1) an emerging view of higher education as an "industry"; (2) concerns about efficient resource allocation; (3) a lack of trust between government and institutional officials; (4) a desire to reduce uncertainty in government/higher education…

  13. Developing Quality Physical Education through Student Assessments

    ERIC Educational Resources Information Center

    Fisette, Jennifer L.; Placek, Judith H.; Avery, Marybell; Dyson, Ben; Fox, Connie; Franck, Marian; Graber, Kim; Rink, Judith; Zhu, Weimo

    2009-01-01

    The National Association of Sport and Physical Education (NASPE) is committed to providing teachers with the support and guiding principles for implementing valid assessments. Its goal is for physical educators to utilize PE Metrics to measure student learning based on the national standards. The first PE Metrics text provides teachers with…

  14. Quality assessment of plant transpiration water

    NASA Technical Reports Server (NTRS)

    Macler, Bruce A.; Janik, Daniel S.; Benson, Brian L.

    1990-01-01

    It has been proposed to use plants as elements of biologically-based life support systems for long-term space missions. Three roles have been brought forth for plants in this application: recycling of water, regeneration of air and production of food. This report discusses recycling of water and presents data from investigations of plant transpiration water quality. Aqueous nutrient solution was applied to several plant species and transpired water collected. The findings indicated that this water typically contained 0.3-6 ppm of total organic carbon, which meets hygiene water standards for NASA's space applications. It suggests that this method could be developed to achieve potable water standards.

  15. Cloud-Based Smart Health Monitoring System for Automatic Cardiovascular and Fall Risk Assessment in Hypertensive Patients.

    PubMed

    Melillo, P; Orrico, A; Scala, P; Crispino, F; Pecchia, L

    2015-10-01

The aim of this paper is to describe the design and the preliminary validation of a platform developed to collect and automatically analyze biomedical signals for risk assessment of vascular events and falls in hypertensive patients. This m-health platform, based on cloud computing, was designed to be flexible, extensible, and transparent, and to provide proactive remote monitoring via data-mining functionalities. A retrospective study was conducted to train and test the platform. The developed system was able to predict a future vascular event within the next 12 months with an accuracy rate of 84% and to identify fallers with an accuracy rate of 72%. In an ongoing prospective trial, almost all the recruited patients accepted the system favorably, with a limited rate of non-adherence causing data losses (<20%). The developed platform supported clinical decisions by processing tele-monitored data and providing quick and accurate risk assessment of vascular events and falls. PMID:26276015

  16. Validation of balance-quality assessment using a modified bathroom scale.

    PubMed

    Hewson, D J; Duchêne, J; Hogrel, J-Y

    2015-02-01

The balance quality tester (BQT), based on a standard electronic bathroom scale, has been developed to assess balance quality. The BQT includes automatic detection of the person to be tested by means of an infrared detector, and Bluetooth communication capability for remote assessment when linked to a long-distance communication device such as a mobile phone. The BQT was compared to a standard force plate for validity and agreement. The two most widely reported parameters in the balance literature, the area of the centre of pressure (COP) displacement and the velocity of the COP displacement, were compared for 12 subjects, each of whom was tested on ten occasions on each of two days. No significant differences were observed between the BQT and the force plate for either of the two parameters. In addition, a high level of agreement was observed between the two devices. The BQT is a valid device for remote assessment of balance quality, and could provide a useful tool for long-term monitoring of people with balance problems, particularly during home monitoring.
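The two COP parameters compared in the study can be computed from raw COP samples roughly as follows. This is a sketch under assumed definitions (mean sway velocity as path length over duration, and a 95% confidence ellipse area from the sample covariance); the BQT's actual processing is not described at this level of detail in the abstract:

```python
import numpy as np

def cop_parameters(x, y, fs):
    """x, y: COP coordinates (m); fs: sampling rate (Hz)."""
    # Mean sway velocity: total path length divided by recording duration.
    path = np.sum(np.hypot(np.diff(x), np.diff(y)))
    duration = (len(x) - 1) / fs
    velocity = path / duration                           # m/s
    # 95% confidence ellipse area from the 2x2 sample covariance;
    # 5.991 is the chi-square 95% quantile with 2 degrees of freedom.
    cov = np.cov(x, y)
    area = np.pi * 5.991 * np.sqrt(np.linalg.det(cov))   # m^2
    return velocity, area

# Synthetic quiet-standing sway: a few millimetres at sub-hertz frequencies.
t = np.linspace(0, 10, 1000)
x = 0.005 * np.sin(2 * np.pi * 0.5 * t)
y = 0.003 * np.cos(2 * np.pi * 0.3 * t)
v, a = cop_parameters(x, y, fs=100)
```

Agreement between devices would then be assessed by computing both parameters from simultaneous BQT and force-plate recordings.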

  17. Floristic Quality Assessment Across the Nation: Status, Opportunities, and Challenges

    EPA Science Inventory

    Floristic Quality Assessment (FQA) will be considered in the USEPA National Wetland Condition Assessment (NWCA). FQA is a powerful tool to describe wetland ecological condition, and is based on Coefficients of Conservatism (CC) of individual native plant species. CCs rank sensiti...

  18. A new assessment method for image fusion quality

    NASA Astrophysics Data System (ADS)

    Li, Liu; Jiang, Wanying; Li, Jing; Yuchi, Ming; Ding, Mingyue; Zhang, Xuming

    2013-03-01

Image fusion quality assessment plays a critically important role in the field of medical imaging. Many assessment methods have been proposed to evaluate image fusion quality; examples include mutual information (MI), root mean square error (RMSE), and the universal image quality index (UIQI). However, these methods do not reflect human visual inspection effectively. To address this problem, we propose a novel image fusion assessment method which combines the nonsubsampled contourlet transform (NSCT) with regional mutual information. In the proposed method, the source medical images are first decomposed into different levels by the NSCT. Then the maximum NSCT coefficients of the decomposed directional images at each level are used to compute the regional mutual information (RMI). Finally, multi-channel RMI is computed as the weighted sum of the RMI values obtained at the various levels of the NSCT. The advantage of the proposed method lies in the fact that the NSCT can represent image information using multiple directions and scales, so it conforms to the multi-channel characteristic of the human visual system, leading to outstanding image assessment performance. Experimental results using CT and MRI images demonstrate that the proposed assessment method outperforms MI- and UIQI-based measures in evaluating image fusion quality and can provide results consistent with human visual assessment.
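The building block of the proposed metric is mutual information computed over image regions. A minimal histogram-based sketch of that building block follows; the NSCT decomposition and per-level weighting are omitted, and the bin count is an arbitrary choice:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (bits) between two image regions."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
noise = rng.normal(0, 50, (64, 64))
mi_self = mutual_information(img, img)          # identical regions: maximal MI
mi_noisy = mutual_information(img, img + noise) # degraded copy: lower MI
```

The multi-channel score would then be a weighted sum of such values computed at each NSCT level.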

  19. School Indoor Air Quality Assessment and Program Implementation.

    ERIC Educational Resources Information Center

    Prill, R.; Blake, D.; Hales, D.

    This paper describes the effectiveness of a three-step indoor air quality (IAQ) program implemented by 156 schools in the states of Washington and Idaho during the 2000-2001 school year. An experienced IAQ/building science specialist conducted walk-through assessments at each school. These assessments documented deficiencies and served as an…

  20. No-reference visual quality assessment for image inpainting

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Frantc, V. A.; Marchuk, V. I.; Sherstobitov, A. I.; Egiazarian, K.

    2015-03-01

Inpainting has received a lot of attention in recent years, and quality assessment is an important task in evaluating different image reconstruction approaches. In many cases inpainting methods introduce blur at sharp transitions and image contours when recovering large areas with missing pixels, and often fail to recover curved boundary edges. Quantitative metrics for inpainting results currently do not exist, and researchers use human comparisons to evaluate their methodologies and techniques. Most objective quality assessment methods rely on a reference image, which is often not available in inpainting applications. Researchers therefore usually resort to subjective quality assessment by human observers, which is a difficult and time-consuming procedure. This paper focuses on a machine learning approach to no-reference visual quality assessment for image inpainting based on properties of the human visual system. Our method is based on the observation that local binary patterns describe the local structural information of an image well. We use a support vector regression trained on human-assessed images to predict the perceived quality of inpainted images. We demonstrate how our predicted quality value correlates with qualitative opinion in a human observer study. Results are shown on a human-scored dataset for different inpainting methods.
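The feature side of this kind of approach can be sketched with a basic 8-neighbour local binary pattern (LBP) histogram; a support vector regressor trained on human scores would then consume these histograms. The regressor is omitted here, and the implementation details are illustrative rather than the authors' code:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour LBP code histogram, normalised to sum to 1."""
    c = img[1:-1, 1:-1]                       # centre pixels
    codes = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        # Neighbour plane shifted by (dy, dx), same shape as the centre.
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        # Set this bit wherever the neighbour is >= the centre pixel.
        codes |= ((neighbour >= c).astype(np.uint8) << bit)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
h = lbp_histogram(rng.normal(size=(32, 32)))
```

Each inpainted image yields one such 256-bin feature vector, and the regression maps it to a predicted quality score.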

  1. Evolution of quality and surgical risk assessment in the USA.

    PubMed

    Depalma, Ralph G

    2011-04-01

    As health-care reforms progress, quality and risk assessment in the health-care system of the USA surface as critical issues. This review considers past, present and possible future changes in quality assessment along with formal programs of complication reduction and pay for performance (PFP) as related to surgery and vascular interventions. Strategies for quality improvement include aggregate and risk-adjusted outcome measurement, process compliance with the Surgical Complication Improvement Program, oversight and PFP, now policies of the Centers for Medicare and Medicaid Services (CMS). Advantages, disadvantages and unintended consequences of these policies are discussed. While ongoing system changes will influence vascular surgical practice, unique opportunities and obligations exist for vascular surgeons to contribute to quality assessment of their interventions, to evaluate long-term outcomes and to devise strategies for comprehensive cost-effective care for the conditions affecting patients with vascular disease. PMID:21489931

  2. Quality Assessment of TPB-Based Questionnaires: A Systematic Review

    PubMed Central

    Oluka, Obiageli Crystal; Nie, Shaofa; Sun, Yi

    2014-01-01

    Objective This review is aimed at assessing the quality of questionnaires and their development process based on the theory of planned behavior (TPB) change model. Methods A systematic literature search for studies with the primary aim of TPB-based questionnaire development was conducted in relevant databases between 2002 and 2012 using selected search terms. Ten of 1,034 screened abstracts met the inclusion criteria and were assessed for methodological quality using two different appraisal tools: one for the overall methodological quality of each study and the other developed for the appraisal of the questionnaire content and development process. Both appraisal tools consisted of items regarding the likelihood of bias in each study and were eventually combined to give the overall quality score for each included study. Results 8 of the 10 included studies showed low risk of bias in the overall quality assessment of each study, while 9 of the studies were of high quality based on the quality appraisal of questionnaire content and development process. Conclusion Quality appraisal of the questionnaires in the 10 reviewed studies was successfully conducted, highlighting the top problem areas (including: sample size estimation; inclusion of direct and indirect measures; and inclusion of questions on demographics) in the development of TPB-based questionnaires and the need for researchers to provide a more detailed account of their development process. PMID:24722323

  3. The Automatic Assessment of Free Text Answers Using a Modified BLEU Algorithm

    ERIC Educational Resources Information Center

    Noorbehbahani, F.; Kardan, A. A.

    2011-01-01

    e-Learning plays an undoubtedly important role in today's education and assessment is one of the most essential parts of any instruction-based learning process. Assessment is a common way to evaluate a student's knowledge regarding the concepts related to learning objectives. In this paper, a new method for assessing the free text answers of…

  4. Southwest Principal Aquifers Regional Ground-Water Quality Assessment

    USGS Publications Warehouse

    Anning, D.W.; Thiros, S.A.; Bexfield, L.M.; McKinney, T.S.; Green, J.M.

    2009-01-01

    The National Water-Quality Assessment (NAWQA) Program of the U.S. Geological Survey is conducting a regional analysis of water quality in the principal aquifers in the southwestern United States. The Southwest Principal Aquifers (SWPA) study is building a better understanding of the susceptibility and vulnerability of basin-fill aquifers in the region to ground-water contamination by synthesizing the baseline knowledge of ground-water quality conditions in 15 basins previously studied by the NAWQA Program. The improved understanding of aquifer susceptibility and vulnerability to contamination is assisting in the development of tools that water managers can use to assess and protect the quality of ground-water resources. This fact sheet provides an overview of the basin-fill aquifers in the southwestern United States and description of the completed and planned regional analyses of ground-water quality being performed by the SWPA study.

  5. Assessing quality management in an R and D environment

    SciTech Connect

    Thompson, B.D.

    1998-02-01

    Los Alamos National Laboratory (LANL) is a premier research and development institution operated by the University of California for the US Department of Energy. Since 1991, LANL has pursued a heightened commitment to developing world-class quality in management and operations. In 1994 LANL adopted the Malcolm Baldrige National Quality Award criteria as a framework for all activities and initiated more formalized customer focus and quality management. Five measurement systems drive the current integration of quality efforts: an annual Baldrige-based assessment, a customer focus program, customer-driven performance measurement, an employee performance management system and annual employee surveys, and integrated planning processes with associated goals and measures.

  6. Development of a 3D optical scanning-based automatic quality assurance system for proton range compensators

    SciTech Connect

    Kim, MinKyu; Ju, Sang Gyu E-mail: doho.choi@samsung.com; Chung, Kwangzoo; Hong, Chae-Seon; Kim, Jinsung; Ahn, Sung Hwan; Jung, Sang Hoon; Han, Youngyih; Chung, Yoonsun; Cho, Sungkoo; Choi, Doo Ho E-mail: doho.choi@samsung.com; Kim, Jungkuk; Shin, Dongho

    2015-02-15

    Purpose: A new automatic quality assurance (AutoRCQA) system using a three-dimensional scanner (3DS) with system automation was developed to improve the accuracy and efficiency of the quality assurance (QA) procedure for proton range compensators (RCs). The system performance was evaluated for clinical implementation. Methods: The AutoRCQA system consists of a three-dimensional measurement system (3DMS) based on 3DS and in-house developed verification software (3DVS). To verify the geometrical accuracy, the planned RC data (PRC), calculated with the treatment planning system (TPS), were reconstructed and coregistered with the measured RC data (MRC) based on the beam isocenter. The PRC and MRC inner surfaces were compared with composite analysis (CA) using 3DVS, using the CA pass rate for quantitative analysis. To evaluate the detection accuracy of the system, the authors designed a fake PRC by artificially adding small cubic islands with side lengths of 1.5, 2.5, and 3.5 mm on the inner surface of the PRC and performed CA with the depth difference and distance-to-agreement tolerances of [1 mm, 1 mm], [2 mm, 2 mm], and [3 mm, 3 mm]. In addition, the authors performed clinical tests using seven RCs [computerized milling machine (CMM)-RCs] manufactured by CMM, which were designed for treating various disease sites. The systematic offsets of the seven CMM-RCs were evaluated through the automatic registration function of AutoRCQA. For comparison with conventional technique, the authors measured the thickness at three points in each of the seven CMM-RCs using a manual depth measurement device and calculated thickness difference based on the TPS data (TPS-manual measurement). These results were compared with data obtained from 3DVS. The geometrical accuracy of each CMM-RC inner surface was investigated using the TPS data by performing CA with the same criteria. The authors also measured the net processing time, including the scan and analysis time. Results: The Auto
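The composite analysis (CA) criterion above, with its paired depth-difference and distance-to-agreement (DTA) tolerances, can be illustrated on a 1-D depth profile. This is a simplification of the paper's 3-D inner-surface comparison, and all numbers and names are synthetic:

```python
import numpy as np

def ca_pass_rate(planned, measured, spacing, depth_tol, dta_tol):
    """Fraction of planned points passing a simplified 1-D CA test.

    A point passes if its depth difference is within depth_tol, or if any
    measured point within dta_tol (in mm, via spacing) agrees in depth.
    """
    n_pass = 0
    for i, p in enumerate(planned):
        if abs(measured[i] - p) <= depth_tol:
            n_pass += 1
            continue
        lo = max(0, i - int(dta_tol / spacing))
        hi = min(len(measured), i + int(dta_tol / spacing) + 1)
        if np.any(np.abs(measured[lo:hi] - p) <= depth_tol):
            n_pass += 1
    return n_pass / len(planned)

planned = np.linspace(10, 20, 101)    # smooth planned depths (mm)
measured = planned + 0.5              # uniform 0.5 mm machining offset
rate = ca_pass_rate(planned, measured, spacing=1.0, depth_tol=1.0, dta_tol=1.0)
```

A 0.5 mm offset passes the [1 mm, 1 mm] criterion everywhere, while an artificial island taller than the depth tolerance (as in the paper's fake-PRC test) would show up as failing points.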

  7. 42 CFR 438.240 - Quality assessment and performance improvement program.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS MANAGED CARE Quality Assessment and... PIHP have an ongoing quality assessment and performance improvement program for the services it... and overutilization of services. (4) Have in effect mechanisms to assess the quality...

  8. [Systematic economic assessment and quality evaluation for traditional Chinese medicines].

    PubMed

    Sun, Xiao; Guo, Li-ping; Shang, Hong-cai; Ren, Ming; Lei, Xiang

    2015-05-01

To review the economic studies on traditional Chinese medicines in the domestic literature, in order to analyze the current economic assessment of traditional Chinese medicines and explore the existing problems. CNKI, VIP, the Wanfang database and CBM were searched by computer, and all literature about the economic assessment of traditional Chinese medicines published in professional domestic journals was included in the systematic assessment and quality evaluation. Finally, 50 articles were included in the study, and the systematic assessment and quality evaluation were made for them in terms of title, year, authors' identity, expense source, disease type, study perspective, study design type, study target, study target source, time limit, cost calculation, effect indicator, analytical technique and sensitivity analysis. The final quality score was 0.74, which is very low. The results showed insufficient studies on the economics of traditional Chinese medicines, short study durations and simple evaluation methods, which should be addressed through unremitting efforts in the future. PMID:26390672

  10. Semi-automatic detection of Gd-DTPA-saline filled capsules for colonic transit time assessment in MRI

    NASA Astrophysics Data System (ADS)

    Harrer, Christian; Kirchhoff, Sonja; Keil, Andreas; Kirchhoff, Chlodwig; Mussack, Thomas; Lienemann, Andreas; Reiser, Maximilian; Navab, Nassir

    2008-03-01

    Functional gastrointestinal disorders result in a significant number of consultations in primary care facilities. Chronic constipation and diarrhea are regarded as two of the most common diseases affecting between 2% and 27% of the population in western countries 1-3. Defecatory disorders are most commonly due to dysfunction of the pelvic floor or the anal sphincter. Although an exact differentiation of these pathologies is essential for adequate therapy, diagnosis is still only based on a clinical evaluation1. Regarding quantification of constipation only the ingestion of radio-opaque markers or radioactive isotopes and the consecutive assessment of colonic transit time using X-ray or scintigraphy, respectively, has been feasible in clinical settings 4-8. However, these approaches have several drawbacks such as involving rather inconvenient, time consuming examinations and exposing the patient to ionizing radiation. Therefore, conventional assessment of colonic transit time has not been widely used. Most recently a new technique for the assessment of colonic transit time using MRI and MR-contrast media filled capsules has been introduced 9. However, due to numerous examination dates per patient and corresponding datasets with many images, the evaluation of the image data is relatively time-consuming. The aim of our study was to develop a computer tool to facilitate the detection of the capsules in MRI datasets and thus to shorten the evaluation time. We present a semi-automatic tool which provides an intensity, size 10, and shape-based 11,12 detection of ingested Gd-DTPA-saline filled capsules. After an automatic pre-classification, radiologists may easily correct the results using the application-specific user interface, therefore decreasing the evaluation time significantly.
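The intensity- and size-based detection stage can be sketched as thresholding followed by connected-component labeling and an area filter. The shape criterion is omitted here, and the thresholds and test data are invented, not taken from the tool described above:

```python
import numpy as np

def label_components(mask):
    """4-connected component labeling by iterative flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        current += 1
        stack = [(y, x)]
        labels[y, x] = current
        while stack:
            cy, cx = stack.pop()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                           (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    stack.append((ny, nx))
    return labels, current

def detect_capsules(img, thr=0.5, min_area=4, max_area=50):
    """Return labels of bright blobs within a plausible capsule-size range."""
    labels, n = label_components(img > thr)
    return [i for i in range(1, n + 1)
            if min_area <= (labels == i).sum() <= max_area]

img = np.zeros((20, 20))
img[5:8, 5:8] = 1.0     # capsule-sized bright blob (area 9)
img[15, 15] = 1.0       # single-pixel noise (area 1, rejected)
found = detect_capsules(img)
```

The semi-automatic tool would present such candidates for the radiologist to confirm or correct, which is where most of the evaluation time is saved.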

  11. Assessment of mesh simplification algorithm quality

    NASA Astrophysics Data System (ADS)

    Roy, Michael; Nicolier, Frederic; Foufou, S.; Truchetet, Frederic; Koschan, Andreas; Abidi, Mongi A.

    2002-03-01

    Traditionally, medical geneticists have employed visual inspection (anthroposcopy) to clinically evaluate dysmorphology. In the last 20 years, there has been an increasing trend towards quantitative assessment to render diagnosis of anomalies more objective and reliable. These methods have focused on direct anthropometry, using a combination of classical physical anthropology tools and new instruments tailor-made to describe craniofacial morphometry. These methods are painstaking and require that the patient remain still for extended periods of time. Most recently, semiautomated techniques (e.g., structured light scanning) have been developed to capture the geometry of the face in a matter of seconds. In this paper, we establish that direct anthropometry and structured light scanning yield reliable measurements, with remarkably high levels of inter-rater and intra-rater reliability, as well as validity (contrasting the two methods).

  12. Myocardial Iron Loading Assessment by Automatic Left Ventricle Segmentation with Morphological Operations and Geodesic Active Contour on T2* images

    NASA Astrophysics Data System (ADS)

    Luo, Yun-Gang; Ko, Jacky Kl; Shi, Lin; Guan, Yuefeng; Li, Linong; Qin, Jing; Heng, Pheng-Ann; Chu, Winnie Cw; Wang, Defeng

    2015-07-01

Thalassemia patients with myocardial iron loading can be identified using T2* magnetic resonance imaging (MRI). To quantitatively assess cardiac iron loading, we proposed an effective algorithm to segment the myocardium in aligned free induction decay sequential images based on morphological operations and the geodesic active contour (GAC). Patients with thalassemia major (10 male and 16 female) were recruited to undergo a thoracic MRI scan in the short-axis view. Free induction decay images were registered for T2* mapping. The GAC was utilized to segment the aligned MR images with a robust initialization. Segmented myocardium regions were divided into sectors for a region-based quantification of cardiac iron loading. Our automatic segmentation approach achieved a true positive rate of 84.6% and a false positive rate of 53.8%. The area difference between manual and automatic segmentation was 25.5% after 1000 iterations. Results from the T2* analysis indicated that regions with T2* lower than 20 ms suffered from heavy iron loading in thalassemia major patients. The proposed method benefits from the abundant edge information of the free induction decay sequential MRI. Experimental results demonstrated that the proposed method is feasible for myocardium segmentation and clinically applicable to measuring myocardial iron loading.
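The 20 ms cutoff cited above is applied to per-region T2* values, which are conventionally obtained by fitting a mono-exponential decay S(TE) = S0·exp(−TE/T2*) to the multi-echo signal. A hedged sketch with synthetic echo times and signals (the abstract does not specify the fitting procedure; a log-linear least-squares fit is assumed):

```python
import numpy as np

def fit_t2star(te_ms, signal):
    """Fit S(TE) = S0 * exp(-TE / T2*) by log-linear least squares."""
    slope, intercept = np.polyfit(te_ms, np.log(signal), 1)
    return -1.0 / slope          # T2* in ms

te = np.array([2.0, 4.0, 6.0, 9.0, 12.0, 15.0])   # echo times (ms)
healthy = 1000 * np.exp(-te / 30.0)                # simulated T2* = 30 ms
iron_loaded = 1000 * np.exp(-te / 10.0)            # simulated T2* = 10 ms
t2_healthy = fit_t2star(te, healthy)
t2_loaded = fit_t2star(te, iron_loaded)
```

Per the study's criterion, sectors whose fitted T2* falls below 20 ms would be flagged as heavily iron-loaded.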

  13. Myocardial Iron Loading Assessment by Automatic Left Ventricle Segmentation with Morphological Operations and Geodesic Active Contour on T2* images

    PubMed Central

    Luo, Yun-gang; Ko, Jacky KL; Shi, Lin; Guan, Yuefeng; Li, Linong; Qin, Jing; Heng, Pheng-Ann; Chu, Winnie CW; Wang, Defeng

    2015-01-01

Thalassemia patients with myocardial iron loading can be identified using T2* magnetic resonance imaging (MRI). To quantitatively assess cardiac iron loading, we proposed an effective algorithm to segment the myocardium in aligned free induction decay sequential images based on morphological operations and the geodesic active contour (GAC). Patients with thalassemia major (10 male and 16 female) were recruited to undergo a thoracic MRI scan in the short-axis view. Free induction decay images were registered for T2* mapping. The GAC was utilized to segment the aligned MR images with a robust initialization. Segmented myocardium regions were divided into sectors for a region-based quantification of cardiac iron loading. Our automatic segmentation approach achieved a true positive rate of 84.6% and a false positive rate of 53.8%. The area difference between manual and automatic segmentation was 25.5% after 1000 iterations. Results from the T2* analysis indicated that regions with T2* lower than 20 ms suffered from heavy iron loading in thalassemia major patients. The proposed method benefits from the abundant edge information of the free induction decay sequential MRI. Experimental results demonstrated that the proposed method is feasible for myocardium segmentation and clinically applicable to measuring myocardial iron loading. PMID:26215336

  14. Constructing Assessment Model of Primary and Secondary Educational Quality with Talent Quality as the Core Standard

    ERIC Educational Resources Information Center

    Chen, Benyou

    2014-01-01

Quality is the core of education, and it is central to the standardization of primary and secondary education in urban (U) and rural (R) areas. The ultimate goal of integrating urban and rural education is to pursue quality education in both. Based on analysing the related policy basis and the existing assessment models…

  15. No Reference Video-Quality-Assessment Model for Monitoring Video Quality of IPTV Services

    NASA Astrophysics Data System (ADS)

    Yamagishi, Kazuhisa; Okamoto, Jun; Hayashi, Takanori; Takahashi, Akira

Service providers should monitor the quality of experience of a communication service in real time to confirm its status. To do this, we previously proposed a packet-layer model that can be used for monitoring the average video quality of typical Internet protocol television content using parameters derived from transmitted packet headers. However, it is difficult to monitor the video quality per user using the average video quality because video quality depends on the video content. To accurately monitor the video quality per user, a model that estimates the video quality per video content, rather than the average video quality, should be developed. Therefore, to take into account the impact of video content on video quality, we propose a model that calculates the difference between the video quality of the estimation-target video and the average video quality estimated using a packet-layer model. We first conducted extensive subjective quality assessments for different codecs and video sequences. We then modeled their characteristics based on parameters related to compression and packet loss. Finally, we verified the performance of the proposed model by applying it to unknown data sets different from the training data sets used for developing the model.

  16. Methods for assessing the quality of runoff from Minnesota peatlands

    SciTech Connect

    Clausen, J.C.

    1981-01-01

The quality of runoff from large, undisturbed peatlands in Minnesota is characterized, and sampling results from a number of bogs (referred to as a multiple watershed approach) were used to assess the effects of peat mining on the quality of bog runoff. Runoff from 45 natural peatlands and one mined bog was sampled five times in 1979-80 and analyzed for 34 water quality characteristics. Peatland watersheds were classified as bog, transition, or fen, based upon both water quality and watershed characteristics. Alternative classification methods were based on frequency distributions, cluster analysis, discriminant analysis, and principal component analysis results. The multiple watershed approach was used as a basis for drawing inferences regarding the quality of runoff from a representative sample of natural bogs and a mined bog. The multiple watershed technique provides an alternative to long-term paired watershed experiments in evaluating the effects of land use activities on the quality of runoff from peatlands in Minnesota.

  17. Analysis, assessment and mapping of groundwater quality of Chandigarh (India).

    PubMed

    Bansal, Rajesh; Sharma, L N; John, Siby

    2011-04-01

    Chandigarh (India) has been depending on groundwater resources to meet its water requirements in addition to the surface water source (Bhakra Main Canal). With a view to assess the groundwater quality, samples were collected from geo-referenced tube wells in different localities of the city. Samples were analysed for conventional parameters indicative of the physico-chemical quality of groundwater. The groundwater quality mapping was attempted using the ARCGIS 9.0. Thematic maps were generated for each parameter of groundwater quality. This paper presents the spatial distribution of groundwater quality of Chandigarh city. The quality of groundwater was found to be varying with geology of the area as well as the land use and land cover.

  18. [Assessment of life quality in children with spina bifida].

    PubMed

    Król, Marianna; Sibiński, Marcin; Stefański, Maciej; Synder, Marek

    2011-01-01

The aim of the study was the identification and assessment of factors influencing quality of life in children with spina bifida. The study included 33 children (19 girls and 14 boys) aged 5 to 20 years, divided into two groups: the first aged 5 to 12 years (17 patients) and the second aged 13 to 20 years (16 patients). The Health-related Quality of Life in Spina Bifida Questionnaire and a questionnaire of our own design were used for the study. Younger children had an average score of 158 points and older children an average of 186 points. In the whole group, 64% of children assessed their quality of life as good, 30% as very good, and 6% as average. None of our patients considered their quality of life poor or very poor. The presence of visual perception difficulties in the younger group and non-ambulation in older children were related to poorer assessment of quality of life. Older children living in a house assessed their quality of life better than children living in blocks of flats. The vast majority of children with spina bifida have good specialist medical care. The most common concomitant diseases are hydrocephalus and neurogenic urinary bladder.

  19. Assessing the quality of a student-generated question repository

    NASA Astrophysics Data System (ADS)

    Bates, Simon P.; Galloway, Ross K.; Riise, Jonathan; Homer, Danny

    2014-12-01

    We present results from a study that categorizes and assesses the quality of questions and explanations authored by students in question repositories produced as part of the summative assessment in introductory physics courses over two academic sessions. Mapping question quality onto the levels in the cognitive domain of Bloom's taxonomy, we find that students produce questions of high quality. More than three-quarters of questions fall into categories beyond simple recall, in contrast to similar studies of student-authored content in different subject domains. Similarly, the quality of student-authored explanations for questions was also high, with approximately 60% of all explanations classified as being of high or outstanding quality. Overall, 75% of questions met combined quality criteria, which we hypothesize is due in part to the in-class scaffolding activities that we provided for students ahead of requiring them to author questions. This work presents the first systematic investigation into the quality of student produced assessment material in an introductory physics context, and thus complements and extends related studies in other disciplines.

  20. Quality assurance checks on ecological risk assessments

    SciTech Connect

    Ferson, S.; Ginzburg, L.

    1995-12-31

    Three major criticisms are routinely made against probabilistic ecological risk assessments: (1) input distributions are often not available, (2) correlations and dependencies are often ignored, and (3) mathematical structure of the ecological model is often questionable. These criticisms are well understood by risk analysts, but it is generally assumed that their only solution is additional empirical effort to develop input distributions, measure correlations and validate the model. As a practical matter, since such empirical information is typically incomplete (and indeed often quite sparse), analysts are forced to make assumptions without empirical justifications. There are, however, computational methods that may allow analysts to sidestep a lack of information to partially or completely answer the three criticisms. When empirical information about the input distributions is limited, comprehensive representations of uncertainty can be estimated using traditional confidence interval or bounding procedures. Using recently developed methods, the probability distribution bounds can be used directly in calculations. When the correlation and dependency structure among variables is unknown, bounds on solutions can be computed without having to make unjustified and possibly false assumptions about independence. Finally, automated checks on the ecological model or mathematical expression used in the risk analysis can be employed to ensure the absence of several classes of structural and mathematical errors. Several kinds of profound errors which are routinely committed in practice, including dimensional or unit discordance, infeasible configurations for correlation, and multiple instantiations of a repeated variable, can all be detected using currently available methods and software.
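The second criticism above, unknown correlations and dependencies, can be sidestepped with bounding arithmetic: the classical Fréchet bounds constrain the probability of a conjunction or disjunction of two events under any dependence structure. A minimal sketch, with illustrative probabilities not taken from the source:

```python
def frechet_and(p_a, p_b):
    """Bounds on P(A and B) that hold for ANY dependence between A and B."""
    return (max(0.0, p_a + p_b - 1.0), min(p_a, p_b))

def frechet_or(p_a, p_b):
    """Bounds on P(A or B) that hold for any dependence."""
    return (max(p_a, p_b), min(1.0, p_a + p_b))

# Illustrative exposure/effect probabilities: the joint probability is bounded
# without assuming independence between the two events.
lo, hi = frechet_and(0.7, 0.6)   # ≈ (0.3, 0.6)
```

When the interval is too wide to support a decision, that width itself signals where additional empirical effort on dependence structure would pay off.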

  1. Human Variome Project Quality Assessment Criteria for Variation Databases.

    PubMed

    Vihinen, Mauno; Hancock, John M; Maglott, Donna R; Landrum, Melissa J; Schaafsma, Gerard C P; Taschner, Peter

    2016-06-01

    Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease(s). These databases are widely considered as the most reliable information source for a particular gene/protein/disease, but it should also be made clear they may have widely varying contents, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The HVP quality evaluation criteria that resulted are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases, BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance.

  2. Assessing sound exposure from shipping in coastal waters using a single hydrophone and Automatic Identification System (AIS) data.

    PubMed

    Merchant, Nathan D; Witt, Matthew J; Blondel, Philippe; Godley, Brendan J; Smith, George H

    2012-07-01

    Underwater noise from shipping is a growing presence throughout the world's oceans, and may be subjecting marine fauna to chronic noise exposure with potentially severe long-term consequences. The coincidence of dense shipping activity and sensitive marine ecosystems in coastal environments is of particular concern, and noise assessment methodologies which describe the high temporal variability of sound exposure in these areas are needed. We present a method of characterising sound exposure from shipping using continuous passive acoustic monitoring combined with Automatic Identification System (AIS) shipping data. The method is applied to data recorded in Falmouth Bay, UK. Absolute and relative levels of intermittent ship noise contributions to the 24-h sound exposure level are determined using an adaptive threshold, and the spatial distribution of potential ship sources is then analysed using AIS data. This technique can be used to prioritize shipping noise mitigation strategies in coastal marine environments. PMID:22658576
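The 24-h sound exposure level (SEL) named above is a cumulative energy metric. A minimal sketch, assuming SPL samples in dB and a fixed threshold to isolate ship passages (the paper uses an adaptive threshold); function names are illustrative:

```python
import math

def sel_db(spl_series_db, dt_s):
    """Cumulative sound exposure level (dB) from an SPL time series sampled
    every dt_s seconds: 10*log10 of the summed linear-scale energy."""
    energy = sum(10 ** (spl / 10.0) * dt_s for spl in spl_series_db)
    return 10.0 * math.log10(energy)

def ship_contribution_db(spl_series_db, dt_s, threshold_db):
    """SEL of the samples exceeding a threshold (fixed here, adaptive in the
    paper), taken as the intermittent ship-noise contribution."""
    loud = [spl for spl in spl_series_db if spl > threshold_db]
    return sel_db(loud, dt_s) if loud else float("-inf")

# One minute of ambient noise at 100 dB plus a 10 s ship pass at 120 dB:
# the brief pass dominates the cumulative exposure.
series = [100.0] * 60 + [120.0] * 10
total = sel_db(series, dt_s=1.0)                                    # ~130.3 dB
ship = ship_contribution_db(series, dt_s=1.0, threshold_db=110.0)   # ~130.0 dB
```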

  3. Automatic assessment of a two-phase structure in the duplex stainless-steel SAF 2205

    SciTech Connect

    Komenda, J. ); Sandstroem, R. )

    1993-10-01

Automatic image analysis was used to study the effect of deformation on the size and distribution of the austenite and ferrite phases in the duplex stainless steel SAF 2205 (22Cr-5Ni-3Mo-0.15N). The main parameters used were the chord size to characterize the ferrite phase and Feret's diameter for the austenite phase. As the deformation increased, ferrite bands became more elongated and thinner, contributing to pronounced banding. The amount of banding can be quantified using the ratio between the slopes of the chord size distributions in the longitudinal and short transverse directions. A proposed model of the influence of deformation on the two-phase structure describes the process of austenite elongation and the subdivision (crushing) of austenite islands. The effect of deformation on the yield and tensile strength was expressed using a Hall-Petch type relationship in which the grain size was represented by the average width of the ferrite bands. The observed anisotropy in strength properties is believed to be due to texture hardening. Because elongation at a given strength level is the same in both the longitudinal and transverse directions, the banding itself does not influence the ductility. Nor can the strength anisotropy be due to banding, because the strength is greater in the longitudinal than in the transverse direction.

  4. Quality assessment of malaria laboratory diagnosis in South Africa.

    PubMed

    Dini, Leigh; Frean, John

    2003-01-01

To assess the quality of malaria diagnosis in 115 South African laboratories participating in the National Health Laboratory Service Parasitology External Quality Assessment Programme, we reviewed the results of 7 surveys conducted from January 2000 to August 2002. The mean incorrect result rate was 13.8% (95% CI 11.3-16.9%), which is alarmingly high, with about 1 in 7 blood films being incorrectly interpreted. Most participants with incorrect blood film interpretations had acceptable Giemsa staining quality, indicating that there is less of a problem with staining technique than with blood film interpretation. Laboratories in provinces in which malaria is endemic did not necessarily perform better than those in non-endemic areas. The results clearly suggest that malaria laboratory diagnosis throughout South Africa needs strengthening by improving laboratory standardization and auditing, training, quality assurance and referral resources. PMID:16117961
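The reported 13.8% (95% CI 11.3-16.9%) is a binomial proportion with its confidence interval. A hedged sketch of one standard way to compute such an interval (the Wilson score method; the surveys' exact counts are not given, so the counts below are illustrative):

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1.0 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Illustrative counts giving a 13.8% error rate (69 of 500 films).
lo, hi = wilson_ci(69, 500)
```

The Wilson interval is asymmetric around the point estimate, which matches the shape of the interval quoted in the abstract better than the naive normal approximation.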

  5. Quality Assessment and Physicochemical Characteristics of Bran Enriched Chapattis

    PubMed Central

    Dar, B. N.; Sharma, Savita; Singh, Baljit; Kaur, Gurkirat

    2014-01-01

Cereal brans, singly and in combination, were blended at varying levels (5 and 10%) for the development of chapattis. The bran-enriched chapattis were assessed for quality and physicochemical characteristics. On the basis of the quality assessment, the 10% enrichment level was best. Moisture content, water activity, and free fatty acids remained stable during the study period. The quality assessment and physicochemical characterization revealed that dough handling and puffing of chapattis prepared at the 5 and 10% levels of bran supplementation did not vary significantly. All types of bran-enriched chapattis except those enriched with rice bran showed non-sticky behavior during dough handling. Bran-enriched chapattis exhibited full puffing character during preparation. The sensory attributes showed that both 5 and 10% bran-supplemented chapattis were acceptable. PMID:26904644

  6. Quality-of-life assessment techniques for veterinarians.

    PubMed

    Villalobos, Alice E

    2011-05-01

    The revised veterinary oath commits the profession to the prevention and relief of animal suffering. There is a professional obligation to properly assess quality of life (QoL) and confront the issues that ruin it, such as undiagnosed suffering. There are no clinical studies in the arena of QoL assessment at the end of life for pets. This author developed a user-friendly QoL scale to help make proper assessments and decisions along the way to the conclusion of a terminal patient's life. This article discusses decision aids and establishes commonsense techniques to assess a pet's QoL.

  7. Assessment of permeation quality of concrete through mercury intrusion porosimetry

    SciTech Connect

    Kumar, Rakesh; Bhattacharjee, B

    2004-02-01

The permeation quality of laboratory-cast concrete beams was determined through the initial surface absorption test (ISAT). The pore system characteristics of the same concrete beam specimens were determined through mercury intrusion porosimetry (MIP). The measured initial surface absorption rate of water by concrete and the pore system characteristics estimated from the porosimetry results were used to develop correlations between the two. Through these correlations, the potential of MIP for assessing the durability quality of concrete in actual structures is demonstrated.

  8. Space Shuttle flying qualities and flight control system assessment study

    NASA Technical Reports Server (NTRS)

    Myers, T. T.; Johnston, D. E.; Mcruer, D.

    1982-01-01

    The suitability of existing and proposed flying quality and flight control system criteria for application to the space shuttle orbiter during atmospheric flight phases was assessed. An orbiter experiment for flying qualities and flight control system design criteria is discussed. Orbiter longitudinal and lateral-directional flying characteristics, flight control system lag and time delay considerations, and flight control manipulator characteristics are included. Data obtained from conventional aircraft may be inappropriate for application to the shuttle orbiter.

  9. The Information Quality Triangle: a methodology to assess clinical information quality.

    PubMed

    Choquet, Rémy; Qouiyd, Samiha; Ouagne, David; Pasche, Emilie; Daniel, Christel; Boussaïd, Omar; Jaulent, Marie-Christine

    2010-01-01

Building qualitative clinical decision support or monitoring based on information stored in clinical information (or EHR) systems cannot be done without assessing and controlling information quality. Numerous works have introduced methods and measures to qualify and enhance the quality of data, information models and terminologies. This paper introduces an approach based on an Information Quality Triangle that aims at providing a generic framework to help in characterizing quality measures and methods in the context of the integration of EHR data in a clinical data warehouse. We have successfully experimented with the proposed approach at the HEGP hospital in France, as part of the DebugIT EU FP7 project.

  10. Image enhancement and quality measures for dietary assessment using mobile devices

    NASA Astrophysics Data System (ADS)

    Xu, Chang; Zhu, Fengqing; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2012-03-01

    Measuring accurate dietary intake is considered to be an open research problem in the nutrition and health fields. We are developing a system, known as the mobile device food record (mdFR), to automatically identify and quantify foods and beverages consumed based on analyzing meal images captured with a mobile device. The mdFR makes use of a fiducial marker and other contextual information to calibrate the imaging system so that accurate amounts of food can be estimated from the scene. Food identification is a difficult problem since foods can dramatically vary in appearance. Such variations may arise not only from non-rigid deformations and intra-class variability in shape, texture, color and other visual properties, but also from changes in illumination and viewpoint. To address the color consistency problem, this paper describes illumination quality assessment methods implemented on a mobile device and three post color correction methods.

  11. Machine learning approach for objective inpainting quality assessment

    NASA Astrophysics Data System (ADS)

    Frantc, V. A.; Voronin, V. V.; Marchuk, V. I.; Sherstobitov, A. I.; Agaian, S.; Egiazarian, K.

    2014-05-01

This paper focuses on a machine learning approach for objective inpainting quality assessment. Inpainting has received a lot of attention in recent years, and quality assessment is an important task in evaluating different image reconstruction approaches. Quantitative metrics for successful image inpainting currently do not exist; researchers instead rely upon qualitative human comparisons to evaluate their methodologies and techniques. We present an approach for objective inpainting quality assessment based on natural image statistics and machine learning techniques. Our method is based on the observation that when images are properly normalized or transferred to a transform domain, local descriptors can be modeled by parametric distributions, and the shapes of these distributions differ between non-inpainted and inpainted images. This approach yields a feature vector strongly correlated with subjective image perception by the human visual system. Next, we use a support vector regression, trained on images assessed by humans, to predict the perceived quality of inpainted images. We demonstrate that the predicted quality value correlates repeatably with qualitative opinion in a human observer study.
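The feature-extraction step described above can be approximated crudely: instead of fitting a full parametric distribution, one can summarize the local-descriptor distribution by its sample moments, whose shape shifts between inpainted and non-inpainted regions. A minimal sketch (a moments-based stand-in, not the authors' actual features; the regression stage is omitted):

```python
def descriptor_features(values):
    """Moment summary (mean, variance, skewness, kurtosis) of a sample of
    local descriptor values; the shape of this distribution is what shifts
    for inpainted regions."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    if var == 0.0:
        return [mean, 0.0, 0.0, 0.0]
    std = var ** 0.5
    skew = sum(((v - mean) / std) ** 3 for v in values) / n
    kurt = sum(((v - mean) / std) ** 4 for v in values) / n
    return [mean, var, skew, kurt]

feats = descriptor_features([1.0, 2.0, 3.0, 4.0])
```

In the paper these shape features feed a support vector regression trained on human-scored images; any regressor mapping the feature vector to a quality score would fill that role in this sketch.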

  12. Recreational stream assessment using Malaysia water quality index

    NASA Astrophysics Data System (ADS)

    Ibrahim, Hanisah; Kutty, Ahmad Abas

    2013-11-01

River water quality assessment is crucial for quantifying and monitoring water quality both spatially and temporally. Malaysia uses the WQI and NWQS indices to evaluate river water quality; however, studies of recreational river water quality are still scarce. A study was conducted to determine the water quality of selected recreational rivers and the impact of recreation on them. Three recreational streams, namely Sungai Benus, Sungai Cemperuh and Sungai Luruh in Janda Baik, Pahang, were selected. Five sampling stations were chosen on each river at 200-400 m intervals. Six water quality parameters, BOD5, COD, TSS, pH, ammoniacal nitrogen and dissolved oxygen, were measured. Sampling and analysis were conducted following the standard methods prepared by the USEPA. These parameters were used to calculate the water quality sub-indices and finally an indicative WQI value using the Malaysia water quality index formula. Results indicate that all the recreational streams have excellent water quality, with WQI values ranging from 89 to 94. Most water quality parameters were homogeneous between sampling sites and between streams. A one-way ANOVA test indicated no significant difference between the sub-index values (p > 0.05, α = 0.05). Only BOD and COD exhibited slight variation between stations, likely due to organic domestic waste left by visitors. The study demonstrated that the impact of visitors on the recreational streams is minimal and that the streams are suitable for direct-contact recreation.
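The aggregation step uses the Malaysian DOE WQI, a weighted sum of six 0-100 sub-index values. A minimal sketch; the weights below are the commonly cited DOE values and should be verified against the official documentation:

```python
# Commonly cited weights of the Malaysian DOE water quality index
# (hedged: verify against official DOE documentation before use).
WEIGHTS = {"DO": 0.22, "BOD": 0.19, "COD": 0.16, "AN": 0.15, "SS": 0.16, "pH": 0.12}

def wqi(subindices):
    """Weighted sum of the six 0-100 sub-index values (keys as in WEIGHTS)."""
    return sum(WEIGHTS[k] * subindices[k] for k in WEIGHTS)

# The weights sum to 1, so if every sub-index is 92, the overall WQI is 92,
# in the range the study reports for these streams.
score = wqi({k: 92.0 for k in WEIGHTS})
```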

  13. Assessment of foodservice quality and identification of improvement strategies using hospital foodservice quality model

    PubMed Central

    Kim, Kyungjoo; Kim, Minyoung

    2010-01-01

The purposes of this study were to assess hospital foodservice quality and to identify causes of quality problems and improvement strategies. Based on the review of literature, hospital foodservice quality was defined and the Hospital Foodservice Quality model was presented. The study was conducted in two steps. In Step 1, nutritional standards specified on diet manuals and nutrients of planned menus, served meals, and consumed meals for regular, diabetic, and low-sodium diets were assessed in three general hospitals. Quality problems were found in all three hospitals since patients consumed less than their nutritional requirements. Considering the effects of four gaps in the Hospital Foodservice Quality model, Gaps 3 and 4 were selected as critical control points (CCPs) for hospital foodservice quality management. In Step 2, the causes of the gaps and improvement strategies at CCPs were labeled as "quality hazards" and "corrective actions", respectively, and were identified using a case study. At Gap 3, inaccurate forecasting and a lack of control during production were identified as quality hazards, and the corrective actions proposed were establishing an accurate forecasting system, improving standardized recipes, emphasizing the use of standardized recipes, and conducting employee training. At Gap 4, quality hazards were menus of low preferences, inconsistency of menu quality, a lack of menu variety, improper food temperatures, and patients' lack of understanding of their nutritional requirements. To reduce Gap 4, the dietary departments should conduct patient surveys on menu preferences on a regular basis, develop new menus, especially for therapeutic diets, maintain food temperatures during distribution, provide more choices, conduct meal rounds, and provide nutrition education and counseling. The Hospital Foodservice Quality Model was a useful tool for identifying causes of the foodservice quality problems and improvement strategies from a holistic point of view.

  14. Image quality and radiation reduction of 320-row area detector CT coronary angiography with optimal tube voltage selection and an automatic exposure control system: comparison with body mass index-adapted protocol.

    PubMed

    Lim, Jiyeon; Park, Eun-Ah; Lee, Whal; Shim, Hackjoon; Chung, Jin Wook

    2015-06-01

    To assess the image quality and radiation exposure of 320-row area detector computed tomography (320-ADCT) coronary angiography with optimal tube voltage selection with the guidance of an automatic exposure control system in comparison with a body mass index (BMI)-adapted protocol. Twenty-two patients (study group) underwent 320-ADCT coronary angiography using an automatic exposure control system with the target standard deviation value of 33 as the image quality index and the lowest possible tube voltage. For comparison, a sex- and BMI-matched group (control group, n = 22) using a BMI-adapted protocol was established. Images of both groups were reconstructed by an iterative reconstruction algorithm. For objective evaluation of the image quality, image noise, vessel density, signal to noise ratio (SNR), and contrast to noise ratio (CNR) were measured. Two blinded readers then subjectively graded the image quality using a four-point scale (1: nondiagnostic to 4: excellent). Radiation exposure was also measured. Although the study group tended to show higher image noise (14.1 ± 3.6 vs. 9.3 ± 2.2 HU, P = 0.111) and higher vessel density (665.5 ± 161 vs. 498 ± 143 HU, P = 0.430) than the control group, the differences were not significant. There was no significant difference between the two groups for SNR (52.5 ± 19.2 vs. 60.6 ± 21.8, P = 0.729), CNR (57.0 ± 19.8 vs. 67.8 ± 23.3, P = 0.531), or subjective image quality scores (3.47 ± 0.55 vs. 3.59 ± 0.56, P = 0.960). However, radiation exposure was significantly reduced by 42 % in the study group (1.9 ± 0.8 vs. 3.6 ± 0.4 mSv, P = 0.003). Optimal tube voltage selection with the guidance of an automatic exposure control system in 320-ADCT coronary angiography allows substantial radiation reduction without significant impairment of image quality, compared to the results obtained using a BMI-based protocol. PMID:25604967
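The objective metrics reported above are simple ratios of region-of-interest (ROI) statistics. A minimal sketch, assuming the vessel mean, background mean, and noise SD (in HU) have already been measured from ROIs; exact ROI definitions vary between studies:

```python
def snr(vessel_mean_hu, noise_sd_hu):
    """Signal-to-noise ratio: mean vessel attenuation over image noise (SD)."""
    return vessel_mean_hu / noise_sd_hu

def cnr(vessel_mean_hu, background_mean_hu, noise_sd_hu):
    """Contrast-to-noise ratio: vessel-to-background contrast over image noise."""
    return (vessel_mean_hu - background_mean_hu) / noise_sd_hu
```

This is why the study group's higher noise did not degrade SNR/CNR significantly: its higher vessel density (numerator) offset the higher noise (denominator).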


  16. How to assess the quality of your analytical method?

    PubMed

    Topic, Elizabeta; Nikolac, Nora; Panteghini, Mauro; Theodorsson, Elvar; Salvagno, Gian Luca; Miler, Marijana; Simundic, Ana-Maria; Infusino, Ilenia; Nordin, Gunnar; Westgard, Sten

    2015-10-01

    Laboratory medicine is amongst the fastest growing fields in medicine, crucial in diagnosis, support of prevention and in the monitoring of disease for individual patients and for the evaluation of treatment for populations of patients. Therefore, high quality and safety in laboratory testing has a prominent role in high-quality healthcare. Applied knowledge and competencies of professionals in laboratory medicine increases the clinical value of laboratory results by decreasing laboratory errors, increasing appropriate utilization of tests, and increasing cost effectiveness. This collective paper provides insights into how to validate the laboratory assays and assess the quality of methods. It is a synopsis of the lectures at the 15th European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Continuing Postgraduate Course in Clinical Chemistry and Laboratory Medicine entitled "How to assess the quality of your method?" (Zagreb, Croatia, 24-25 October 2015). The leading topics to be discussed include who, what and when to do in validation/verification of methods, verification of imprecision and bias, verification of reference intervals, verification of qualitative test procedures, verification of blood collection systems, comparability of results among methods and analytical systems, limit of detection, limit of quantification and limit of decision, how to assess the measurement uncertainty, the optimal use of Internal Quality Control and External Quality Assessment data, Six Sigma metrics, performance specifications, as well as biological variation. This article, which continues the annual tradition of collective papers from the EFLM continuing postgraduate courses in clinical chemistry and laboratory medicine, aims to provide further contributions by discussing the quality of laboratory methods and measurements and, at the same time, to offer continuing professional development to the attendees.
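One of the listed topics, the Six Sigma metric, combines allowable total error, bias, and imprecision into a single performance figure. A minimal sketch of the standard formula (all quantities in percent; parameter names are illustrative):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Six Sigma metric of an assay:
    (allowable total error - |bias|) / imprecision (CV)."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# E.g. allowable total error 10%, bias 1%, CV 1.5% gives sigma = 6,
# conventionally regarded as world-class analytical performance.
sigma = sigma_metric(10.0, 1.0, 1.5)
```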

  17. Automated content and quality assessment of full-motion-video for the generation of meta data

    NASA Astrophysics Data System (ADS)

    Harguess, Josh

    2015-05-01

    Virtually all of the video data (and full-motion-video (FMV)) that is currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems and unmanned aerial vehicles (UAVs) and similar sources is often of low quality or in other ways corrupted so that it is not worth storing or analyzing. In order to make progress in the problem of automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress to address three types of scenes which are typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produce video that is generally not usable to an analyst or automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit unwanted scenarios as described above. Results are shown on representative real-world video data.
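The flagging described above can be approximated very simply: the paper uses motion detection and optical flow, but even per-pixel frame differencing on grayscale frames (sketched below, with illustrative thresholds) captures the "no motion" and "fast camera motion" extremes:

```python
def motion_fraction(prev_frame, frame, diff_thresh=10):
    """Fraction of pixels whose grey level changed by more than diff_thresh
    between two flattened grayscale frames of equal length."""
    changed = sum(1 for a, b in zip(prev_frame, frame) if abs(a - b) > diff_thresh)
    return changed / len(frame)

def tag_segment(fractions, static_max=0.01, chaos_min=0.8):
    """Tag a video segment from its per-frame-pair motion fractions
    (thresholds are illustrative, not the paper's values)."""
    avg = sum(fractions) / len(fractions)
    if avg < static_max:
        return "no-motion"
    if avg > chaos_min:
        return "fast-camera-motion"
    return "usable"
```

A real pipeline would use dense optical flow (e.g. OpenCV) to distinguish camera motion from object motion and add occlusion detection; the tag strings here stand in for the generated meta-data.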

  18. An effective fovea detection and automatic assessment of diabetic maculopathy in color fundus images.

    PubMed

    Medhi, Jyoti Prakash; Dandapat, Samarendra

    2016-07-01

Prolonged diabetes causes severe damage to the vision through leakage of blood and blood constituents over the retina. The effect of the leakage becomes more threatening when these abnormalities involve the macula. This condition is known as diabetic maculopathy, and it leads to blindness if not treated in time. Early detection and proper diagnosis can help in preventing this irreversible damage. To achieve this, a practical approach is to perform retinal screening at regular intervals. But the ratio of ophthalmologists to patients is very small and the process of evaluation is time consuming. Here, automatic methods for analyzing retinal/fundus images prove handy and help ophthalmologists to screen at a faster rate. Motivated by this, an automated method for the detection and analysis of diabetic maculopathy is proposed in this work. The method is implemented in two stages. The first stage involves the preprocessing required for preparing the image for further analysis. During this stage the input image is enhanced and the optic disc is masked to avoid false detection during bright lesion identification. The second stage is maculopathy detection and its analysis. Here, the retinal lesions, including microaneurysms, hemorrhages and exudates, are identified by processing the green and hue plane color images. The macula and fovea locations are determined using the intensity properties of the processed red plane image. Different circular regions are thereafter marked in the neighborhood of the macula, and the presence of lesions in these regions is identified to confirm positive maculopathy. This information is then used to evaluate severity. The principal advantage of the proposed algorithm is its utilization of the relation of the blood vessels to the optic disc and macula, which enhances the detection process. Sequential use of information from the various color planes enables the algorithm to perform better. The method is tested on various publicly available databases.
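The circular-region analysis around the macula can be sketched as zone-based grading: lesions falling in inner zones, closer to the fovea, imply higher severity. The radii and the severity mapping below are hypothetical illustrations, not the paper's values:

```python
def maculopathy_grade(lesions, fovea, radii=(500.0, 1500.0, 3000.0)):
    """Grade severity from the innermost circular zone (radii in μm around the
    fovea; hypothetical values) containing a lesion; 0 = no lesion in any zone."""
    def dist(p):
        return ((p[0] - fovea[0]) ** 2 + (p[1] - fovea[1]) ** 2) ** 0.5

    grade = 0
    for lesion in lesions:
        d = dist(lesion)
        for zone, r in enumerate(radii, start=1):
            if d <= r:
                severity = len(radii) + 1 - zone  # inner zone -> higher severity
                grade = max(grade, severity)
                break
    return grade
```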

  19. Forensic mental health assessment in France: recommendations for quality improvement.

    PubMed

    Combalbert, Nicolas; Andronikof, Anne; Armand, Marine; Robin, Cécile; Bazex, Hélène

    2014-01-01

    The quality of forensic mental health assessment has been a growing concern in various countries on both sides of the Atlantic, but the legal systems are not always comparable and some aspects of forensic assessment are specific to a given country. This paper describes the legal context of forensic psychological assessment in France (i.e. pre-trial investigation phase entrusted to a judge, with mental health assessment performed by preselected professionals called "experts" in French), its advantages and its pitfalls. Forensic psychiatric or psychological assessment is often an essential and decisive element in criminal cases, but since a judiciary scandal which was made public in 2005 (the Outreau case) there has been increasing criticism from the public and the legal profession regarding the reliability of clinical conclusions. Several academic studies and a parliamentary report have highlighted various faulty aspects in both the judiciary process and the mental health assessments. The heterogeneity of expert practices in France appears to be mainly related to a lack of consensus on several core notions such as mental health diagnosis or assessment methods, poor working conditions, lack of specialized training, and insufficient familiarity with the Code of Ethics. In this article we describe and analyze the French practice of forensic psychologists and psychiatrists in criminal cases and propose steps that could be taken to improve its quality, such as setting up specialized training courses, enforcing the Code of Ethics for psychologists, and calling for consensus on diagnostic and assessment methods. PMID:24631526

  1. Automatic Imitation

    ERIC Educational Resources Information Center

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  2. Quality of Feedback Following Performance Assessments: Does Assessor Expertise Matter?

    ERIC Educational Resources Information Center

    Govaerts, Marjan J. B.; van de Wiel, Margje W. J.; van der Vleuten, Cees P. M.

    2013-01-01

    Purpose: This study aims to investigate quality of feedback as offered by supervisor-assessors with varying levels of assessor expertise following assessment of performance in residency training in a health care setting. It furthermore investigates if and how different levels of assessor expertise influence feedback characteristics.…

  3. Quality Control Charts in Large-Scale Assessment Programs

    ERIC Educational Resources Information Center

    Schafer, William D.; Coverdale, Bradley J.; Luxenberg, Harlan; Jin, Ying

    2011-01-01

    There are relatively few examples of quantitative approaches to quality control in educational assessment and accountability contexts. Among the several techniques that are used in other fields, Shewhart charts have been found in a few instances to be applicable in educational settings. This paper describes Shewhart charts and gives examples of how…

  4. Quality Management and Self Assessment Tools for Public Libraries.

    ERIC Educational Resources Information Center

    Evans, Margaret Kinnell

    This paper describes a two-year study by the British Library Research and Innovation Centre that examined the potential of self-assessment for public library services. The approaches that formed the basis for the investigation were the Business Excellence Model, the Quality Framework, and the Democratic Approach. Core values were identified by…

  5. Feedback Effects of Teaching Quality Assessment: Macro and Micro Evidence

    ERIC Educational Resources Information Center

    Bianchini, Stefano

    2014-01-01

    This study investigates the feedback effects of teaching quality assessment. Previous literature looked separately at the evolution of individual and aggregate scores to understand whether instructor and university performance depends on past evaluations. I propose a new quantitative-based methodology, combining statistical distributions and…

  6. Quality Assured Assessment Processes: Evaluating Staff Response to Change

    ERIC Educational Resources Information Center

    Malau-Aduli, Bunmi S.; Zimitat, Craig; Malau-Aduli, Aduli E. O.

    2011-01-01

    Medical education is not exempt from the increasing societal expectations of accountability, as evidenced by an increasing number of litigation cases brought by students who are dissatisfied with their assessment. The time and monetary costs of student appeals make it imperative that medical schools adopt robust quality assured assessment…

  7. Parameters of Higher School Internationalization and Quality Assessment

    ERIC Educational Resources Information Center

    Juknyte-Petreikiene, Inga

    2006-01-01

    The article presents the analysis of higher education internationalization, its conceptions and forms of manifestation. It investigates the ways and means of higher education internationalization, the diversity of higher school internationalization motives, the issues of higher education internationalization quality assessment, presenting an…

  8. Quantitative study designs used in quality improvement and assessment.

    PubMed

    Ormes, W S; Brim, M B; Coggan, P

    2001-01-01

    This article describes common quantitative design techniques that can be used to collect and analyze quality data. An understanding of the differences between these design techniques can help healthcare quality professionals make the most efficient use of their time, energies, and resources. To evaluate the advantages and disadvantages of these various study designs, it is necessary to assess factors that threaten the degree to which quality professionals may infer a cause-and-effect relationship from the data collected. Processes, the conduits of organizational function, often can be assessed by methods that do not take into account confounding and compromising circumstances that affect the outcomes of their analyses. An assumption that the implementation of process improvements may cause real change is incomplete without a consideration of other factors that might also have caused the same result. It is only through the identification, assessment, and exclusion of these alternative factors that administrators and healthcare quality professionals can assess the degree to which true process improvement or compliance has occurred. This article describes the advantages and disadvantages of common quantitative design techniques and reviews the corresponding threats to the interpretability of data obtained from their use. PMID:11378972

  9. Quality Assessment Parameters for Student Support at Higher Education Institutions

    ERIC Educational Resources Information Center

    Sajiene, Laima; Tamuliene, Rasa

    2012-01-01

    The research presented in this article aims to validate quality assessment parameters for student support at higher education institutions. Student support is discussed as the system of services provided by a higher education institution which helps to develop student-centred curriculum and fulfils students' emotional, academic, social needs, and…

  10. Incorporating Contaminant Bioavailability into Sediment Quality Assessment Frameworks

    EPA Science Inventory

    The recently adopted sediment quality assessment framework for evaluating bay and estuarine sediments in the State of California incorporates bulk sediment chemistry as a key line of evidence(LOE) but does not address the bioavailability of measured contaminants. Thus, the chemis...

  11. A research review of quality assessment for software

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored and methods of evaluating those factors are discussed. Quality factors important to software reuse are: correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because documented factors about a software component, such as its efficiency, portability, and development history, constitute a class of factors that are important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. There are a few quantitative measures which have been shown to indicate software quality. However, it is believed that there exist many factors that indicate quality but have not been empirically validated due to their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.

  12. Assessing local resources and culture before instituting quality improvement projects.

    PubMed

    Hawkins, C Matthew

    2014-12-01

    The planning phases of quality improvement projects are commonly overlooked. Disorganized planning and implementation can escalate chaos, intensify resistance to change, and increase the likelihood of failure. Two important steps in the planning phase are (1) assessing local resources available to aid in the quality improvement project and (2) evaluating the culture in which the desired change is to be implemented. Assessing local resources includes identifying and engaging key stakeholders and evaluating if appropriate expertise is available for the scope of the project. This process also involves engaging informaticists and gathering available IT tools to plan and automate (to the extent possible) the data-gathering, analysis, and feedback steps. Culture in a department is influenced by the ability and willingness to manage resistance to change, build consensus, span boundaries between stakeholders, and become a learning organization. Allotting appropriate time to perform these preparatory steps will increase the odds of successfully performing a quality improvement project and implementing change. PMID:25467724

  13. Water Quality Assessment of Ayeyarwady River in Myanmar

    NASA Astrophysics Data System (ADS)

    Thatoe Nwe Win, Thanda; Bogaard, Thom; van de Giesen, Nick

    2015-04-01

    Myanmar's socio-economic activities, urbanisation, industrial operations and agricultural production have increased rapidly in recent years. With this socio-economic development and the impacts of climate change, there is an increasing threat to both the quantity and the quality of water resources. In Myanmar, some drinking water coverage still comes from unimproved sources, including rivers. The Ayeyarwady River is the main river in Myanmar, draining most of the country's area. The use of chemical fertilizer in agriculture, mining activities in the catchment area, wastewater effluents from industries and communities, and other development activities generate pollutants of different kinds, so water quality monitoring is of utmost importance. In Myanmar, there are many government organizations linked to water quality management, and each monitors water quality for its own purposes. The monitoring is haphazard, short term, and based on individual interest and the available equipment; it is not properly coordinated, and a quality assurance programme is not incorporated in most of the work. As a result, comprehensive data on the water quality of rivers in Myanmar are not available. To provide basic information, action is needed at all management levels. The need for comprehensive and accurate assessments of trends in water quality has been recognized, and for such an assessment reliable monitoring data are essential. The objective of our work is to set up a multi-objective surface water quality monitoring programme. The need for a scientifically designed network to monitor Ayeyarwady river water quality is obvious, as only limited and scattered data on water quality are available. However, the set-up should also take into account the current socio-economic situation and should be flexible enough to adjust after the first years of monitoring. Additionally, a state-of-the-art baseline river water quality sampling program is required which

  14. Evaluating the Role of Content in Subjective Video Quality Assessment

    PubMed Central

    Vrgovic, Petar

    2014-01-01

    Video quality as perceived by human observers is the ground truth when Video Quality Assessment (VQA) is in question. It depends on many variables, one of them being the content of the video being evaluated. Despite the evidence that content has an impact on the quality score a sequence receives from human evaluators, currently available VQA databases mostly comprise sequences that fail to take this into account. In this paper, we aim to identify and analyze differences between human cognitive, affective, and conative responses to a set of videos commonly used for VQA and a set of videos specifically chosen to include content that might affect the judgment of evaluators when perceived video quality is in question. Our findings indicate that considerable differences exist between the two sets on selected factors, which leads us to conclude that videos featuring different content from the currently employed ones might be more appropriate for VQA. PMID:24523643

  15. System change: quality assessment and improvement for Medicaid managed care.

    PubMed

    Smith, W R; Cotter, J J; Rossiter, L F

    1996-01-01

    Rising Medicaid health expenditures have hastened the development of State managed care programs. Methods to monitor and improve health care under Medicaid are changing. Under fee-for-service (FFS), the primary concern was to avoid overutilization. Under managed care, it is to avoid underutilization. Quality enhancement thus moves from addressing inefficiency to addressing insufficiency of care. This article presents a case study of Virginia's redesign of Quality Assessment and Improvement (QA/I) for Medicaid, adapting the guidelines of the Quality Assurance Reform Initiative (QARI) of the Health Care Financing Administration (HCFA). The article concludes that redesigns should emphasize Continuous Quality Improvement (CQI) by all providers and of multi-faceted, population-based data.

  16. Reduced-reference image quality assessment using moment method

    NASA Astrophysics Data System (ADS)

    Yang, Diwei; Shen, Yuantong; Shen, Yongluo; Li, Hongwei

    2016-10-01

    Reduced-reference image quality assessment (RR IQA) aims to evaluate the perceptual quality of a distorted image from partial information about the corresponding reference image. In this paper, a novel RR IQA metric is proposed using the moment method. We claim that the first and second moments of the wavelet coefficients of natural images exhibit an approximately regular behavior that is disturbed by different types of distortion, and that this disturbance is relevant to human perception of quality. We measure the difference in these statistical parameters between the reference and distorted images to predict the degradation of visual quality. The introduced IQA metric is straightforward to implement and has relatively low computational complexity. Experimental results on the Laboratory for Image and Video Engineering (LIVE) and Tampere Image Database (TID) image databases indicate that the proposed metric has good predictive performance.
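The moment-comparison idea can be sketched as follows. This is a minimal sketch assuming a one-level Haar decomposition and an absolute difference of moments; the actual metric's wavelet, pooling, and distance measure are not specified in the abstract.

```python
def haar_detail_moments(img):
    """One-level 2D Haar transform of a grayscale patch; returns the
    (mean, variance) of the pooled detail coefficients.
    img is a 2D list whose dimensions are even."""
    h, w = len(img), len(img[0])
    details = []
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            details += [(a - b + c - d) / 4,   # horizontal-detail-like coefficient
                        (a + b - c - d) / 4,   # vertical-detail-like coefficient
                        (a - b - c + d) / 4]   # diagonal-detail-like coefficient
    n = len(details)
    mean = sum(details) / n
    var = sum((x - mean) ** 2 for x in details) / n
    return mean, var

def rr_quality_distance(ref_moments, dist_moments):
    """Distance between reference-side and distorted-side moment parameters;
    a larger value suggests stronger perceptual degradation."""
    return sum(abs(r - d) for r, d in zip(ref_moments, dist_moments))

ref = [[10, 10], [10, 10]]   # flat patch: no detail energy
dst = [[10, 0], [0, 10]]     # checkerboard-like distortion
print(rr_quality_distance(haar_detail_moments(ref), haar_detail_moments(dst)))
```

In the reduced-reference setting, only the two moment values of the reference image need to be transmitted alongside the distorted image, which is what makes the approach cheap.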

  17. Health-related quality of life assessment in clinical practice.

    PubMed

    Meers, C; Singer, M A

    1996-01-01

    Assessment of biochemical responses to therapy is routine in the management of patients with end stage renal disease (ESRD). Assessment of health-related quality of life (HRQOL), however, is less common. Previous research indicates that HRQOL is a meaningful indicator that should be integrated into clinical practice. HRQOL is longitudinally evaluated in in-centre hemodialysis patients using the RAND 36-item Health Survey 1.0. Caregivers incorporate scores from this instrument into their assessment of patient functioning and well-being. HRQOL scores can be utilized to evaluate responses to changes in therapy, and to direct clinical decision-making, adding an important dimension to holistic, quality care for ESRD patients. PMID:8900807

  18. Methodological issues in the quantitative assessment of quality of life.

    PubMed

    Panagiotakos, Demosthenes B; Yfantopoulos, John N

    2011-10-01

    The term quality of life can be identified in Aristotle's classical writings of 330 BC. In his Nicomachean Ethics he recognises the multiple relationships between happiness, well-being, "eudemonia" and quality of life. Historically, the concept of quality of life has undergone various interpretations. It involves personal experience, perceptions and beliefs, and attitudes concerning philosophical, cultural, spiritual, psychological, political, and financial aspects of everyday living. Quality of life has been extensively used both as an outcome and as an explanatory factor in relation to human health, in various clinical trials, epidemiologic studies and health interview surveys. Because of the variations in the definition of quality of life, both in theory and in practice, a wide range of procedures is used to assess quality of life. In this paper several methodological issues regarding the tools used to evaluate quality of life are discussed. In summary, the use of components consisting of a large number of classes, the use of specific weights for each scale component, and the low-to-moderate inter-correlation between the components are evident from simulated and empirical studies.
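The weighting and inter-correlation issues raised above can be made concrete with a small sketch; the component names, scores, and weights below are purely illustrative and do not come from any published instrument.

```python
def pearson(a, b):
    """Pearson correlation between two equal-length score lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def component_intercorrelations(components):
    """Pairwise Pearson correlations between quality-of-life scale components.
    components -- dict mapping component name to a list of respondent scores."""
    names = sorted(components)
    return {(a, b): pearson(components[a], components[b])
            for i, a in enumerate(names) for b in names[i + 1:]}

def weighted_score(scores, weights):
    """Weighted composite quality-of-life score (weights are illustrative)."""
    total = sum(weights.values())
    return sum(scores[k] * w for k, w in weights.items()) / total

# Hypothetical two-component instrument scored for three respondents.
qol = {"physical": [70, 80, 90], "mental": [60, 65, 55]}
print(component_intercorrelations(qol))
print(weighted_score({"physical": 80, "mental": 60}, {"physical": 2, "mental": 1}))
```

Low-to-moderate inter-correlations, as the abstract notes, suggest the components measure related but distinct constructs, which is exactly when the choice of weights matters most.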

  19. Mimics: a symbolic conflict/cooperation simulation program, with embedded protocol recording and automatic psychometric assessment.

    PubMed

    Aidman, Eugene V; Shmelyov, Alexander G

    2002-02-01

    This paper describes an interactive software environment designed as a social interaction simulator with embedded comprehensive recording and flexible assessment facilities. Using schematized visual sketches similar to cross-cultural facial universals (Ekman, 1999), Mimics (Shmelyov & Aidman, 1997) employs a computer-game-like scenario that requires the subject to identify with an avatar and navigate it through a playing field inhabited by hosts who display a range of facial expressions. From these expressions (which are highly consequential), the player has to anticipate the hosts' reactions to the avatar (which may vary from friendly to obstructing or aggressive) and choose between negotiating with a host (by altering the avatar's facial expression), attacking it, or searching for an escape route. Comprehensive recording of player moves and interactions has enabled computation of several fine-grained indices of interactive behavior, such as aggressive response styles, efficiency, and motivation in conflict/cooperation contexts. Initial validation data and potential applications of the method in the assessment of personality and social behavior are discussed.

  20. Defining and assessing health-related quality of life.

    PubMed

    Holcík, J; Koupilová, I

    1999-11-01

    In recent years, there has been an increasing interest in quality of life assessment in clinical research and practice, as well as in public health and policy analysis. Indicators of health-related quality of life are important not only for health professionals and their patients, but also for health administrators and health economists in health care planning and policy making. Most studies on the outcome of treatments and interventions now include some kind of a quality of life measure. This usually takes the form of an assessment of symptoms and physical functioning, measurement of psychological well-being, life satisfaction, or coping and adjustment. Numerous scales of psychological health, physical health status and physical functioning have been developed for use in the assessment of health outcomes, and a wide range of instruments for measurement of health-related quality of life is available. These fall into two broad categories: generic and disease-specific instruments. The selection of an instrument depends upon its measurement properties but also upon the specific context in which the instrument is going to be used. Adequate attention needs to be paid to the translation and validation of instruments for use across countries and cultural contexts.

  1. National Water-Quality Assessment Program: Central Arizona Basins

    USGS Publications Warehouse

    Cordy, Gail E.

    1994-01-01

    In 1991, the U.S. Geological Survey (USGS) began to implement a full-scale National Water-Quality Assessment (NAWQA) program. The long-term goals of the NAWQA program are to describe the status and trends in the quality of a large, representative part of the Nation's surface-water and ground-water resources and to provide a sound, scientific understanding of the primary natural and human factors affecting the quality of these resources. In meeting these goals, the program will produce a wealth of water-quality information that will be useful to policymakers and managers at the National, State, and local levels. Studies of 60 hydrologic systems that include parts of most major river basins and aquifer systems (study-unit investigations) are the building blocks of the national assessment. The 60 study units range in size from 1,000 to about 60,000 mi² and represent 60 to 70 percent of the Nation's water use and population served by public water supplies. Twenty study-unit investigations were started in 1991, 20 additional studies started in 1994, and 20 more are planned to start in 1997. The Central Arizona Basins study unit began assessment activities in 1994.

  2. Quality assessment of systematic reviews on alveolar socket preservation.

    PubMed

    Moraschini, V; Barboza, E Dos S P

    2016-09-01

    The aim of this overview was to evaluate and compare the quality of systematic reviews, with or without meta-analysis, that have evaluated studies on techniques or biomaterials used for the preservation of alveolar sockets post tooth extraction in humans. An electronic search was conducted without date restrictions using the Medline/PubMed, Cochrane Library, and Web of Science databases up to April 2015. Eligibility criteria included systematic reviews, with or without meta-analysis, focused on the preservation of post-extraction alveolar sockets in humans. Two independent authors assessed the quality of the included reviews using AMSTAR and the checklist proposed by Glenny et al. in 2003. After the selection process, 12 systematic reviews were included. None of these reviews obtained the maximum score using the quality assessment tools implemented, and the results of the analyses were highly variable. A significant statistical correlation was observed between the scores of the two checklists. A wide structural and methodological variability was observed between the systematic reviews published on the preservation of alveolar sockets post tooth extraction. None of the reviews evaluated obtained the maximum score using the two quality assessment tools implemented.

  3. Assessing the Quality of Quality Assessment: The Inspection of Teaching and Learning in British Universities.

    ERIC Educational Resources Information Center

    Underwood, Simeon

    2000-01-01

    Characterizes Subject Review, a new scrutiny process for British higher education, evaluating its effectiveness against the purposes it has set itself in the area of funding policy, enhancement of provision, and public information. The paper offers a case study of factors which come into account when systems for measuring the quality of higher…

  4. National Water-Quality Assessment program: The Trinity River Basin

    USGS Publications Warehouse

    Land, Larry F.

    1991-01-01

    In 1991, the U.S. Geological Survey (USGS) began to implement a full-scale National Water-Quality Assessment (NAWQA) program. The long-term goals of the NAWQA program are to describe the status and trends in the quality of a large, representative part of the Nation's surface- and ground-water resources and to provide a sound, scientific understanding of the primary natural and human factors affecting the quality of these resources. In meeting these goals, the program will produce a wealth of water-quality information that will be useful to policy makers and managers at the national, State, and local levels. A major design feature of the NAWQA program will enable water-quality information at different areal scales to be integrated. A major component of the program is study-unit investigations, which comprise the principal building blocks of the program on which national-level assessment activities will be based. The 60 study-unit investigations that make up the program are hydrologic systems that include parts of most major river basins and aquifer systems. These study units cover areas of 1,200 to more than 65,000 square miles and incorporate about 60 to 70 percent of the Nation's water use and population served by public water supply. In 1991, the Trinity River basin study was among the first 20 NAWQA study units selected for study under the full-scale implementation plan.

  5. National Water-Quality Assessment Program: The Sacramento River Basin

    USGS Publications Warehouse

    Domagalski, Joseph L.; Brown, Larry R.

    1994-01-01

    In 1991, the U.S. Geological Survey (USGS) began to implement a full-scale National Water-Quality Assessment (NAWQA) program. The long-term goals of the NAWQA program are to describe the status of and trends in the quality of a large, representative part of the Nation's surface- and ground-water resources and to identify the major natural and human factors that affect the quality of those resources. In addressing these goals, the program will provide a wealth of water-quality information that will be useful to policy makers and managers at the national, State, and local levels. A major asset of the NAWQA program is that it will allow for the integration of water-quality information collected at several scales. A major component of the program is the study-unit investigation, the foundation of national-level assessment. The 60 study units of the NAWQA program are hydrologic systems that include parts of most major river basins and aquifer systems of the conterminous United States. These study units cover areas of 1,000 to more than 60,000 square miles and represent 60 to 70 percent of the Nation's water use and population served by public water supplies. Investigations of the first 20 study units began in 1991. In 1994, the Sacramento River Basin was among the second set of 20 NAWQA study units selected for investigation.

  6. Web Service for Positional Quality Assessment: the Wps Tier

    NASA Astrophysics Data System (ADS)

    Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.

    2015-08-01

    In the field of spatial data, more and more information becomes available every day, yet we still have little or very little information about its quality. We consider that automating spatial data quality assessment is a true need for the geomatics sector, and that automation is possible by means of web processing services (WPS) and the application of specific assessment procedures. In this paper we propose and develop a WPS tier centered on the automation of positional quality assessment. An experiment using the NSSDA positional accuracy method is presented. The experiment involves the client uploading two datasets (reference and evaluation data). The processing determines homologous pairs of points (by distance) and calculates the positional accuracy value under the NSSDA standard. The process generates a small report that is sent to the client. From our experiment, we reached some conclusions on the advantages and disadvantages of WPSs when applied to the automation of spatial data accuracy assessments.
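The NSSDA computation at the heart of the experiment can be sketched as follows. The nearest-neighbour pairing here is a simplification of the homologous-pair matching described above; the 1.7308 factor is the FGDC NSSDA multiplier for horizontal accuracy at 95% confidence (applicable when the x and y error components are of similar magnitude).

```python
import math

def nssda_horizontal_accuracy(reference, evaluation):
    """NSSDA horizontal positional accuracy at the 95% confidence level.

    reference, evaluation -- lists of (x, y) points in the same CRS.
    Each evaluation point is matched to its nearest reference point
    (a simplified stand-in for homologous-pair selection), the radial
    RMSE is computed, and Accuracy_r = 1.7308 * RMSE_r per the FGDC
    NSSDA standard.
    """
    sq_errors = []
    for ex, ey in evaluation:
        rx, ry = min(reference, key=lambda p: math.hypot(p[0] - ex, p[1] - ey))
        sq_errors.append((ex - rx) ** 2 + (ey - ry) ** 2)
    rmse_r = math.sqrt(sum(sq_errors) / len(sq_errors))
    return 1.7308 * rmse_r

# One check point displaced by a 3-4-5 triangle: radial error 5 units.
print(nssda_horizontal_accuracy([(0, 0)], [(3, 4)]))  # → 8.654
```

In the WPS setting, this function would be the core of the process body: the two uploaded datasets map to `reference` and `evaluation`, and the returned value is written into the report sent back to the client.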

  7. Overview of the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Leahy, P.P.; Thompson, T.H.

    1994-01-01

    The Nation's water resources are the basis for life and our economic vitality. These resources support a complex web of human activities and fishery and wildlife needs that depend upon clean water. Demands for good-quality water for drinking, recreation, farming, and industry are rising, and as a result, the American public is concerned about the condition and sustainability of our water resources. The American public is asking: Is it safe to swim in and drink water from our rivers or lakes? Can we eat the fish that come from them? Is our ground water polluted? Is water quality degrading with time, and if so, why? Has all the money we've spent to clean up our waters done any good? The U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program was designed to provide information that will help answer these questions. NAWQA is designed to assess historical, current, and future water-quality conditions in representative river basins and aquifers nationwide. One of the primary objectives of the program is to describe relations between natural factors, human activities, and water-quality conditions and to define those factors that most affect water quality in different parts of the Nation. The linkage of water quality to environmental processes is of fundamental importance to water-resource managers, planners, and policy makers. It provides a strong and unbiased basis for better decision-making by those responsible for decisions that affect our water resources, including the United States Congress, Federal, State, and local agencies, environmental groups, and industry. Information from the NAWQA Program also will be useful for guiding research, monitoring, and regulatory activities in cost-effective ways.

  8. Quality assessment of altimeter data through tide gauge comparisons

    NASA Astrophysics Data System (ADS)

    Prandi, Pierre; Valladeau, Guillaume; Ablain, Michael; Picot, Nicolas; Desjonquères, Jean-Damien

    2015-04-01

    Since the first altimeter missions and the improvements in the accuracy of sea surface height measurements from 1992 onwards, the importance of global quality assessment of altimeter data has been increasing. Global CalVal studies usually assess this performance by analyzing the internal consistency of each mission and the cross-comparison between all missions. As a complementary approach, tide gauge measurements are used as an external and independent reference to enable further quality assessment of the altimeter sea level and to provide a better estimate of the performance of the multiple altimeters. Both altimeter and tide gauge observations, when dedicated to climate applications, require rigorous quality control. The tide gauge time series considered in this study derive from several networks (GLOSS/CLIVAR, PSMSL, REFMAR) and provide sea-level heights with a physical content comparable to altimetry sea level estimates. Concerning altimeter data, long-term drift can be evaluated thanks to a widespread network of tide gauges: in-situ measurements are compared with altimeter sea level for the main altimeter missions. If the altimeter time series are long enough, tide gauge data provide a relevant estimate of the global Mean Sea Level (MSL) drift for all the missions. Moreover, comparisons with sea level products merging all the altimeter missions together have also been performed using several datasets, among which the AVISO delayed-time Sea Level Anomaly grids.

  9. Assessment of features for automatic CTG analysis based on expert annotation.

    PubMed

    Chudácek, Vacláv; Spilka, Jirí; Lhotská, Lenka; Janku, Petr; Koucký, Michal; Huptych, Michal; Bursa, Miroslav

    2011-01-01

    Cardiotocography (CTG), the monitoring of fetal heart rate (FHR) and uterine contractions (TOCO), has been used routinely by obstetricians since the 1960s to detect fetal hypoxia. The evaluation of the FHR in clinical settings is based on an evaluation of macroscopic morphological features and so far has managed to avoid adopting any achievements from the HRV research field. In this work, most of the features ever used for FHR characterization, including FIGO, HRV, nonlinear, wavelet, and time- and frequency-domain features, are investigated, and the features are assessed based on their statistical significance in the task of distinguishing the FHR into three FIGO classes. Annotations derived from a panel of experts, instead of the commonly utilized pH values, were used for evaluation of the features on a large data set (552 records). We conclude the paper by presenting the best uncorrelated features and their individual rank of importance according to a meta-analysis of three different ranking methods. The numbers of accelerations and decelerations, the interval index, as well as Lempel-Ziv complexity and Higuchi's fractal dimension are among the top five features. PMID:22255719
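
    Among the top-ranked features is Lempel-Ziv complexity. A minimal sketch of one common variant (LZ78-style phrase counting on a median-binarized trace; the binarization rule and function names are illustrative assumptions, not necessarily the exact variant used in the paper):

```python
import numpy as np

def lz_complexity(seq):
    """LZ78-style complexity: number of new phrases found while
    parsing the sequence (each phrase = shortest unseen prefix)."""
    phrases, phrase, c = set(), "", 0
    for symbol in seq:
        phrase += symbol
        if phrase not in phrases:
            phrases.add(phrase)
            c += 1
            phrase = ""
    return c + (1 if phrase else 0)  # count a trailing partial phrase

def fhr_lz(fhr_bpm):
    """Binarize a heart-rate trace around its median, then
    compute its Lempel-Ziv complexity."""
    x = np.asarray(fhr_bpm, float)
    bits = "".join('1' if v > np.median(x) else '0' for v in x)
    return lz_complexity(bits)

# A strictly periodic trace parses into fewer phrases than an irregular one.
print(lz_complexity("0101010101010101"), lz_complexity("0110100110010111"))
```

    Low complexity flags repetitive, predictable traces; higher values indicate more irregular dynamics, which is why the measure discriminates between FHR classes.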

  10. Data Quality Verification at STScI - Automated Assessment and Your Data

    NASA Astrophysics Data System (ADS)

    Dempsey, R.; Swade, D.; Scott, J.; Hamilton, F.; Holm, A.

    1996-12-01

    As satellite-based observatories improve their ability to deliver wider varieties and more complex types of scientific data, so too does the process of analyzing and reducing these data grow more complex. It becomes correspondingly imperative that Guest Observers and Archival Researchers have access to an accurate, consistent, and easily understandable summary of the quality of their data. Previously, at the STScI, an astronomer would display and examine the quality and scientific usefulness of every single observation obtained with HST. Recently, this process has undergone a major reorganization at the Institute. A major part of the new process is that the majority of data are assessed automatically with little or no human intervention. As part of routine processing in the OSS--PODPS Unified System (OPUS), the Observatory Monitoring System (OMS) observation logs, the science processing trailer file (also known as the TRL file), and the science data headers are inspected by an automated tool, AUTO_DQ. AUTO_DQ then determines whether any anomalous events occurred during the observation, or during processing and calibration of the data, that affect the procedural quality of the data. The results are placed directly into the Procedural Data Quality (PDQ) file as a string of predefined data-quality keywords and comments. These in turn are used by the Contact Scientist (CS) to check the scientific usefulness of the observations. In this manner, the telemetry stream is checked for known problems such as losses of lock, re-centerings, or degraded guiding, while missing data or calibration errors are also easily flagged. If the problem is serious, the data are queued for manual inspection by an astronomer. The success of every target acquisition is verified manually. If serious failures are confirmed, the PI and the scheduling staff are notified so that options for rescheduling the observations can be explored.

  11. iDensity: an automatic Gabor filter-based algorithm for breast density assessment

    NASA Astrophysics Data System (ADS)

    Gamdonkar, Ziba; Tay, Kevin; Ryder, Will; Brennan, Patrick C.; Mello-Thoms, Claudia

    2015-03-01

    Although many semi-automated and automated algorithms for breast density assessment have recently been proposed, none has been widely accepted. In this study a novel automated algorithm, named iDensity, inspired by the human visual system, is proposed for classifying mammograms into four breast density categories corresponding to the Breast Imaging Reporting and Data System (BI-RADS). For each BI-RADS category, 80 cases were taken from the normal volumes of the Digital Database for Screening Mammography (DDSM). For each case only the left medio-lateral oblique view was utilized. After image calibration using the tables provided for each scanner in the DDSM, the pectoral muscle and background were removed. Images were filtered by a median filter and downsampled. Images were then filtered by a filter bank consisting of Gabor filters in six orientations and three scales, as well as a Gaussian filter. Three gray-level histogram-based features and three second-order statistics features were extracted from each filtered image. Using the extracted features, mammograms were initially separated into two groups, low or high density; in a second stage, the low-density group was subdivided into BI-RADS I or II, and the high-density group into BI-RADS III or IV. The algorithm achieved a sensitivity of 95% and specificity of 94% in the first stage, a sensitivity of 89% and specificity of 95% when classifying BI-RADS I and II cases, and a sensitivity of 88% and specificity of 91% when classifying BI-RADS III and IV.
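
    The filter-bank stage described above can be sketched with hand-built Gabor kernels and first-order statistics of the responses. The kernel size, frequencies, and sigma-frequency relation below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    """Real Gabor kernel: Gaussian envelope times a cosine carrier
    oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def filter_bank_features(img, freqs=(0.1, 0.2, 0.4), n_orient=6):
    """Mean and std of each filtered image (first-order features only)."""
    feats = []
    for freq in freqs:
        for k in range(n_orient):
            kern = gabor_kernel(15, sigma=0.56 / freq, freq=freq,
                                theta=k * np.pi / n_orient)
            # FFT-based circular convolution with the zero-padded kernel
            resp = np.real(np.fft.ifft2(np.fft.fft2(img) *
                                        np.fft.fft2(kern, img.shape)))
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
f = filter_bank_features(img)
print(f.shape)  # 3 scales x 6 orientations x 2 stats = (36,)
```

    The actual method adds a Gaussian filter and second-order (co-occurrence) statistics on top of such responses before the two-stage classification.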

  12. Assessment of quality of life in patients with knee osteoarthritis

    PubMed Central

    Kawano, Marcio Massao; Araújo, Ivan Luis Andrade; Castro, Martha Cavalcante; Matos, Marcos Almeida

    2015-01-01

    ABSTRACT OBJECTIVE: To assess the quality of life of knee osteoarthritis patients using the SF-36 questionnaire. METHODS: Cross-sectional study with 93 knee osteoarthritis patients. The sample was categorized according to Ahlbäck score. All individuals were interviewed with the SF-36 questionnaire. RESULTS: The main finding of the study is related to the association of education level with functional capacity, functional limitation, and pain. Patients with a higher education level had better functional capacity when compared to patients with a basic level of education. CONCLUSION: Individuals with osteoarthritis have a low perception of their quality of life in functional capacity, functional limitation, and pain. There is a strong association between low level of education and low perception of quality of life. Level of Evidence IV, Clinical Case Series. PMID:27057143

  13. Assessing the Quality of PhD Dissertations. A Survey of External Committee Members

    ERIC Educational Resources Information Center

    Kyvik, Svein; Thune, Taran

    2015-01-01

    This article reports on a study of the quality assessment of doctoral dissertations, and asks whether examiner characteristics influence assessment of research quality in PhD dissertations. Utilising a multi-dimensional concept of quality of PhD dissertations, we look at differences in assessment of research quality, and particularly test whether…

  14. National Water-Quality Assessment (NAWQA) Area-Characterization Toolbox

    USGS Publications Warehouse

    Price, Curtis

    2010-01-01

    This is release 1.0 of the National Water-Quality Assessment (NAWQA) Area-Characterization Toolbox. These tools are designed to be accessed using ArcGIS Desktop software (versions 9.3 and 9.3.1). The toolbox is composed of a collection of custom tools that implement geographic information system (GIS) techniques used by the NAWQA Program to characterize aquifer areas, drainage basins, and sampled wells.

  15. Automatic Spiral Analysis for Objective Assessment of Motor Symptoms in Parkinson’s Disease

    PubMed Central

    Memedi, Mevludin; Sadikov, Aleksander; Groznik, Vida; Žabkar, Jure; Možina, Martin; Bergquist, Filip; Johansson, Anders; Haubenberger, Dietrich; Nyholm, Dag

    2015-01-01

    A challenge for the clinical management of advanced Parkinson’s disease (PD) patients is the emergence of fluctuations in motor performance, which represents a significant source of disability during activities of daily living of the patients. There is a lack of objective measurement of treatment effects for in-clinic and at-home use that can provide an overview of the treatment response. The objective of this paper was to develop a method for objective quantification of advanced PD motor symptoms related to off episodes and peak dose dyskinesia, using spiral data gathered by a touch screen telemetry device. More specifically, the aim was to objectively characterize motor symptoms (bradykinesia and dyskinesia), to help in automating the process of visual interpretation of movement anomalies in spirals as rated by movement disorder specialists. Digitized upper limb movement data of 65 advanced PD patients and 10 healthy (HE) subjects were recorded as they performed spiral drawing tasks on a touch screen device in their home environment settings. Several spatiotemporal features were extracted from the time series and used as inputs to machine learning methods. The methods were validated against ratings on animated spirals scored by four movement disorder specialists who visually assessed a set of kinematic features and the motor symptom. The ability of the method to discriminate between PD patients and HE subjects and the test-retest reliability of the computed scores were also evaluated. Computed scores correlated well with mean visual ratings of individual kinematic features. The best performing classifier (Multilayer Perceptron) classified the motor symptom (bradykinesia or dyskinesia) with an accuracy of 84% and area under the receiver operating characteristics curve of 0.86 in relation to visual classifications of the raters. In addition, the method provided high discriminating power when distinguishing between PD patients and HE subjects as well as had good

  16. Automatic Spiral Analysis for Objective Assessment of Motor Symptoms in Parkinson's Disease.

    PubMed

    Memedi, Mevludin; Sadikov, Aleksander; Groznik, Vida; Žabkar, Jure; Možina, Martin; Bergquist, Filip; Johansson, Anders; Haubenberger, Dietrich; Nyholm, Dag

    2015-09-17

    A challenge for the clinical management of advanced Parkinson's disease (PD) patients is the emergence of fluctuations in motor performance, which represents a significant source of disability during activities of daily living of the patients. There is a lack of objective measurement of treatment effects for in-clinic and at-home use that can provide an overview of the treatment response. The objective of this paper was to develop a method for objective quantification of advanced PD motor symptoms related to off episodes and peak dose dyskinesia, using spiral data gathered by a touch screen telemetry device. More specifically, the aim was to objectively characterize motor symptoms (bradykinesia and dyskinesia), to help in automating the process of visual interpretation of movement anomalies in spirals as rated by movement disorder specialists. Digitized upper limb movement data of 65 advanced PD patients and 10 healthy (HE) subjects were recorded as they performed spiral drawing tasks on a touch screen device in their home environment settings. Several spatiotemporal features were extracted from the time series and used as inputs to machine learning methods. The methods were validated against ratings on animated spirals scored by four movement disorder specialists who visually assessed a set of kinematic features and the motor symptom. The ability of the method to discriminate between PD patients and HE subjects and the test-retest reliability of the computed scores were also evaluated. Computed scores correlated well with mean visual ratings of individual kinematic features. The best performing classifier (Multilayer Perceptron) classified the motor symptom (bradykinesia or dyskinesia) with an accuracy of 84% and area under the receiver operating characteristics curve of 0.86 in relation to visual classifications of the raters. In addition, the method provided high discriminating power when distinguishing between PD patients and HE subjects as well as had good
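
    The spatiotemporal feature extraction described above can be illustrated with a few simple quantities computed from digitized (x, y, t) samples. The specific features below (mean drawing speed, speed variability, residual from an Archimedean spiral fit) are illustrative assumptions, not the paper's feature set:

```python
import numpy as np

def spiral_features(x, y, t):
    """Illustrative spatiotemporal features from a digitized spiral:
    mean drawing speed, speed variability (coefficient of variation),
    and the spread of the radial error from an Archimedean fit."""
    x, y, t = (np.asarray(a, float) for a in (x, y, t))
    dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
    speed = np.hypot(dx, dy) / dt
    r = np.hypot(x, y)
    theta = np.unwrap(np.arctan2(y, x))      # unrolled polar angle
    coeffs = np.polyfit(theta, r, 1)         # Archimedean r ≈ a + b*theta
    resid = r - np.polyval(coeffs, theta)
    return {"mean_speed": speed.mean(),
            "speed_cv": speed.std() / speed.mean(),
            "radial_resid_sd": resid.std()}

# Synthetic smooth spiral drawn at a constant angular rate.
t = np.linspace(0.01, 10, 500)
theta = np.pi * t
r = 2.0 * theta
feats = spiral_features(r * np.cos(theta), r * np.sin(theta), t)
print(round(feats["radial_resid_sd"], 3))  # near zero for an ideal spiral
```

    Tremor, bradykinesia, or dyskinesia would show up as elevated speed variability and radial residuals relative to such an ideal trace.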

  17. Quality control and quality assurance plan for bridge channel-stability assessments in Massachusetts

    USGS Publications Warehouse

    Parker, Gene W.; Pinson, Harlow

    1993-01-01

    A quality control and quality assurance plan has been implemented as part of the Massachusetts bridge scour and channel-stability assessment program. This program is being conducted by the U.S. Geological Survey, Massachusetts-Rhode Island District, in cooperation with the Massachusetts Highway Department. Project personnel training, data-integrity verification, and new data-management technologies are being utilized in the channel-stability assessment process to improve current data-collection and management techniques. An automated data-collection procedure has been implemented to standardize channel-stability assessments on a regular basis within the State. An object-oriented data structure and new image management tools are used to produce a data base enabling management of multiple data object classes. Data will be reviewed by assessors and data base managers before being merged into a master bridge-scour data base, which includes automated data-verification routines.

  18. Assessment of ecological quality of coastal lagoons with a combination of phytobenthic and water quality indices.

    PubMed

    Christia, Chrysoula; Giordani, Gianmarco; Papastergiadou, Eva

    2014-09-15

    Coastal lagoons are ecotones between continents and the sea. Coastal lagoons of Western Greece, subjected to different human pressures, were classified into four different types based on their hydromorphological characteristics and monitored over a three year period for their biotic and abiotic features. Six ecological indices based on water quality parameters (TSI-Chl-a, TSI-TP, TRIX), benthic macrophytes (E-MaQI, EEI-c) and an integrated index TWQI, were applied to assess the ecological status of studied lagoons under real conditions. The trophic status ranged from oligotrophic to hypertrophic according to the index applied. The ecological quality of transitional water ecosystems can be better assessed by using indices based on benthic macrophytes as changes in abundance and diversity of sensitive and tolerant species are the first evidence of incoming eutrophication. The multi-parametric index TWQI can be considered appropriate for the ecological assessment of these ecosystems due to its robustness and the simple application procedure.

  19. Positional Accuracy Assessment of the Openstreetmap Buildings Layer Through Automatic Homologous Pairs Detection: the Method and a Case Study

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.; Zamboni, G.

    2016-06-01

    OpenStreetMap (OSM) is currently the largest openly licensed collection of geospatial data. As OSM is increasingly exploited in a variety of applications, research has placed great attention on the assessment of its quality. This work focuses on assessing the quality of OSM buildings. While most of the studies available in the literature are limited to the evaluation of OSM building completeness, this work proposes an original approach to assess the positional accuracy of OSM buildings based on comparison with a reference dataset. The comparison relies on a quasi-automated detection of homologous pairs on the two datasets. Based on the homologous pairs found, warping algorithms such as affine transformations and multi-resolution splines can be applied to the OSM buildings to generate a new version having an optimal local match to the reference layer. A quality assessment of the OSM buildings of Milan Municipality (Northern Italy), covering an area of about 180 km2, is then presented. After computing some measures of completeness, the algorithm based on homologous points is run using the building layer of the official vector cartography of Milan Municipality as the reference dataset. Approximately 100,000 homologous points are found, which show a systematic translation of about 0.4 m in both the X and Y directions and a mean distance of about 0.8 m between the datasets. Besides its efficiency and high degree of automation, the algorithm generates a warped version of the OSM buildings which, having by definition a closer match to the reference buildings, can eventually be integrated in the OSM database.
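
    Once homologous pairs are available, the systematic translation and mean displacement reported above follow from simple vector statistics on the per-pair offsets. A minimal sketch on synthetic coordinates (the 0.4 m shift is injected to mirror the reported result; all names are illustrative):

```python
import numpy as np

def translation_and_rmse(osm_pts, ref_pts):
    """Systematic shift (mean offset vector), mean distance, and RMSE
    between homologous point pairs from two layers (coords in metres)."""
    osm, ref = np.asarray(osm_pts, float), np.asarray(ref_pts, float)
    d = ref - osm                        # per-pair displacement vectors
    shift = d.mean(axis=0)               # systematic translation (dx, dy)
    dist = np.hypot(d[:, 0], d[:, 1])    # per-pair distances
    return shift, dist.mean(), np.sqrt((dist**2).mean())

rng = np.random.default_rng(1)
ref = rng.uniform(0, 1000, (1000, 2))                       # reference layer
osm = ref - np.array([0.4, 0.4]) + rng.normal(0, 0.3, (1000, 2))
shift, mean_d, rmse = translation_and_rmse(osm, ref)
print(np.round(shift, 1))   # recovers the ~0.4 m systematic shift
```

    Subtracting the estimated shift before warping separates the systematic datum offset from the local, spatially varying distortions handled by the spline step.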

  20. Automatic exposure control in multichannel CT with tube current modulation to achieve a constant level of image noise: Experimental assessment on pediatric phantoms

    SciTech Connect

    Brisse, Herve J.; Madec, Ludovic; Gaboriaud, Genevieve; Lemoine, Thomas; Savignoni, Alexia; Neuenschwander, Sylvia; Aubert, Bernard; Rosenwald, Jean-Claude

    2007-07-15

    Automatic exposure control (AEC) systems have been developed by computed tomography (CT) manufacturers to improve the consistency of image quality among patients and to control the absorbed dose. Since a multichannel helical CT scan may easily increase individual radiation doses, this technical improvement is of special interest in children, who are particularly sensitive to ionizing radiation, but little information is currently available regarding the precise performance of these systems on small patients. Our objective was to assess an AEC system on pediatric dose phantoms by studying the impact of phantom transmission and acquisition parameters on tube current modulation, on the resulting absorbed dose, and on image quality. We used a four-channel CT scanner working with a patient-size and z-axis-based AEC system designed to achieve a constant noise within the reconstructed images by automatically adjusting the tube current during acquisition. The study was performed with six cylindrical poly(methylmethacrylate) (PMMA) phantoms of variable diameters (10-32 cm) and one pediatric anthropomorphic phantom equivalent to a 5-year-old child. After a single scan projection radiograph (SPR), helical acquisitions were performed and images were reconstructed with a standard convolution kernel. Tube current modulation was studied with variable SPR settings (tube angle, mA, kVp) and helical parameters (6-20 HU noise indices, 80-140 kVp tube potential, 0.8-4 s tube rotation time, 5-20 mm x-ray beam thickness, 0.75-1.5 pitch, 1.25-10 mm image thickness, variable acquisition and reconstruction fields of view). CT dose indices (CTDIvol) were measured, and the image quality criterion used was the standard deviation of the CT number measured in reconstructed images of PMMA material. Observed tube current levels were compared to the expected values from the Brooks and Di Chiro model [R.A. Brooks and G. Di Chiro, Med. Phys. 3, 237-240 (1976)] and calculated values (product of a reference value
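
    The constant-noise rationale behind such AEC systems can be sketched from first principles: image noise scales as the inverse square root of detected photons, and detected photons fall off exponentially with the attenuating path length, so the tube current-time product must grow exponentially with object size to hold noise constant. The attenuation coefficient and function below are illustrative assumptions, not the scanner's actual model:

```python
import numpy as np

# Rough effective attenuation coefficient of water at ~120 kVp (assumption).
MU_WATER = 0.18  # 1/cm

def required_mAs(mAs_ref, d_ref_cm, d_cm, mu=MU_WATER):
    """mAs needed to match the noise obtained at a reference water-
    equivalent diameter: noise ~ 1/sqrt(mAs * exp(-mu*d)), so holding
    noise constant requires mAs proportional to exp(mu*d)."""
    return mAs_ref * np.exp(mu * (d_cm - d_ref_cm))

# Going from a 10 cm to a 20 cm phantom at mu = 0.18 /cm multiplies the
# required mAs by e^1.8, i.e. about a factor of 6.
print(round(required_mAs(100, 10, 20), 1))
```

    This exponential dependence is why AEC matters so much for pediatric patients: small drops in diameter permit large dose reductions at identical noise.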

  1. Subjective video quality assessment methods for recognition tasks

    NASA Astrophysics Data System (ADS)

    Ford, Carolyn G.; McFarland, Mark A.; Stange, Irena W.

    2009-02-01

    To develop accurate objective measurements (models) for video quality assessment, subjective data is traditionally collected via human subject testing. The ITU has a series of Recommendations that address methodology for performing subjective tests in a rigorous manner. These methods are targeted at the entertainment application of video. However, video is often used for many applications outside of the entertainment sector, and generally this class of video is used to perform a specific task. Examples of these applications include security, public safety, remote command and control, and sign language. For these applications, video is used to recognize objects, people, or events. The existing methods, developed to assess a person's perceptual opinion of quality, are not appropriate for task-based video. The Institute for Telecommunication Sciences, under a program from the Department of Homeland Security and the National Institute of Standards and Technology's Office of Law Enforcement, has developed a subjective test method to determine a person's ability to perform recognition tasks using video, thereby rating the video according to its usefulness within the application. This new method is presented, along with a discussion of two examples of subjective tests using this method.

  2. A novel, fuzzy-based air quality index (FAQI) for air quality assessment

    NASA Astrophysics Data System (ADS)

    Sowlat, Mohammad Hossein; Gharibi, Hamed; Yunesian, Masud; Tayefeh Mahmoudi, Maryam; Lotfi, Saeedeh

    2011-04-01

    The ever-increasing level of air pollution in most areas of the world has led to the development of a variety of air quality indices for estimating the health effects of air pollution, though these indices have their own limitations, such as high levels of subjectivity. The present study, therefore, aimed at developing a novel, fuzzy-based air quality index (FAQI) to handle such limitations. The index is based on fuzzy logic, which is considered one of the most common computational methods of artificial intelligence. In addition to the criteria air pollutants (i.e., CO, SO2, PM10, O3, NO2), benzene, toluene, ethylbenzene, xylene, and 1,3-butadiene were also taken into account in the proposed index because of their considerable health effects. Different weighting factors were then assigned to each pollutant according to its priority. Trapezoidal membership functions were employed for the classifications, and the final index consisted of 72 inference rules. To assess the performance of the index, a case study was carried out using air quality data from five different sampling stations in Tehran, Iran, from January 2008 to December 2009, the results of which were then compared to the results obtained from the USEPA air quality index (AQI). According to the results of the present study, the fuzzy-based air quality index is a comprehensive tool for the classification of air quality and tends to produce accurate results. Therefore, it can be considered useful, reliable, and suitable for consideration by local authorities in air quality assessment and management schemes.
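
    The trapezoidal membership functions mentioned above are easy to sketch; the PM10 breakpoints below are purely illustrative and not the FAQI's actual fuzzy sets:

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership function: 0 at or below a, linear rise on
    (a, b), full membership 1 on [b, c], linear fall on (c, d), 0 at or
    above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical PM10 fuzzy sets (ug/m3) -- breakpoints are illustrative only.
pm10_sets = {"good": (-1, 0, 40, 60),
             "moderate": (40, 60, 100, 120),
             "unhealthy": (100, 120, 400, 500)}

x = 50.0
memberships = {k: trapmf(x, *v) for k, v in pm10_sets.items()}
print(memberships["good"], memberships["moderate"])  # 0.5 0.5
```

    Overlapping sets give a reading partial membership in adjacent classes (here 0.5/0.5), which the inference rules then combine across pollutants into the final index.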

  3. A cloud model-based approach for water quality assessment.

    PubMed

    Wang, Dong; Liu, Dengfeng; Ding, Hao; Singh, Vijay P; Wang, Yuankun; Zeng, Xiankui; Wu, Jichun; Wang, Lachun

    2016-07-01

    Water quality assessment entails essentially a multi-criteria decision-making process accounting for qualitative and quantitative uncertainties and their transformation. Considering uncertainties of randomness and fuzziness in water quality evaluation, a cloud model-based assessment approach is proposed. The cognitive cloud model, derived from information science, can realize the transformation between qualitative concept and quantitative data, based on probability and statistics and fuzzy set theory. When applying the cloud model to practical assessment, three technical issues are considered before the development of a complete cloud model-based approach: (1) bilateral boundary formula with nonlinear boundary regression for parameter estimation, (2) hybrid entropy-analytic hierarchy process technique for calculation of weights, and (3) mean of repeated simulations for determining the degree of final certainty. The cloud model-based approach is tested by evaluating the eutrophication status of 12 typical lakes and reservoirs in China and comparing with other four methods, which are Scoring Index method, Variable Fuzzy Sets method, Hybrid Fuzzy and Optimal model, and Neural Networks method. The proposed approach yields information concerning membership for each water quality status which leads to the final status. The approach is found to be representative of other alternative methods and accurate. PMID:26995351
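
    The qualitative-to-quantitative transformation at the heart of the cloud model is usually realized with a forward normal cloud generator driven by three parameters: expectation (Ex), entropy (En), and hyper-entropy (He). A minimal sketch (parameter values and names are illustrative):

```python
import numpy as np

def forward_normal_cloud(Ex, En, He, n, seed=0):
    """Forward normal cloud generator: draw n cloud droplets (x, mu)
    for the qualitative concept defined by expectation Ex, entropy En,
    and hyper-entropy He."""
    rng = np.random.default_rng(seed)
    En_prime = rng.normal(En, He, n)          # second-order randomness
    x = rng.normal(Ex, np.abs(En_prime))      # droplet positions
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))  # certainty degrees
    return x, mu

# E.g. the concept "moderately eutrophic" centred on an indicator value of 50.
x, mu = forward_normal_cloud(Ex=50.0, En=5.0, He=0.5, n=2000)
print(round(float(x.mean())))  # droplets centred on Ex
```

    The hyper-entropy He thickens the cloud, encoding fuzziness about the fuzziness itself; averaging droplet certainties against each status concept yields the membership degrees the assessment is based on.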

  4. A cloud model-based approach for water quality assessment.

    PubMed

    Wang, Dong; Liu, Dengfeng; Ding, Hao; Singh, Vijay P; Wang, Yuankun; Zeng, Xiankui; Wu, Jichun; Wang, Lachun

    2016-07-01

    Water quality assessment entails essentially a multi-criteria decision-making process accounting for qualitative and quantitative uncertainties and their transformation. Considering uncertainties of randomness and fuzziness in water quality evaluation, a cloud model-based assessment approach is proposed. The cognitive cloud model, derived from information science, can realize the transformation between qualitative concept and quantitative data, based on probability and statistics and fuzzy set theory. When applying the cloud model to practical assessment, three technical issues are considered before the development of a complete cloud model-based approach: (1) bilateral boundary formula with nonlinear boundary regression for parameter estimation, (2) hybrid entropy-analytic hierarchy process technique for calculation of weights, and (3) mean of repeated simulations for determining the degree of final certainty. The cloud model-based approach is tested by evaluating the eutrophication status of 12 typical lakes and reservoirs in China and comparing with other four methods, which are Scoring Index method, Variable Fuzzy Sets method, Hybrid Fuzzy and Optimal model, and Neural Networks method. The proposed approach yields information concerning membership for each water quality status which leads to the final status. The approach is found to be representative of other alternative methods and accurate.

  5. Integrating transcriptomics into triad-based soil-quality assessment.

    PubMed

    Chen, Guangquan; de Boer, Tjalf E; Wagelmans, Marlea; van Gestel, Cornelis A M; van Straalen, Nico M; Roelofs, Dick

    2014-04-01

    The present study examined how transcriptomics tools can be included in a triad-based soil-quality assessment to assess the toxicity of soils from riverbanks polluted by metals. To that end, the authors measured chemical soil properties and used the International Organization for Standardization guideline for ecotoxicological tests and a newly developed microarray for gene expression in the indicator soil arthropod Folsomia candida. Microarray analysis revealed that the oxidative stress response pathway was significantly affected in all soils except one. The data indicate that changes in cell redox homeostasis are a significant signature of metal stress. Finally, 32 genes showed significant dose-dependent expression with metal concentrations. They are promising genetic markers providing an early indication of the need for higher-tier testing of soil quality. During the bioassay, the toxicity of the least polluted soils could be removed by sterilization. The gene expression profile for this soil did not show a metal-related signature, confirming that a factor other than metals (most likely of biological origin) caused the toxicity. The present study demonstrates the feasibility and advantages of integrating transcriptomics into triad-based soil-quality assessment. Combining molecular and organismal life-history trait stress responses helps to identify causes of adverse effects in bioassays. Further validation is needed for verifying the set of genes with dose-dependent expression patterns linked with toxic stress. PMID:24382659

  6. National water-quality assessment program : the Albemarle- Pamlico drainage

    USGS Publications Warehouse

    Lloyd, O.B.; Barnes, C.R.; Woodside, M.D.

    1991-01-01

    In 1991, the U.S. Geological Survey (USGS) began to implement a full-scale National Water-Quality Assessment (NAWQA) program. Long-term goals of the NAWQA program are to describe the status and trends in the quality of a large, representative part of the Nation's surface- and ground-water resources and to provide a sound, scientific understanding of the primary natural and human factors affecting the quality of these resources. In meeting these goals, the program will produce a wealth of water-quality information that will be useful to policy makers and managers at the national, State, and local levels. Study-unit investigations constitute a major component of the NAWQA program, forming the principal building blocks on which national-level assessment activities are based. The 60 study units that make up the program are hydrologic systems that include parts of most major river basins and aquifer systems. These study units cover areas of 1,200 to more than 65,000 square miles and incorporate about 60 to 70 percent of the Nation's water use and population served by public water supply. In 1991, the Albemarle-Pamlico drainage was among the first 20 NAWQA study units selected for study under the full-scale implementation plan. The Albemarle-Pamlico drainage study will examine the physical, chemical, and biological aspects of water-quality issues in a coordinated investigation of surface water and ground water in the Albemarle-Pamlico drainage basin. The quantity and quality of discharge from the Albemarle-Pamlico drainage basin contribute to some water-quality problems in the biologically sensitive waters of Albemarle and Pamlico Sounds. A retrospective analysis of existing water-quality data will precede a 3-year period of intensive data-collection and analysis activities. 
The data resulting from this study and the improved understanding of important processes and issues in the upstream part of the study unit will enhance understanding of the quality of

  7. Assessing water quality trends in catchments with contrasting hydrological regimes

    NASA Astrophysics Data System (ADS)

    Sherriff, Sophie C.; Shore, Mairead; Mellander, Per-Erik

    2016-04-01

    Environmental resources are under increasing pressure to simultaneously achieve social, economic and ecological aims. Increasing demand for food production, for example, has expanded and intensified agricultural systems globally. In turn, greater diffuse pollutant delivery (suspended sediment (SS) and phosphorus (P)) from land to water, due to higher stocking densities, fertilisation rates and soil erodibility, has contributed to the deterioration of the chemical and ecological quality of aquatic ecosystems. Development of sustainable and resilient management strategies for agro-ecosystems must detect and consider the impact of land use disturbance on water quality over time. However, assessment of multiple monitoring sites over a region is challenged by hydro-climatic fluctuations and the propagation of events through catchments with contrasting hydrological regimes. Simple water quality metrics, for example flow-weighted pollutant exports, have the potential to normalise the impact of catchment hydrology and better identify water quality fluctuations due to land use and short-term climate fluctuations. This paper assesses the utility of flow-weighted water quality metrics to evaluate periods and causes of critical pollutant transfer. Sub-hourly water quality (SS and P) and discharge data were collected from hydrometric monitoring stations at the outlets of five small (~10 km2) agricultural catchments in Ireland. Catchments possess contrasting land uses (predominantly grassland or arable) and soil drainage (poorly, moderately or well drained) characteristics. Flow-weighted water quality metrics were calculated and evaluated according to fluctuations in source pressure and rainfall. Flow-weighted water quality metrics successfully identified fluctuations in pollutant export which could be attributed to land use changes through the agricultural calendar, i.e., groundcover fluctuations. In particular, catchments with predominantly poor or moderate soil drainage
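
    A flow-weighted mean concentration of the kind used above is simply total exported load divided by total exported flow volume. A minimal sketch (units, function name, and sample values are illustrative):

```python
import numpy as np

def flow_weighted_mean(conc, discharge, dt):
    """Flow-weighted mean concentration: total load / total flow volume.
    conc in mg/L, discharge in L/s, dt (sampling interval) in s."""
    conc, q, dt = (np.asarray(a, float) for a in (conc, discharge, dt))
    load = np.sum(conc * q * dt)     # mg exported over the record
    volume = np.sum(q * dt)          # L of water exported
    return load / volume             # mg/L

# Two samples: low flow at 1 mg/L, storm flow at 5 mg/L.
c = [1.0, 5.0]
q = [10.0, 90.0]
dt = [3600.0, 3600.0]
print(flow_weighted_mean(c, q, dt))  # → 4.6 (weighted toward storm flow)
```

    Because the metric weights concentrations by the water actually exported, it damps the influence of catchment hydrology and makes sites with contrasting flow regimes more directly comparable.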

  8. Automated quality assessment in three-dimensional breast ultrasound images.

    PubMed

    Schwaab, Julia; Diez, Yago; Oliver, Arnau; Martí, Robert; van Zelst, Jan; Gubern-Mérida, Albert; Mourri, Ahmed Bensouda; Gregori, Johannes; Günther, Matthias

    2016-04-01

    Automated three-dimensional breast ultrasound (ABUS) is a valuable adjunct to x-ray mammography for breast cancer screening of women with dense breasts. High image quality is essential for proper diagnostics and computer-aided detection. We propose an automated image quality assessment system for ABUS images that detects artifacts at the time of acquisition. Therefore, we study three aspects that can corrupt ABUS images: the nipple position relative to the rest of the breast, the shadow caused by the nipple, and the shape of the breast contour on the image. Image processing and machine learning algorithms are combined to detect these artifacts based on 368 clinical ABUS images that have been rated manually by two experienced clinicians. At a specificity of 0.99, 55% of the images that were rated as low quality are detected by the proposed algorithms. The areas under the ROC curves of the single classifiers are 0.99 for the nipple position, 0.84 for the nipple shadow, and 0.89 for the breast contour shape. The proposed algorithms work fast and reliably, which makes them adequate for online evaluation of image quality during acquisition. The presented concept may be extended to further image modalities and quality aspects. PMID:27158633
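
    The areas under the ROC curves reported above can be computed directly from classifier scores via the rank-sum identity: the AUC equals the probability that a randomly chosen positive (low-quality) image outscores a randomly chosen negative one, with ties counting half. A minimal sketch (the scores are made up):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney rank-sum statistic: fraction of
    positive/negative pairs where the positive scores higher
    (ties count 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Low-quality images should score higher on an artifact classifier.
low_q = [0.9, 0.8, 0.7, 0.65]
high_q = [0.3, 0.4, 0.2, 0.65, 0.1]
print(roc_auc(low_q, high_q))  # → 0.975
```

    This pairwise formulation makes clear why AUC is threshold-free: it measures ranking quality, not the choice of operating point (the 0.99-specificity point quoted above is one such choice).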

  9. The Groundwater Performance Assessment Project Quality Assurance Plan

    SciTech Connect

    Luttrell, Stuart P.

    2006-05-11

    U.S. Department of Energy (DOE) has monitored groundwater on the Hanford Site since the 1940s to help determine what chemical and radiological contaminants have made their way into the groundwater. As regulatory requirements for monitoring increased in the 1980s, there began to be some overlap between various programs. DOE established the Groundwater Performance Assessment Project (groundwater project) in 1996 to ensure protection of the public and the environment while improving the efficiency of monitoring activities. The groundwater project is designed to support all groundwater monitoring needs at the site, eliminate redundant sampling and analysis, and establish a cost-effective hierarchy for groundwater monitoring activities. This document provides the quality assurance guidelines that will be followed by the groundwater project. This QA Plan is based on the QA requirements of DOE Order 414.1C, Quality Assurance, and 10 CFR 830, Subpart A--General Provisions/Quality Assurance Requirements as delineated in Pacific Northwest National Laboratory’s Standards-Based Management System. In addition, the groundwater project is subject to the Environmental Protection Agency (EPA) Requirements for Quality Assurance Project Plans (EPA/240/B-01/003, QA/R-5). The groundwater project has determined that the Hanford Analytical Services Quality Assurance Requirements Documents (HASQARD, DOE/RL-96-68) apply to portions of this project and to the subcontractors. HASQARD requirements are discussed within applicable sections of this plan.

  10. Quality assessment of adaptive 3D video streaming

    NASA Astrophysics Data System (ADS)

    Tavakoli, Samira; Gutiérrez, Jesús; García, Narciso

    2013-03-01

    The streaming of 3D video content is now a reality that expands the user experience. However, because of the variable bandwidth of the networks used to deliver multimedia content, a smooth, high-quality playback experience cannot always be guaranteed. By using segments in multiple video qualities, HTTP adaptive streaming (HAS) of video content is a relevant advance over classic progressive download streaming: it helps resolve these issues and offers significant advantages in terms of both user-perceived Quality of Experience (QoE) and resource utilization for content and network service providers. In this paper we discuss the impact on the end-user of possible HAS client behaviors while adapting to the network capacity. This was done through an experiment testing the end-user response to quality variation during the adaptation procedure. The evaluation was carried out through a subjective test of the end-user response to various possible client behaviors for increasing, decreasing, and oscillating quality in 3D video. In addition, some typical HAS impairments during adaptation were simulated and their effects on end-user perception assessed. The experimental conclusions provide good insight into the user's response to different adaptation scenarios and to the visual impairments causing visual discomfort, and can be used to develop adaptive streaming algorithms that improve the end-user experience.

  11. Quality Assessment of Collection 6 MODIS Atmospheric Science Products

    NASA Astrophysics Data System (ADS)

    Manoharan, V. S.; Ridgway, B.; Platnick, S. E.; Devadiga, S.; Mauoka, E.

    2015-12-01

    Since the launch of the NASA Terra and Aqua satellites in December 1999 and May 2002, respectively, atmosphere and land data acquired by the MODIS (Moderate Resolution Imaging Spectroradiometer) sensor on-board these satellites have been reprocessed five times at the MODAPS (MODIS Adaptive Processing System) located at NASA GSFC. The global land and atmosphere products use science algorithms developed by the NASA MODIS science team investigators. MODAPS completed Collection 6 reprocessing of MODIS Atmosphere science data products in April 2015 and is currently generating the Collection 6 products using the latest version of the science algorithms. This reprocessing has generated one of the longest time series of consistent data records for understanding cloud, aerosol, and other constituents in the earth's atmosphere. It is important to carefully evaluate and assess the quality of this data and remove any artifacts to maintain a useful climate data record. Quality Assessment (QA) is an integral part of the processing chain at MODAPS. This presentation will describe the QA approaches and tools adopted by the MODIS Land/Atmosphere Operational Product Evaluation (LDOPE) team to assess the quality of MODIS operational Atmospheric products produced at MODAPS. Some of the tools include global high resolution images, time series analysis and statistical QA metrics. The new high resolution global browse images with pan and zoom have provided the ability to perform QA of products in real time through synoptic QA on the web. This global browse generation has been useful in identifying production error, data loss, and data quality issues from calibration error, geolocation error and algorithm performance. A time series analysis for various science datasets in the Level-3 monthly product was recently developed for assessing any long term drifts in the data arising from instrument errors or other artifacts. This presentation will describe and discuss some test cases from the

  12. Soil bioassays as tools for sludge compost quality assessment

    SciTech Connect

    Domene, Xavier; Sola, Laura; Ramirez, Wilson; Alcaniz, Josep M.; Andres, Pilar

    2011-03-15

    Composting is a waste management technology that is becoming more widespread as a response to the increasing production of sewage sludge and the pressure for its reuse in soil. In this study, different bioassays (plant germination; earthworm survival, biomass and reproduction; and collembolan survival and reproduction) were assessed for their usefulness in compost quality assessment. Compost samples from two different composting plants were taken along the composting process, characterized, and submitted to the bioassays. Our results indicate that the noxious effects of some compost samples observed in the bioassays are related to the low organic matter stability of the composts and the enhanced release of decomposition endproducts; earthworms were the exception, being favored by these materials. Inhibition of plant germination and collembolan reproduction was generally associated with uncomposted sludge, while earthworm total biomass and reproduction were enhanced by these materials. On the other hand, earthworm and collembolan survival were unaffected by the degree of composting of the wastes. This pattern was clear in one of the composting procedures assessed, but less so in the other, where the release of decomposition endproducts was lower due to its higher stability, indicating the sensitivity and usefulness of bioassays for the quality assessment of composts.

  13. Assess water scarcity integrating water quantity and quality

    NASA Astrophysics Data System (ADS)

    Liu, J.; Zeng, Z.

    2014-12-01

    Water scarcity has become widespread all over the world. Current methods for water scarcity assessment are based mainly on water quantity and seldom consider water quality. Here, we develop an approach for assessing water scarcity that considers both water quantity and quality. In this approach, a new water scarcity index describes the severity of water scarcity in the form of a water scarcity meter, which may help communicate water scarcity to a wider audience. To illustrate the approach, we analyzed the historical trend of water scarcity for the city of Beijing, China, during 1995-2009, as well as the assessment for different river basins in China. The results show that Beijing made substantial progress in mitigating water scarcity: from 1999 to 2009 the blue and grey water scarcity indices decreased by 59% and 62%, respectively. Despite this progress, we demonstrate that Beijing still suffers serious water scarcity in terms of both water quantity and quality. The water scarcity index remained at a high value of 3.5 in 2009, with blue and grey components of 1.2 and 2.3 (exceeding the thresholds of 0.4 and 1, respectively). As a result of unsustainable water use and pollution, groundwater levels continue to decline, and water quality shows a continuously deteriorating trend. To curb this trend, future water policies should further decrease water withdrawal from local sources (in particular groundwater) within Beijing, and should limit the grey water footprint to below the total amount of water resources.
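    A minimal sketch of how such an index might combine the two components, based on our reading of the figures in the abstract (the additive combination, thresholds, and function names are assumptions for illustration):

    ```python
    def water_scarcity_index(blue_wsi, grey_wsi):
        """Combined water scarcity index as the sum of the blue (quantity)
        and grey (quality/dilution) components."""
        return blue_wsi + grey_wsi

    def is_water_scarce(blue_wsi, grey_wsi, blue_threshold=0.4, grey_threshold=1.0):
        """Scarce if either component exceeds its reported threshold."""
        return blue_wsi > blue_threshold or grey_wsi > grey_threshold

    # Beijing, 2009: blue 1.2 and grey 2.3 combine to 3.5, exceeding both thresholds.
    print(round(water_scarcity_index(1.2, 2.3), 1))  # 3.5
    print(is_water_scarce(1.2, 2.3))                 # True
    ```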

  14. A review of image quality assessment methods with application to computational photography

    NASA Astrophysics Data System (ADS)

    Maître, Henri

    2015-12-01

    Image quality assessment has been of major importance for several domains of the image industry, for instance restoration or communication and coding. New application fields are opening today with the increase of embedded processing power in cameras and the emergence of computational photography: automatic tuning, image selection, image fusion, image database building, etc. We review the literature on image quality evaluation. We pay attention to the very different underlying hypotheses and results of the existing methods for approaching the problem. We explain why they differ and for which applications they may be beneficial. We also underline their limits, especially for a possible use in the novel domain of computational photography. Having been developed to address different objectives, they propose answers on different aspects, which makes them sometimes complementary. However, they all remain limited in their capability to challenge the human expert, the stated or unstated ultimate goal. We first consider the methods based on retrieving the parameters of a signal, mostly in spectral analysis; we then explore the more global methods for qualifying image quality in terms of noticeable defects or degradation, as popular in the compression domain; in a third field, the image acquisition process is considered as a channel between the source and the receiver, allowing the tools of information theory to be used and the system to be qualified in terms of entropy and information capacity. However, these different approaches hardly attack the most difficult part of the task, which is to measure the quality of a photograph in terms of aesthetic properties. To help address this problem, at the crossroads of philosophy, biology and psychology, we offer a brief review of the literature on the problem of qualifying beauty, present attempts to adapt these concepts to visual patterns, and initiate a reflection on what could be done in the field of photography.

  15. Quality of evidence must guide risk assessment of asbestos.

    PubMed

    Lenters, Virissa; Burdorf, Alex; Vermeulen, Roel; Stayner, Leslie; Heederik, Dick

    2012-10-01

    In 2011, we reported on the sensitivity of lung cancer potency estimates for asbestos to the quality of the exposure assessment component of the underlying evidence. Both this meta-analysis and a separate reassessment of standards published by the Health Council of the Netherlands (Gezondheidsraad) have been commented on by Berman and Case. One criticism is that we used a truncated data set. We incrementally excluded poorer-quality studies to evaluate trends in meta-analyzed lung cancer potency estimates (meta-K(L) values). This was one of three analysis approaches we presented. The other two used the full set of studies: a meta-analysis stratified by covariates and dichotomized by poorer and better exposure assessment aspects; and a meta-regression modeling both asbestos fiber type and these covariates. Berman and Case also state that our results are not robust to the removal of one study. We disagree with this claim and present additional sensitivity analyses underpinning our earlier conclusion that inclusion of studies with higher-quality asbestos-exposure assessment yields higher meta-estimates of the lung cancer risk per unit of exposure. We reiterate that potency differences for predominantly chrysotile- versus amphibole-asbestos-exposed cohorts are difficult to ascertain when meta-analyses are restricted to studies with fewer exposure assessment limitations. We strongly argue that uncertainty related to potency issues should not hamper the development of appropriate evidence-based guidelines and stringent policies to protect the public from hazardous environmental and occupational exposures.

  16. Meat quality assessment by electronic nose (machine olfaction technology).

    PubMed

    Ghasemi-Varnamkhasti, Mahdi; Mohtasebi, Seyed Saeid; Siadat, Maryam; Balasubramanian, Sundar

    2009-01-01

    Over the last twenty years, newly developed chemical sensor systems (so-called "electronic noses") have made odor analyses possible. These systems involve various types of electronic chemical gas sensors with partial specificity, as well as suitable statistical methods enabling the recognition of complex odors. As commercial instruments have become available, a substantial increase in research into the application of electronic noses to the evaluation of volatile compounds in food, cosmetics and other items of everyday life has been observed. At present, the commercial gas sensor technologies comprise metal oxide semiconductors, metal oxide semiconductor field effect transistors, organic conducting polymers, and piezoelectric crystal sensors. Further sensors based on fibreoptic, electrochemical and bi-metal principles are still at the developmental stage. Statistical analysis techniques range from simple graphical evaluation to multivariate analyses such as artificial neural networks and radial basis functions. The introduction of electronic noses into the food area is envisaged for quality control, process monitoring, freshness evaluation, shelf-life investigation and authenticity assessment. Considerable work has already been carried out on meat, grains, coffee, mushrooms, cheese, sugar, fish, beer and other beverages, as well as on the odor quality evaluation of food packaging material. This paper describes the applications of these systems to meat quality assessment, where fast detection methods are essential for appropriate product management. The results suggest the possibility of using this new technology in meat handling.

  17. Subjective Quality Assessment of Underwater Video for Scientific Applications.

    PubMed

    Moreno-Roldán, José-Miguel; Luque-Nieto, Miguel-Ángel; Poncela, Javier; Díaz-del-Río, Víctor; Otero, Pablo

    2015-01-01

    Underwater video services could be a key application in improving scientific knowledge of the vast oceanic resources on our planet. However, limitations in the capacity of currently available technology for underwater networks (UWSNs) raise the question of the feasibility of these services. When transmitting video, the main constraints are the limited bandwidth and the high propagation delays. At the same time, the service performance depends on the needs of the target group. This paper considers the problem of estimating the Mean Opinion Score (a standard quality measure) in UWSNs based on objective methods, and addresses quality assessment of potential underwater video services from a subjective point of view. The experimental design and results of a test planned according to standardized psychometric methods are presented. The subjects used in the quality assessment test were ocean scientists. Video sequences were recorded in actual exploration expeditions and were processed to simulate conditions similar to those that might be found in UWSNs. Our experimental results show that the videos are considered useful for scientific purposes even at very low bitrates. PMID:26694400

  20. Terrestrial Method for Airborne Lidar Quality Control and Assessment

    NASA Astrophysics Data System (ADS)

    Alsubaie, N. M.; Badawy, H. M.; Elhabiby, M. M.; El-Sheimy, N.

    2014-11-01

    Most LiDAR systems do not provide the end user with calibration and acquisition procedures that can be used to validate the quality of the data acquired by the airborne system. Such systems therefore need data Quality Control (QC) and assessment procedures to verify the accuracy of the laser footprints, particularly at building edges. This research paper introduces an efficient method for validating the quality of airborne LiDAR point cloud data using terrestrial laser scanning data integrated with edge detection techniques. The method is based on detecting building edges with these two independent systems. Building edges are extracted from the airborne data using an algorithm that flags points whose neighbours' heights, within a given search radius of the centre point, have a standard deviation exceeding a set threshold. The algorithm is adaptive to different point densities. The approach is combined with another innovative edge detection technique for terrestrial laser scanning point clouds, based on height and point density constraints. Finally, statistical analysis and assessment are applied to compare the two systems in terms of edge extraction precision, as a prior step toward 3D city modelling generated from heterogeneous LiDAR systems.
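    The airborne edge-flagging step can be sketched roughly as follows; the paper's actual neighbourhood logic and thresholds are more involved (and density-adaptive), so treat this as an illustrative approximation with invented parameter values:

    ```python
    import math
    import statistics

    def edge_points(points, radius=1.0, std_threshold=0.5):
        """Flag candidate building-edge points in an (x, y, z) point cloud:
        a point is a candidate when the heights of its neighbours within
        `radius` (2D distance) vary strongly, i.e. their standard deviation
        exceeds a threshold. Naive O(n^2) search; a k-d tree would be used
        in practice."""
        edges = []
        for i, (x, y, z) in enumerate(points):
            neigh = [pz for (px, py, pz) in points
                     if math.hypot(px - x, py - y) <= radius]
            if len(neigh) >= 2 and statistics.stdev(neigh) > std_threshold:
                edges.append(i)
        return edges

    # A flat roof patch next to a ground patch: only the points that straddle
    # the height jump are flagged.
    pts = [(0, 0, 10), (0.5, 0, 10), (1.0, 0, 10), (1.5, 0, 0), (2.0, 0, 0)]
    print(edge_points(pts, radius=0.6, std_threshold=1.0))  # [2, 3]
    ```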

  1. An assessment of drinking-water quality post-Haiyan

    PubMed Central

    Anarna, Maria Sonabel; Fernando, Arturo

    2015-01-01

    Introduction Access to safe drinking-water is one of the most important public health concerns in an emergency setting. This descriptive study reports on an assessment of water quality in drinking-water supply systems in areas affected by Typhoon Haiyan immediately following and 10 months after the typhoon. Methods Water quality testing and risk assessments of the drinking-water systems were conducted three weeks and 10 months post-Haiyan. Portable test kits were used to determine the presence of Escherichia coli and the level of residual chlorine in water samples. The level of risk was fed back to the water operators for their action. Results Of the 121 water samples collected three weeks post-Haiyan, 44% were contaminated, while 65% (244/373) of samples were found positive for E. coli 10 months post-Haiyan. For the three components of drinking-water systems – source, storage and distribution – the proportions of contaminated systems were 70%, 67% and 57%, respectively, 10 months after Haiyan. Discussion Vulnerability to faecal contamination was attributed to weak water safety programmes in the drinking-water supply systems. Poor water quality can be prevented or reduced by developing and implementing a water safety plan for the systems. This, in turn, will help prevent waterborne disease outbreaks caused by contaminated water post-disaster. PMID:26767136

  3. Assessing Impact of Weight on Quality of Life.

    PubMed

    Kolotkin, R L; Head, S; Hamilton, M; Tse, C K

    1995-01-01

    This paper is a preliminary report on the development of a new instrument, the Impact of Weight on Quality of Life (IWQOL) questionnaire, which assesses the effects of weight on various areas of life. We conducted two studies utilizing subjects in treatment for obesity at the Duke University Diet and Fitness Center. The first study describes item development, assesses reliability, and compares pre- and post-treatment scores on the IWQOL. In the second study we examined the effects of body mass index (BMI), gender, and age on subjects' perceptions of the impact of weight on quality of life. Results indicate adequate psychometric properties, with test-retest reliabilities averaging .75 for single items and .89 for scales. Scale internal consistency averaged .87. Post-treatment scores differed significantly from pre-treatment scores on all scales, indicating that treatment produced positive changes in the impact of weight on quality of life. The results of the second study indicate that the impact of weight generally worsened as patients' size increased. However, for women there was no association between BMI and the impact of weight on Self-Esteem and Sexual Life; even at the lowest BMI tertile studied, women reported that weight had a substantial impact in these areas. There were also significant gender differences, with women showing greater impact of weight on Self-Esteem and Sexual Life than men. The impact of age was somewhat surprising, with some areas showing positive changes and others showing no change.

  4. Automatic analysis of medial temporal lobe atrophy from structural MRIs for the early assessment of Alzheimer disease

    PubMed Central

    Calvini, Piero; Chincarini, Andrea; Gemme, Gianluca; Penco, Maria Antonietta; Squarcia, Sandro; Nobili, Flavio; Rodriguez, Guido; Bellotti, Roberto; Catanzariti, Ezio; Cerello, Piergiorgio; De Mitri, Ivan; Fantacci, Maria Evelina

    2009-01-01

    The purpose of this study is to develop software for the extraction of the hippocampus and surrounding medial temporal lobe (MTL) regions from T1-weighted magnetic resonance (MR) images with no interactive input from the user; to introduce a novel statistical indicator, computed on the intensities in the automatically extracted MTL regions, which measures atrophy; and to evaluate the accuracy of the newly developed intensity-based measure of MTL atrophy to (a) distinguish between patients with Alzheimer disease (AD), patients with amnestic mild cognitive impairment (aMCI), and elderly controls, using established criteria for patients with AD and aMCI as the reference standard, and (b) infer the clinical outcome of aMCI patients. For the development of the software, the study included 61 patients with mild AD (17 men, 44 women; mean age±standard deviation (SD), 75.8 years±7.8; Mini Mental State Examination (MMSE) score, 24.1±3.1), 42 patients with aMCI (11 men, 31 women; mean age±SD, 75.2 years±4.9; MMSE score, 27.9±1.9), and 30 elderly healthy controls (10 men, 20 women; mean age±SD, 74.7 years±5.2; MMSE score, 29.1±0.8). For the evaluation of the statistical indicator, the study included 150 patients with mild AD (62 men, 88 women; mean age±SD, 76.3 years±5.8; MMSE score, 23.2±4.1), 247 patients with aMCI (143 men, 104 women; mean age±SD, 75.3 years±6.7; MMSE score, 27.0±1.8), and 135 elderly healthy controls (61 men, 74 women; mean age±SD, 76.4 years±6.1). Fifty aMCI patients were evaluated every 6 months over a 3-year period to assess conversion to AD. For each participant, two subimages of the MTL regions were automatically extracted from T1-weighted MR images with high spatial resolution. An intensity-based MTL atrophy measure was found to separate the control, MCI, and AD cohorts. Group differences were assessed by using a two-sample t test. Individual classification was analyzed by using receiver operating characteristic (ROC) curves. Compared to controls
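    The ROC analysis mentioned above can be reproduced in miniature with the rank (Mann-Whitney) formulation of the AUC; the scores below are invented for illustration and are not the study's data:

    ```python
    def roc_auc(scores_pos, scores_neg):
        """Area under the ROC curve via the rank formulation: the
        probability that a randomly chosen positive case (e.g. AD patient)
        scores higher than a randomly chosen negative case (e.g. control),
        counting ties as half."""
        wins = 0.0
        for p in scores_pos:
            for n in scores_neg:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(scores_pos) * len(scores_neg))

    # Hypothetical atrophy-measure values for patients vs. controls:
    print(roc_auc([0.9, 0.8, 0.7], [0.6, 0.75, 0.3]))  # 8/9 ≈ 0.89
    ```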

  5. Surface-water quality assessment using hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Blanco, Alfonso; Roper, William E.; Gomez, Richard B.

    2003-08-01

    Hyperspectral imaging has recently been used to obtain several water quality parameters in inland and ocean water bodies. Optical and thermal sensing have shown that the spatial and temporal information needed to track and understand trends in these water quality parameters can support the development of better management practices for improving the water quality of water bodies. This paper reviews the water quality parameters chlorophyll (Chl), dissolved organic carbon (DOC), and total suspended solids (TSS) obtained for the Sakonnet River in Narragansett Bay, Rhode Island, using the AVIRIS sensor. The AVIRIS sensor should improve the assessment and the identification of the locations and pollutant concentrations of point and non-point sources. It can provide the monitoring data necessary to follow clean-up efforts and to site the water and wastewater infrastructure needed to eliminate these point and non-point sources. This hyperspectral application would enhance the evaluation of both point and non-point sources, improve upon and partially replace expensive, labor-intensive field sampling, and allow economical sampling and mapping of large geographical areas.

  6. Subjective quality assessment of numerically reconstructed compressed holograms

    NASA Astrophysics Data System (ADS)

    Ahar, Ayyoub; Blinder, David; Bruylants, Tim; Schretter, Colas; Munteanu, Adrian; Schelkens, Peter

    2015-09-01

    Recently, several papers have reported efficient techniques to compress digital holograms. Typically, the rate-distortion performance of these solutions was evaluated by means of objective metrics such as Peak Signal-to-Noise Ratio (PSNR) or the Structural Similarity Index Measure (SSIM), applied either to the decoded hologram or to the reconstruction of the compressed hologram. Given the specific nature of holograms, it is relevant to question to what extent these metrics provide information on the effective visual quality of the reconstructed hologram. Since no holographic display technology is available today that would allow for a proper subjective evaluation experiment, we propose in this paper a methodology based on assessing the quality of a reconstructed compressed hologram on a regular 2D display. In parallel, we also evaluate several coding engines, namely JPEG configured with the default perceptual quantization tables and with uniform quantization tables, JPEG 2000, JPEG 2000 extended with arbitrary packet decompositions and direction-adaptive filters, and H.265/HEVC configured in intra-frame mode. The experimental results indicate that the perceived visual quality and the objective measures are well correlated. Moreover, the superiority of the HEVC and extended JPEG 2000 coding engines was confirmed, particularly at lower bitrates.
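    For reference, the PSNR metric used in such rate-distortion evaluations reduces to a few lines. This sketch works on flattened 8-bit pixel sequences, a simplification of how the metric is applied to full images:

    ```python
    import math

    def psnr(ref, test, peak=255.0):
        """Peak Signal-to-Noise Ratio in dB between two equal-length pixel
        sequences; higher means the test signal is closer to the reference."""
        mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
        if mse == 0:
            return float("inf")  # identical signals
        return 10.0 * math.log10(peak ** 2 / mse)

    # Small toy example: MSE = (1 + 0 + 25 + 16) / 4 = 10.5
    print(round(psnr([0, 128, 255, 64], [1, 128, 250, 60]), 2))  # ≈ 37.92 dB
    ```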

  7. Storage stability and quality assessment of processed cereal brans.

    PubMed

    Sharma, Savita; Kaur, Satinder; Dar, B N; Singh, Baljit

    2014-03-01

    Quality improvement of cereal brans, health-promoting ingredients for functional foods, is an emerging research concept owing to their low shelf stability and the presence of non-nutrient components. A study was conducted to evaluate the storage quality of processed milling industry byproducts so that they can be utilized as a dietary fibre source. Different cereal brans (wheat, rice, barley and oat) were processed by dry, wet, microwave heating, extrusion cooking and chemical methods at variable conditions. Processed brans were stored in high density polyethylene (HDPE) pouches at ambient and refrigeration temperatures. Quality assessments (moisture, free fatty acids, water activity and physical quality) of the brans were made monthly for six months. Free fatty acid content, moisture and water activity of the cereal brans remained stable during the entire storage period. Among the treatments, extrusion processing was the most effective for stability. Processing treatments and storage temperature had a positive effect on extending the shelf life of all cereal brans. Therefore, processed cereal brans can be used as a dietary fortificant in the development of value-added food products. PMID:24587536

  8. Assessment of river Po sediment quality by micropollutant analysis.

    PubMed

    Camusso, Marina; Galassi, Silvana; Vignati, Davide

    2002-05-01

    Trace metals, PCB congeners and DDT homologues were determined in composite sediment samples collected from 10 representative sites along the river Po in two separate seasons. The aim was to identify the most anthropogenically impacted areas for future monitoring programmes and to aid the development of Italian sediment quality criteria. The surface samples were collected during low-flow conditions. Trace metal concentrations were assayed by electrothermal (Cd, Co, Cr, Cu, Ni, Pb), flame (Fe, Mn, Zn) or hydride generation (As) atomic absorption spectrometry after microwave-assisted acid digestion. Hg was determined on solid samples by an automated analyser. Organic microcontaminants were determined by gas chromatography with a 63Ni electron capture detector after Soxhlet extraction. Concentrations of trace metals, total PCB and DDT homologues showed two distinct peaks at the sites immediately downstream of Turin and Milan, respectively, and in each case decreased progressively further downstream. Principal component analysis identified three major factors (from a multi-dimensional space of 35 variables) which explained 85-90% of the total observed variance. The first and second factors corresponded to the effects of anthropogenic inputs and geological setting on sediment quality; the third included seasonal processes of minor importance. Sediment quality assessment identified Cd, Cu, Hg, Pb, Zn and organic microcontaminants as posing the most serious threats to river sediment quality. A reference site within the Po basin provided useful background values. Moderate pollution by organochlorine compounds was ascribed both to local sources and to atmospheric deposition. PMID:12153015

  9. Assessment of groundwater quality status in Amini Island of Lakshadweep.

    PubMed

    Prasad, N B Narasimha; Mansoor, O A

    2005-01-01

    Amini Island is one of the 10 inhabited islands in Lakshadweep. Built on ancient volcanic formations, Lakshadweep is the tiniest Union Territory of India. The major problem experienced by the islanders is the acute scarcity of fresh drinking water. Groundwater is the only source of fresh water, and its availability is very restricted owing to peculiar hydrologic, geologic, geomorphic and demographic features. Hence, a proper understanding of groundwater quality, with reference to temporal and spatial variations, is very important to meet the increasing demand and to formulate future plans for groundwater development. In this context, an assessment of groundwater quality status was carried out in Amini Island. All the available information on water quality, present groundwater usage pattern, etc. was collected and analyzed. Total hardness and salinity were found to be the most critical water quality parameters, exceeding the permissible limits of drinking water standards. Spatial variation diagrams of salinity and hardness were prepared for different seasons. It is observed from these maps that salinity and hardness are comparatively better on the lagoon side than on the seaside. The maps also suggest that the salinity and hardness problem is more severe in the southern tip than in the northern portion.

  10. CBCT-based bone quality assessment: are Hounsfield units applicable?

    PubMed Central

    Jacobs, R; Singer, S R; Mupparapu, M

    2015-01-01

    CBCT is a widely applied imaging modality in dentistry. It enables the visualization of high-contrast structures of the oral region (bone, teeth, air cavities) at a high resolution. CBCT is now commonly used for the assessment of bone quality, primarily for pre-operative implant planning. Traditionally, bone quality parameters and classifications were primarily based on bone density, which could be estimated through the use of Hounsfield units derived from multidetector CT (MDCT) data sets. However, there are crucial differences between MDCT and CBCT, which complicates the use of quantitative gray values (GVs) for the latter. From experimental as well as clinical research, it can be seen that great variability of GVs can exist on CBCT images owing to various reasons that are inherently associated with this technique (i.e. the limited field size, relatively high amount of scattered radiation and limitations of currently applied reconstruction algorithms). Although attempts have been made to correct for GV variability, it can be postulated that the quantitative use of GVs in CBCT should be generally avoided at this time. In addition, recent research and clinical findings have shifted the paradigm of bone quality from a density-based analysis to a structural evaluation of the bone. The ever-improving image quality of CBCT allows it to display trabecular bone patterns, indicating that it may be possible to apply structural analysis methods that are commonly used in micro-CT and histology. PMID:25315442

  11. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source, consisting of a TLC plate, was simulated by a layer of silica gel on aluminum (Al) foil substrates, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper)parameter values. MTF values were found to increase up to the 12th iteration and to remain almost constant thereafter. The MTF improved with lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.
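The MTF estimation step can be illustrated with a toy computation: the MTF is commonly obtained as the normalised magnitude of the Fourier transform of a line spread function (LSF) extracted from the reconstructed plane-source image. A stdlib-only sketch, where the sample LSF values are invented for illustration:

```python
import cmath

def mtf_from_lsf(lsf):
    """MTF as the normalised magnitude of the discrete Fourier
    transform of a sampled line spread function."""
    n = len(lsf)
    mags = []
    for k in range(n // 2 + 1):  # keep only non-negative frequencies
        s = sum(lsf[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                for m in range(n))
        mags.append(abs(s))
    return [v / mags[0] for v in mags]  # normalise so MTF(0) = 1

# A narrow, roughly Gaussian LSF; a broader LSF would roll off faster.
lsf = [0, 1, 4, 8, 4, 1, 0, 0]
mtf = mtf_from_lsf(lsf)
```

In practice the LSF would come from a profile across the reconstructed plane-source image, and the frequency axis would be scaled by the pixel size to express the MTF in cycles/mm.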

  12. An assessment of California wildfire effects on air quality

    NASA Astrophysics Data System (ADS)

    Sermondadaz, S. M.; Jin, L.; Brown, N. J.

    2009-12-01

    Wildfires are a seasonal and recurrent problem in California. In addition to the damage caused each year and their heavy societal cost, wildfires may also have non-negligible effects on air quality. Most current studies focus on the impacts of anthropogenic emissions. Improved knowledge of the effects of fires on various pollutant species, such as ozone and its precursors (carbon monoxide, nitrogen oxides and volatile organic compounds), would be useful and relevant for control strategies and environmental policies. To this end, this modelling study uses the Community Multiscale Air Quality Model (CMAQ) to assess the effects of fire emissions as a perturbation, through the comparison of simulations performed with and without fire emissions. Emissions, boundary conditions and meteorological data used in this study are taken from a severe fire episode in summer 2000. We assess the spread of ozone and its precursor pollutants (CO, NOx, VOC) around specifically chosen fire perimeters. The distribution of air pollutants in both the horizontal and vertical dimensions is considered to achieve a better understanding of pollutant formation and transport along the fire plumes. We assess how far fire emissions influence pollutant concentrations at the surface and aloft. The impact of fire emissions depends on the fire size, its location and the associated meteorology. Our study provides information on ozone formation and transport caused by fire events, which may have implications for ozone violations in affected regions.

  13. Objective assessment of image quality VI: imaging in radiation therapy

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Kupinski, Matthew A.; Müeller, Stefan; Halpern, Howard J.; Morris, John C., III; Dwyer, Roisin

    2013-11-01

    Earlier work on objective assessment of image quality (OAIQ) focused largely on estimation or classification tasks in which the desired outcome of imaging is accurate diagnosis. This paper develops a general framework for assessing imaging quality on the basis of therapeutic outcomes rather than diagnostic performance. By analogy to receiver operating characteristic (ROC) curves and their variants as used in diagnostic OAIQ, the method proposed here utilizes the therapy operating characteristic or TOC curves, which are plots of the probability of tumor control versus the probability of normal-tissue complications as the overall dose level of a radiotherapy treatment is varied. The proposed figure of merit is the area under the TOC curve, denoted AUTOC. This paper reviews an earlier exposition of the theory of TOC and AUTOC, which was specific to the assessment of image-segmentation algorithms, and extends it to other applications of imaging in external-beam radiation treatment as well as in treatment with internal radioactive sources. For each application, a methodology for computing the TOC is presented. A key difference between ROC and TOC is that the latter can be defined for a single patient rather than a population of patients.
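Once TOC points have been computed, the AUTOC figure of merit described above can be approximated numerically, for example by the trapezoidal rule. A sketch with made-up TOC samples (the probabilities below are illustrative, not from the paper):

```python
def auc_trapezoid(xs, ys):
    """Area under a sampled curve by the trapezoidal rule. For a TOC
    curve, xs = P(normal-tissue complication), ys = P(tumor control),
    with each point generated by one overall dose level."""
    pairs = sorted(zip(xs, ys))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# Hypothetical TOC points swept out as the overall dose level rises.
p_complication = [0.0, 0.05, 0.15, 0.40, 1.0]
p_control      = [0.0, 0.45, 0.75, 0.92, 1.0]
autoc = auc_trapezoid(p_complication, p_control)
```

As with the ROC/AUC analogy in the text, an AUTOC near 1 indicates that high tumor control can be reached before normal-tissue complications become likely.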

  14. Electronic Quality of Life Assessment Using Computer-Adaptive Testing

    PubMed Central

    2016-01-01

    Background Quality of life (QoL) questionnaires are desirable for clinical practice but can be time-consuming to administer and interpret, making their widespread adoption difficult. Objective Our aim was to assess the performance of the World Health Organization Quality of Life (WHOQOL)-100 questionnaire as four item banks to facilitate adaptive testing using simulated computer adaptive tests (CATs) for physical, psychological, social, and environmental QoL. Methods We used data from the UK WHOQOL-100 questionnaire (N=320) to calibrate item banks using item response theory, which included psychometric assessments of differential item functioning, local dependency, unidimensionality, and reliability. We simulated CATs to assess the number of items administered before prespecified levels of reliability were met. Results The item banks (40 items) all displayed good model fit (P>.01) and were unidimensional (fewer than 5% of t tests significant), reliable (Person Separation Index>.70), and free from differential item functioning (no significant analysis of variance interaction) or local dependency (residual correlations < +.20). When matched for reliability, the item banks were between 45% and 75% shorter than paper-based WHOQOL measures. Across the four domains, a high standard of reliability (alpha>.90) could be gained with a median of 9 items. Conclusions Using CAT, simulated assessments were as reliable as paper-based forms of the WHOQOL with a fraction of the number of items. These properties suggest that these item banks are suitable for computerized adaptive assessment. These item banks have the potential for international development using existing alternative language versions of the WHOQOL items. PMID:27694100

  15. Quality of bone healing: perspectives and assessment techniques.

    PubMed

    Guda, Teja; Labella, Carl; Chan, Rodney; Hale, Robert

    2014-05-01

    Bone regeneration and healing is an area of extensive research providing an ever-expanding set of therapeutic solutions for surgeons as well as diagnostic tools. Multiple factors such as an ideal graft, the appropriate biochemical and mechanical wound environment, and viable cell populations are essential components in promoting healing. While bony tissue performs many functions, mechanical strength is critical, followed closely by structure. Many tools are available to evaluate bone quality in terms of quantity, structure, and strength; the purpose of this article is to identify the factors that can be evaluated and the advantages and disadvantages of each in assessing the quality of bone healing in both preclinical research and clinical settings.

  16. Water quality assessment at Omerli Dam using remote sensing techniques.

    PubMed

    Alparslan, Erhan; Aydöner, Cihangir; Tufekci, Vildan; Tüfekci, Hüseyin

    2007-12-01

    Water quality at Omerli Dam, a vital potable water resource for the city of Istanbul, Turkey, was assessed using the first four bands of Landsat 7-ETM satellite data acquired in May 2001, together with water quality parameters (chlorophyll-a, suspended solid matter, secchi disk depth and total phosphate) measured at several stations at Omerli Dam during satellite image acquisition and archived at the Marine Pollution and Ecotoxicology laboratory of the Marmara Research Center, where this study was carried out. By establishing a relationship between these measurements and the pixel reflectance values in the satellite image, chlorophyll-a, suspended solid matter, secchi disk and total phosphate maps were produced for the Omerli Dam.

  17. Metrics for assessing the quality of value sets in clinical quality measures

    PubMed Central

    Winnenburg, Rainer; Bodenreider, Olivier

    2013-01-01

    Objective: To assess the quality of value sets in clinical quality measures, both individually and as a population of value sets. Materials and methods: The concepts from a given value set are expected to be rooted by one or few ancestor concepts and the value set is expected to contain all the descendants of its root concepts and only these descendants. (1) We assessed the completeness and correctness of individual value sets by comparison to the extension derived from their roots. (2) We assessed the non-redundancy of value sets for the entire population of value sets (within a given code system) using the Jaccard similarity measure. Results: We demonstrated the utility of our approach on some cases of inconsistent value sets and produced a list of 58 potentially duplicate value sets from the current set of clinical quality measures for the 2014 Meaningful Use criteria. Conclusion: These metrics are easy to compute and provide compact indicators of the completeness, correctness, and non-redundancy of value sets. PMID:24551422
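The non-redundancy check described above reduces to a pairwise Jaccard similarity over the concept sets of the value sets. A minimal sketch, where the value-set names and codes are hypothetical (not drawn from the actual Meaningful Use measures):

```python
def jaccard(a, b):
    """Jaccard similarity between two value sets (sets of concept codes):
    |intersection| / |union|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty sets are treated as identical
    return len(a & b) / len(a | b)

# Hypothetical value sets of diagnosis codes from two quality measures.
diabetes_v1 = {"E10", "E11", "E13"}
diabetes_v2 = {"E10", "E11", "E13", "E14"}
print(jaccard(diabetes_v1, diabetes_v2))  # → 0.75
```

Pairs of value sets within the same code system whose similarity exceeds a chosen threshold (e.g. 0.7) would then be flagged as potential duplicates for manual review.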

  18. A framework for assessing Health Economic Evaluation (HEE) quality appraisal instruments

    PubMed Central

    2012-01-01

    Background Health economic evaluations support the health care decision-making process by providing information on costs and consequences of health interventions. The quality of such studies is assessed by health economic evaluation (HEE) quality appraisal instruments. At present, there is no instrument for measuring and improving the quality of such HEE quality appraisal instruments. Therefore, the objectives of this study are to establish a framework for assessing the quality of HEE quality appraisal instruments to support and improve their quality, and to apply this framework to those HEE quality appraisal instruments which have been subject to more scrutiny than others, in order to test the framework and to demonstrate the shortcomings of existing HEE quality appraisal instruments. Methods To develop the quality assessment framework for HEE quality appraisal instruments, the experiences of using appraisal tools for clinical guidelines are used. Based on a deductive iterative process, clinical guideline appraisal instruments identified through literature search are reviewed, consolidated, and adapted to produce the final quality assessment framework for HEE quality appraisal instruments. Results The final quality assessment framework for HEE quality appraisal instruments consists of 36 items organized within 7 dimensions, each of which captures a specific domain of quality. Applying the quality assessment framework to four existing HEE quality appraisal instruments, it is found that these four quality appraisal instruments are of variable quality. Conclusions The framework described in this study should be regarded as a starting point for appraising the quality of HEE quality appraisal instruments. This framework can be used by HEE quality appraisal instrument producers to support and improve the quality and acceptance of existing and future HEE quality appraisal instruments. By applying this framework, users of HEE quality appraisal instruments can become aware…

  19. Assessment of selected ground-water-quality data in Montana

    SciTech Connect

    Davis, R.E.; Rogers, G.D.

    1984-09-01

    This study was conducted to assess the existing, computer-accessible, ground-water-quality data for Montana. All known sources of ground-water-quality data were reviewed. Although the estimated number of analyses exceeds 25,000, more than three-fourths of the data were not suitable for this study. The only data used were obtained from the National Water Data Storage and Retrieval System (WATSTORE) of the US Geological Survey, because the chemical analyses generally are complete, have an assigned geohydrologic unit or source of water, and are accessible by computer. The data were assessed by geographic region of the State because of climatic and geologic differences. These regions consist of the eastern plains region and the western mountainous region. Within each region, the data were assessed according to geohydrologic unit. The number and areal distribution of data sites for some groupings of units are inadequate to be representative, particularly for groupings stratigraphically below the Upper Cretaceous Fox Hills Sandstone and Hell Creek Formation in the eastern region and for Quaternary alluvium, terrace deposits, glacial deposits, and associated units in the western region. More than one-half the data for the entire State are for the Tertiary Wasatch, Fort Union, and associated units in the eastern region. The results of statistical analyses of data in WATSTORE indicate that the median dissolved-solids concentration for the groupings of geohydrologic units ranges from about 400 to 5000 milligrams per liter in the eastern region and from about 100 to 200 milligrams per liter in the western region. Concentrations of most trace constituents do not exceed the primary drinking-water standards of the US Environmental Protection Agency. The data in WATSTORE for organic constituents presently are inadequate to detect any organic effects of man's activities on ground-water quality. 26 figs., 79 tabs.

  20. External quality assessment for immunohistochemistry: experiences from NordiQC.

    PubMed

    Nielsen, S

    2015-07-01

    Immunohistochemistry (IHC) is applied routinely in surgical and clinical pathology, because it is essential for diagnosis and sub-classification of many neoplastic lesions. Despite its extensive use for more than 40 years, lack of standardization is a major problem; many factors during the pre-analytical, analytical and post-analytical phases affect the final results. Nordic immunohistochemical Quality Control (NordiQC) was established in 2003 to evaluate the inter-laboratory consistency of IHC, focusing mainly on the analytical part. More than 26,000 IHC slides have been evaluated during the period 2003-2013; 15-300 laboratories have participated in each assessment. Overall, 71% of the staining results assessed have been evaluated as sufficient for diagnostic use, while 29% were judged insufficient. All IHC protocols used for the stained slides submitted to NordiQC have been evaluated by focusing on the technical calibration performed by the laboratories, and specific parameters that gave sufficient or insufficient results have been identified. The most common causes for insufficient results were: inadequate calibration of the primary antibody, use of an inadequate primary antibody, inappropriate choice of epitope retrieval method, insufficient heat induced epitope retrieval (HIER) and use of an inadequate detection kit. Approximately 90% of the insufficient results were characterized by either a signal that was too weak or false negative staining, whereas in the remaining 10%, a poor signal-to-noise ratio or false positive staining was seen. Identification of positive and negative tissue controls to ensure appropriate calibration of the IHC assay combined with individually tailored suggestions for protocol optimization have improved IHC staining for many markers and thus inter-laboratory consistency of the IHC results. The overall data generated by NordiQC during 11 years clearly indicates that external quality assessment is a valuable and necessary supplement…

  1. Assessing the quality of topography from stereo-photoclinometry

    NASA Astrophysics Data System (ADS)

    Barnouin, O.; Gaskell, R.; Kahn, E.; Ernst, C.; Daly, M.; Bierhaus, E.; Johnson, C.; Clark, B.; Lauretta, D.

    2014-07-01

    Stereo-photoclinometry (SPC) has been used extensively to determine the shape and topography of various asteroids from image data. This technique will be used as one of two main approaches for determining the shape and topography of the asteroid Bennu, the target of the Origins Spectral Interpretation Resource Identification Security-Regolith Explorer (OSIRIS-REx) mission. The purpose of this study is to evaluate the quality of SPC products derived from the Near-Earth Asteroid Rendezvous (NEAR) mission, whose suite of imaging data resembles that to be collected by OSIRIS-REx. We make use of the NEAR laser range-finder (NLR) to independently assess SPC's accuracy and precision.

  2. Quality Assessment of Vertical Angular Deviations for Photometer Calibration Benches

    NASA Astrophysics Data System (ADS)

    Silva Ribeiro, A.; Costa Santos, A.; Alves Sousa, J.; Forbes, A. B.

    2015-02-01

    Lighting, both natural and electric, constitutes one of the most important aspects of the life of human beings, allowing us to see and perform our daily tasks in outdoor and indoor environments. The safety aspects of lighting are self-evident in areas such as road lighting, urban lighting and also indoor lighting. The use of photometers to measure lighting levels requires traceability obtained in accredited laboratories, which must provide an associated uncertainty. It is therefore relevant to study the impact of known uncertainty sources, such as the vertical angular deviation of photometer calibration benches, in order to define criteria for the quality assessment of these benches.

  3. Quality of life assessment in gastro-oesophageal reflux disease.

    PubMed

    Irvine, E J

    2004-05-01

    Health related quality of life (HRQoL) is determined by both disease and non-disease related factors. Several studies have reported significant HRQoL impairment in GORD patients compared with the general population. Disease severity correlates strongly with HRQoL. Non-disease features, such as the presence of anxiety and comorbid conditions, also negatively impact on HRQoL. Combining a generic and disease specific instrument may avoid missing unexpected outcomes and ensure recognition of all clinically important changes. Full validation of assessment tools is critical. Long term, as well as short term, evaluation is important and is critical when undertaking comparative pharmacoeconomic evaluations.

  4. Assessment of University Gynaecology Clinics Based on Quality Reports.

    PubMed

    Solomayer, E F; Rody, A; Wallwiener, D; Beckmann, M W

    2013-07-01

    Introduction: Quality reporting was initially implemented to offer a better means of assessing hospitals and to provide patients with information to help them when choosing their hospital. Quality reports are published every 2 years and include parameters describing the hospital's structure and general infrastructure together with specific data on individual specialised departments or clinics. Method: This study investigated the 2010 quality reports of German university hospitals published online, focussing on the following data: number of inpatients treated by the hospital, focus of care provided by the unit/department, range of medical services and care provided by the unit/department, non-medical services provided by the unit/department, number of cases treated in the unit/department, ICD diagnoses, OPS procedures, number of outpatient procedures, day surgeries as defined by Section 115b SGB V, presence of an accident insurance consultant and number of staff employed. Results: University gynaecology clinics (UGCs) treat 10 % (range: 6-17 %) of all inpatients of their respective university hospital. There were no important differences in infrastructure between clinics. All UGCs offered full medical care and were specialist clinics for gynaecology (surgery, breast centres, genital cancer, urogynaecology, endoscopy), obstetrics (prenatal diagnostics, high-risk obstetrics); many were also specialist clinics for endocrinology and reproductive medicine. On average, each clinic employs 32 physicians (range: 16-78). Half of them (30-77 %) are specialists. Around 171 (117-289) inpatients are treated on average per physician. The most common ICD coded treatments were deliveries and treatment of infants. Gynaecological diagnoses are underrepresented. Summary: UGCs treat 10 % of all inpatients treated in university hospitals, making them important ports of entry for their respective university hospital. Around half of the physicians are specialists. Quality reports

  5. Use of Landsat data to assess waterfowl habitat quality

    USGS Publications Warehouse

    Colwell, J.E.; Gilmer, D.S.; Work, E.A.; Rebel, D.

    1978-01-01

    This report is a discussion of the feasibility of using Landsat data to generate information of value for effective management of migratory waterfowl. Effective management of waterfowl includes regulating waterfowl populations through hunting regulations and habitat management. This report examines the ability to analyze annual production by monitoring the number of breeding and brood ponds that are present, and the ability to assess waterfowl habitat based on the various relationships between ponds and the surrounding upland terrain types. The basic conclusions of this report are that: 1) Landsat data can be used to improve estimates of pond numbers which may be correlated with duck production; and 2) Landsat data can be used to generate information on terrain types which subsequently can be used to assess relative waterfowl habitat quality.

  6. Use of Thematic Mapper for water quality assessment

    NASA Technical Reports Server (NTRS)

    Horn, E. M.; Morrissey, L. A.

    1984-01-01

    The evaluation of simulated TM data obtained from an ER-2 aircraft at twenty-five predesignated sample sites for mapping water quality factors such as conductivity, pH, suspended solids, turbidity, temperature, and depth is discussed. Using multiple regression over the seven TM bands, an equation is developed for suspended solids. TM bands 1, 2, 3, 4, and 6 are used with the logarithm of conductivity in a multiple regression. The regression equations are assessed for a high coefficient of determination (R-squared) and statistical significance. Confidence intervals about the mean regression point are calculated to assess the robustness of the regressions used for mapping conductivity, turbidity, and suspended solids, and cross-validation is conducted by regressing random subsamples of sites and comparing the resulting range of R-squared values.
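The band-to-parameter regression described above can be sketched in its simplest single-band form. In the sketch below, the reflectance and suspended-solids values are invented for illustration (the study itself regressed against several TM bands at once):

```python
def fit_line(x, y):
    """Ordinary least-squares fit y ≈ a + b*x,
    returning (intercept, slope, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                       # slope
    a = my - b * mx                     # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot          # coefficient of determination
    return a, b, r2

# Hypothetical band reflectances (DN) vs. measured suspended solids (mg/L).
band_dn = [20, 28, 35, 41, 50]
solids  = [5, 9, 14, 17, 22]
a, b, r2 = fit_line(band_dn, solids)
```

The multi-band case replaces the single slope with a coefficient per band, but the R-squared assessment and the subsampling-based cross-validation work the same way.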

  7. 42 CFR 425.500 - Measures to assess the quality of care furnished by an ACO.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM (CONTINUED) MEDICARE SHARED SAVINGS PROGRAM Quality Performance Standards and Reporting § 425.500 Measures to assess the quality of care furnished by an ACO. (a) General. CMS establishes quality performance measures to assess the quality of...

  8. 42 CFR 425.500 - Measures to assess the quality of care furnished by an ACO.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM (CONTINUED) MEDICARE SHARED SAVINGS PROGRAM Quality Performance Standards and Reporting § 425.500 Measures to assess the quality of care furnished by an ACO. (a) General. CMS establishes quality performance measures to assess the quality of...

  9. Quality Assessment of University Studies as a Service: Dimensions and Criteria

    ERIC Educational Resources Information Center

    Pukelyte, Rasa

    2010-01-01

    This article reviews a possibility to assess university studies as a service. University studies have to be of high quality both in their content and in the administrative level. Therefore, quality of studies as a service is an important constituent part of study quality assurance. When assessing quality of university studies as a service, it is…

  10. Assessment of porous asphalt pavement performance: hydraulics and water quality

    NASA Astrophysics Data System (ADS)

    Briggs, J. F.; Ballestero, T. P.; Roseen, R. M.; Houle, J. J.

    2005-05-01

    The objective of this study is to focus on the water quality treatment and hydraulic performance of a porous asphalt pavement parking lot in Durham, New Hampshire. The site was constructed in October 2004 to assess the suitability of porous asphalt pavement for stormwater management in cold climates. The facility consists of a 4-inch asphalt open-graded friction course layer overlying a high-porosity sand and gravel base. This base serves as a storage reservoir between storms, from which stored water can slowly infiltrate into the ground. Details on the design, construction, and cost of the facility will be presented. The porous asphalt pavement is qualitatively monitored for signs of distress, especially those due to cold-climate stresses such as plowing, sanding, salting, and freeze-thaw cycles. Life-cycle predictions are discussed. Surface infiltration rates are measured with a constant head device built specifically to test high-infiltration-capacity pavements. The test measures infiltration rates in a single 4-inch diameter column temporarily sealed to the pavement at its base. A surface inundation test, as described by Bean, is also conducted as a basis for comparison of results (Bean, 2004). These tests assess infiltration rates soon after installation, throughout the winter, during snowmelt, after a winter of salting, sanding, and plowing, and after vacuuming in the spring. Frost penetration into the subsurface reservoir is monitored with a frost gauge. The hydrologic effects of the system are evaluated. Water levels are monitored in the facility and in surrounding wells with continuously logging pressure transducers. The 6-inch underdrain pipe that conveys excess water in the subsurface reservoir to a riprap pad is also continuously monitored for flow. Since porous asphalt pavement systems infiltrate surface water into the subsurface, it is important to assess whether water quality treatment performance in the subsurface reservoir is adequate. The assumed influent water quality is…

  11. A tool for a comprehensive assessment of treated wastewater quality.

    PubMed

    Silva, Catarina; Quadros, Sílvia; Ramalho, Pedro; Rosa, Maria João

    2014-12-15

    The main goal of a wastewater treatment plant (WWTP) is to comply with the treated wastewater (TWW) quality requirements. However, the assessment of this compliance is a rather complex process for WWTPs in the EU Member States, since it requires the integration of a large volume of data and several criteria according to EU Directives 91/271/EEC and 2000/60/EC. A tool for a comprehensive assessment of TWW quality in this context is herein presented. The tool's novelty relies on an integrated analysis of performance indicators (PIs) and new performance indices (PXs). PIs integrate the several compliance criteria into a single framework, supported by flowcharts for a straightforward assessment of TWW compliance by practitioners. PXs are obtained by applying a performance function to the concentration values analysed in the TWW for discharge or reuse. PXs are dimensionless and the scale adopted (0-300) defines three performance levels: unsatisfactory, acceptable and good performance. The reference values proposed for these levels and for the PIs were based on the EU legislation. The PXs complement the information provided by the PIs. While the latter assess the plant effectiveness in a given year (i.e. the TWW compliance with the requirements), PXs tackle the plant reliability: they make it easy to compare the performance of different parameters over time, to identify when performance met or failed the pre-established objectives, and to gauge the distance remaining to these targets. The tool was tested in 17 WWTPs and the most representative results are illustrated herein.
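The PX idea, a performance function mapping a measured concentration onto a dimensionless 0-300 scale with three levels, can be sketched as follows. The piecewise-linear shape, the thresholds, and the parameter (BOD with the 91/271/EEC limit of 25 mg/L) are illustrative assumptions, not the paper's actual performance functions:

```python
def performance_index(value, good_limit, acceptable_limit):
    """Map a measured concentration (lower is better) onto a 0-300
    performance index. Assumed piecewise-linear shape: 300 at or below
    good_limit, 100 at acceptable_limit, 0 at twice acceptable_limit."""
    if value <= good_limit:
        return 300.0                                   # good performance
    if value <= acceptable_limit:
        frac = (value - good_limit) / (acceptable_limit - good_limit)
        return 300.0 - 200.0 * frac                    # good → acceptable
    if value <= 2 * acceptable_limit:
        frac = (value - acceptable_limit) / acceptable_limit
        return 100.0 - 100.0 * frac                    # acceptable → 0
    return 0.0                                         # unsatisfactory

# Illustrative BOD values (mg/L) against assumed limits of 15 and 25.
px_low, px_mid, px_high = (performance_index(v, 15.0, 25.0)
                           for v in (10.0, 20.0, 30.0))
```

Computing such an index per parameter and per monitoring date is what lets the tool track reliability over time, complementing the yearly pass/fail view of the PIs.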

  12. Agreement in Quality of Life Assessment between Adolescents with Intellectual Disability and Their Parents

    ERIC Educational Resources Information Center

    Golubovic, Spela; Skrbic, Renata

    2013-01-01

    Intellectual disability affects different aspects of functioning and quality of life, as well as the ability to independently assess the quality of life itself. The paper examines the agreement in the quality of life assessments made by adolescents with intellectual disability and their parents compared with assessments made by adolescents without…

  13. Transcription factor motif quality assessment requires systematic comparative analysis

    PubMed Central

    Kibet, Caleb Kipkurui; Machanick, Philip

    2016-01-01

    Transcription factor (TF) binding site prediction remains a challenge in gene regulatory research due to degeneracy and potential variability in binding sites in the genome. Dozens of algorithms designed to learn binding models (motifs) have generated many motifs available in research papers with a subset making it to databases like JASPAR, UniPROBE and Transfac. The presence of many versions of motifs from the various databases for a single TF and the lack of a standardized assessment technique makes it difficult for biologists to make an appropriate choice of binding model and for algorithm developers to benchmark, test and improve on their models. In this study, we review and evaluate the approaches in use, highlight differences and demonstrate the difficulty of defining a standardized motif assessment approach. We review scoring functions, motif length, test data and the type of performance metrics used in prior studies as some of the factors that influence the outcome of a motif assessment. We show that the scoring functions and statistics used in motif assessment influence ranking of motifs in a TF-specific manner. We also show that TF binding specificity can vary by source of genomic binding data. We further demonstrate that the information content of a motif is not in isolation a measure of motif quality but is influenced by TF binding behaviour. We conclude that there is a need for an easy-to-use tool that presents all available evidence for a comparative analysis. PMID:27092243
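To make two of the quantities discussed above concrete, here is a minimal sketch of a log-odds scoring function for a position weight matrix (PWM) and of its information content. The PWM itself is hypothetical; real motifs come from databases such as JASPAR, UniPROBE or Transfac:

```python
import math

# Hypothetical PWM for a length-3 motif: base probabilities per position.
PWM = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
]
BG = 0.25  # uniform background model

def log_odds(site: str) -> float:
    """Sum over positions of log2(p_motif / p_background)."""
    return sum(math.log2(PWM[i][b] / BG) for i, b in enumerate(site))

def information_content(pwm) -> float:
    """Total IC in bits: per position, 2 + sum_b p_b * log2(p_b)."""
    return sum(2.0 + sum(p * math.log2(p) for p in col.values() if p > 0)
               for col in pwm)

def best_hit(seq: str, k: int = 3):
    """Highest-scoring k-mer in a sequence (simple sliding-window scan)."""
    return max(((seq[i:i + k], log_odds(seq[i:i + k]))
                for i in range(len(seq) - k + 1)), key=lambda t: t[1])
```

Note how the IC here (~1.9 bits for 3 positions) reflects the motif's degeneracy, which is one reason the abstract argues IC alone is not a quality measure.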

  14. Chemometrics quality assessment of wastewater treatment plant effluents using physicochemical parameters and UV absorption measurements.

    PubMed

    Platikanov, S; Rodriguez-Mozaz, S; Huerta, B; Barceló, D; Cros, J; Batle, M; Poch, G; Tauler, R

    2014-07-01

    Chemometric techniques like Principal Component Analysis (PCA) and Partial Least Squares Regression (PLS) are used to explore, analyze and model relationships among different water quality parameters in wastewater treatment plants (WWTP). Different data sets generated by laboratory analysis and by an automatic multi-parametric monitoring system with a newly designed optical device have been investigated for temporal variations in water quality parameters measured in the water influent and effluent of a WWTP over different time scales. The obtained results allowed the discovery of the more important relationships among the monitored parameters and of their cyclic dependence on time (daily, monthly and annual cycles) and on different plant management procedures. This study also aimed at the modeling and prediction of concentrations of several water components and parameters especially relevant for water quality assessment, such as Dissolved Organic Matter (DOM), Total Organic Carbon (TOC), nitrate, detergent, and phenol. PLS models were built to correlate target concentrations of these constituents with UV spectra measured in samples collected at (1) laboratory conditions (in synthetic water mixtures); and at (2) WWTP conditions (in real water samples from the plant). Using synthetic water mixtures, specific wavelengths were selected with the aim to establish simple and reliable prediction models, which gave good relative predictions with errors of around 3-4% for nitrate, detergent and phenol concentrations and of around 15% for the DOM in external validation. In the case of modeling nitrate and TOC concentrations in real water samples from the effluent of the WWTP using the reduced spectral data set, results were also promising with low prediction errors (less than 20%). PMID:24726963
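The "simple prediction models at selected wavelengths" idea can be sketched as an ordinary least-squares calibration on two absorbance channels. The study itself used PLS on full and reduced UV spectra; this pure-Python fit, with hypothetical data, only illustrates the selected-wavelength calibration step:

```python
def fit_two_wavelength_model(A1, A2, y):
    """Least-squares fit of y ~ b0 + b1*A1 + b2*A2 via the normal
    equations, solved by 3x3 Gaussian elimination with pivoting."""
    n = len(y)
    cols = [[1.0] * n, list(A1), list(A2)]      # design matrix columns
    M = [[sum(cols[i][k] * cols[j][k] for k in range(n)) for j in range(3)]
         for i in range(3)]                      # X^T X
    v = [sum(cols[i][k] * y[k] for k in range(n)) for i in range(3)]  # X^T y
    for c in range(3):                           # forward elimination
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        v[c], v[p] = v[p], v[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
            v[r] -= f * v[c]
    b = [0.0] * 3                                # back substitution
    for c in (2, 1, 0):
        b[c] = (v[c] - sum(M[c][j] * b[j] for j in range(c + 1, 3))) / M[c][c]
    return b
```

With synthetic data generated as y = 2 + 3*A1 - A2, the fit recovers those coefficients exactly, which is the kind of check one would run before calibrating on real effluent spectra.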

  16. Indoor Air Quality Assessment of the San Francisco Federal Building

    SciTech Connect

    Apte, Michael; Bennett, Deborah H.; Faulkner, David; Maddalena, Randy L.; Russell, Marion L.; Spears, Michael; Sullivan, Douglas P; Trout, Amber L.

    2008-07-01

    An assessment of the indoor air quality (IAQ) of the San Francisco Federal Building (SFFB) was conducted on May 12 and 14, 2009 at the request of the General Services Administration (GSA). The purpose of the assessment was a general screening of IAQ parameters typically indicative of well-functioning building systems. One naturally ventilated space and one mechanically ventilated space were studied. In both zones, the levels of indoor air contaminants, including CO2, CO, particulate matter, volatile organic compounds, and aldehydes, were low relative to reference exposure levels and air quality standards for comparable office buildings. We found slightly elevated levels of volatile organic compounds (VOCs), including two compounds often found in "green" cleaning products. In addition, we found two industrial solvents at levels higher than typically seen in office buildings, but the levels were not sufficient to be of health concern. The ventilation rates in the two study spaces were high by any standard. Ventilation rates in the building should be further investigated and adjusted to be in line with the building design. Based on our measurements, we conclude that the IAQ is satisfactory in the zones we tested, but IAQ may need to be re-checked after the ventilation rates have been lowered.

  17. Quality assessment of butter cookies applying multispectral imaging.

    PubMed

    Andresen, Mette S; Dissing, Bjørn S; Løje, Hanne

    2013-07-01

    A method for characterization of butter cookie quality by assessing the surface browning and water content using multispectral images is presented. Based on evaluations of the browning of butter cookies, cookies were manually divided into groups. From this categorization, reference values were calculated for a statistical prediction model correlating multispectral images with a browning score. The browning score is calculated as a function of oven temperature and baking time. It is presented as a quadratic response surface. The investigated process window was the intervals 4-16 min and 160-200°C in a forced convection electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis showed that the most significant wavelengths for browning predictions were in the interval 400-700 nm and the wavelengths significant for water prediction were primarily located in the near-infrared spectrum. The water prediction model was found to correctly estimate the average water content with an absolute error of 0.22%. From the images it was also possible to follow the browning and drying propagation from the cookie edge toward the center.
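The quadratic response surface for the browning score as a function of baking time and oven temperature can be sketched as follows. The coefficients below are invented for illustration; the paper's fitted surface is not given in the abstract:

```python
# Hypothetical coefficients of a quadratic response surface:
# browning(t, T) = b0 + b1*t + b2*T + b3*t^2 + b4*T^2 + b5*t*T
COEF = (-20.0, 0.6, 0.2, 0.01, -0.0004, 0.003)

def browning(t_min: float, temp_c: float) -> float:
    """Evaluate the (hypothetical) browning-score surface."""
    b0, b1, b2, b3, b4, b5 = COEF
    return (b0 + b1 * t_min + b2 * temp_c
            + b3 * t_min**2 + b4 * temp_c**2 + b5 * t_min * temp_c)

def settings_for(target: float, tol: float = 0.5):
    """Grid-search the stated process window (4-16 min, 160-200 degC)
    for time/temperature pairs hitting a target browning score."""
    return [(t, T) for t in range(4, 17) for T in range(160, 201, 5)
            if abs(browning(t, T) - target) <= tol]
```

Inverting the surface this way is one practical use of such a model: pick the baking conditions that produce a desired degree of browning.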

  18. Quality assessment of perinatal and infant postmortem examinations in Turkey.

    PubMed

    Pakis, Isil; Karapirli, Mustafa; Karayel, Ferah; Turan, Arzu; Akyildiz, Elif; Polat, Oguz

    2008-09-01

    An autopsy examination is important in identifying the cause of death and as a means of auditing clinical and forensic practice; however, especially in the perinatal and infantile age groups, determining the cause of death presents some difficulties in autopsy practice. In this study, 15,640 autopsies recorded during the years 2000-2004 in the Mortuary Department of the Council of Forensic Medicine were reviewed. Autopsy findings of 510 cases between 20 completed weeks of gestation and 1 year of age were analyzed retrospectively. The quality of each necropsy report was assessed using a modification of the scoring system described by Rushton, which objectively scores aspects identified by the Royal College of Pathologists as being part of a necropsy. According to their ages, the cases were subdivided into three groups. Intrauterine deaths were 31% (158 cases), neonatal deaths were 24% (123 cases), and infantile deaths were 45% (229 cases) of all cases. Scores for the quality of the necropsy report were above the minimum acceptable score in 44% of intrauterine deaths and in 88% of neonatal and infantile deaths.

  19. Assessing Raw and Treated Water Quality Using Fluorescence Spectroscopy

    NASA Astrophysics Data System (ADS)

    Bridgeman, J.; Baker, A.

    2006-12-01

    To date, much fluorescence spectroscopy work has focused on the use of techniques to characterize pollution in river water and to fingerprint pollutants such as, inter alia, treated and raw sewage effluent. In the face of tightening water quality standards associated with disinfection byproducts, there exists the need for a surrogate THM parameter which can be measured accurately and quickly at the water treatment works and which will give a satisfactory indication of the THM concentration leaving the water treatment works. In addition, water treatment works and distribution system managers require tools which are simple and quick, yet robust, to monitor plant and unit process performance. We extend the use of fluorescence techniques from raw water quality monitoring to (1) the monitoring of water treatment works intakes, (2) the assessment of water treatment works performance, by tracking the removal of dissolved organic matter (DOM) through the unit process stages of various water treatment works treating different raw waters, and (3) the examination of the prevalence of microbiological activity found at service reservoirs in the downstream distribution system. 16 surface water treatment works were selected in the central region of the UK and samples taken at works' intakes, downstream of each unit process, and in the distribution systems. The intakes selected abstract water from a broad range of upland and lowland water sources with varying natural and anthropogenic pollutant inputs and significantly different flows. The treatment works selected offer a range of different, but relatively standard, unit processes. The results demonstrate that raw waters exhibit more fluorescence than (partially) treated waters. However, noticeable differences between each site are observed. Furthermore, differences in unit process performance between works are also identified and quantified. Across all sites, treatment with Granular Activated Carbon is found to yield a significant

  20. Can water quality of tubewells be assessed without chemical testing?

    NASA Astrophysics Data System (ADS)

    Hoque, Mohammad A.; Butler, Adrian P.

    2016-04-01

    Arsenic is one of the major pollutants found in aquifers on a global scale. The screening of tubewells for arsenic has helped many people to avoid drinking from highly polluted wells in the Bengal Delta (West Bengal and Bangladesh). However, there are still many millions of tubewells in Bangladesh yet to be tested, and a substantial proportion of these are likely to contain excessive arsenic. Due to the level of poverty and lack of infrastructure, it is unlikely that the rest of the tubewells will be tested quickly. However, water quality assessment without chemical testing may be helpful in this case. Studies have found that qualitative factors, such as staining in the tubewell basement and/or on utensils, can indicate subsurface geology and water quality. The science behind this staining is well established: red staining is associated with iron reduction leading to release of arsenic, whilst black staining is associated with manganese reduction (any arsenic released by manganese reduction is sorbed back onto the, yet to be reduced, iron), whereas mixed staining may indicate overlapping manganese and iron reduction at the tubewell screen. Reduction is not uniform everywhere, and hence chemical water quality, including dissolved arsenic, varies from place to place. This is why coupling existing tubewell arsenic information with user-derived staining data could be useful in predicting the arsenic status at a particular site. Using well location and depth, along with the colour of staining, an assessment of both good (nutrients) and bad (toxins and pathogens) substances in the tubewell could be provided. Social-network technology, combined with the increasing use of smartphones, provides a powerful opportunity for both sharing and providing feedback to the user. Here we outline how a simple digital application can capture both qualitative and quantitative tubewell data in a centralised interactive database and provide manipulated feedback to an
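The staining-to-geochemistry reasoning described above is essentially a rule lookup, which is what a simple digital application would encode. A toy sketch (the labels are illustrative summaries of the abstract's rules, not a substitute for chemical testing):

```python
# Rule-of-thumb mapping from observed staining colour to the implied
# geochemistry, paraphrased from the abstract; labels are illustrative.
STAIN_RULES = {
    "red":   "iron reduction; elevated arsenic release possible",
    "black": "manganese reduction; released arsenic largely re-sorbed onto iron phases",
    "mixed": "overlapping manganese and iron reduction at the well screen",
}

def interpret_staining(colour: str) -> str:
    """Look up the staining rule; fall back to recommending a test."""
    return STAIN_RULES.get(colour.lower(), "no staining rule; chemical test needed")
```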

  1. Chip Design Process Optimization Based on Design Quality Assessment

    NASA Astrophysics Data System (ADS)

    Häusler, Stefan; Blaschke, Jana; Sebeke, Christian; Rosenstiel, Wolfgang; Hahn, Axel

    2010-06-01

    Nowadays, the managing of product development projects is increasingly challenging. Especially the IC design of ASICs with both analog and digital components (mixed-signal design) is becoming more and more complex, while the time-to-market window narrows at the same time. Still, high quality standards must be fulfilled. Projects and their status are becoming less transparent due to this complexity. This makes the planning and execution of projects rather difficult. Therefore, there is a need for efficient project control. A main challenge is the objective evaluation of the current development status. Are all requirements successfully verified? Are all intermediate goals achieved? Companies often develop special solutions that are not reusable in other projects. This makes the quality measurement process itself less efficient and produces too much overhead. The method proposed in this paper is a contribution to solve these issues. It is applied at a German design house for analog mixed-signal IC design. This paper presents the results of a case study and introduces an optimized project scheduling on the basis of quality assessment results.

  2. State-Level Cancer Quality Assessment and Research

    PubMed Central

    Lipscomb, Joseph; Gillespie, Theresa W.

    2016-01-01

    Over a decade ago, the Institute of Medicine called for a national cancer data system in the United States to support quality-of-care assessment and improvement, including research on effective interventions. Although considerable progress has been achieved in cancer quality measurement and effectiveness research, the nation still lacks a population-based data infrastructure for accurately identifying cancer patients and tracking services and outcomes over time. For compelling reasons, the most effective pathway forward may be the development of state-level cancer data systems, in which central registry data are linked to multiple public and private secondary sources. These would include administrative/claims files from Medicare, Medicaid, and private insurers. Moreover, such a state-level system would promote rapid learning by encouraging adoption of near-real-time reporting and feedback systems, such as the Commission on Cancer’s new Rapid Quality Reporting System. The groundwork for such a system is being laid in the state of Georgia, and similar work is advancing in other states. The pace of progress depends on the successful resolution of issues related to the application of information technology, financing, and governance. PMID:21799333

  3. Smolt Quality Assessment of Spring Chinook Salmon : Annual Report.

    SciTech Connect

    Zaugg, Waldo S.

    1991-04-01

    The physiological development and physiological condition of spring chinook salmon are being studied at several hatcheries in the Columbia River Basin. The purpose of the study is to determine whether any or several smolt indices can be related to adult recovery and be used to improve hatchery effectiveness. The tests conducted in 1989 on juvenile chinook salmon at Dworshak, Leavenworth, and Warm Springs National Fish Hatcheries, and the Oregon State Willamette Hatchery assessed saltwater tolerance, gill ATPase, cortisol, insulin, thyroid hormones, secondary stress, fish morphology, metabolic energy stores, immune response, blood cell numbers, and plasma ion concentrations. The study showed that smolt development may have occurred before the fish were released from the Willamette Hatchery, but not from the Dworshak, Leavenworth, or Warm Springs Hatcheries. These results will be compared to adult recovery data when they become available, to determine which smolt quality indices may be used to predict adult recovery. The relative rankings of smolt quality at the different hatcheries do not necessarily reflect the competency of the hatchery managers and staff, who have shown a high degree of professionalism and expertise in fish rearing. We believe that the differences in smolt quality are due to the interaction of genetic and environmental factors. One aim of this research is to identify factors that influence smolt development and that may be controlled through fish husbandry to regulate smolt development. 35 refs., 27 figs., 5 tabs.

  4. Visual quality assessment of electrochromic and conventional glazings

    SciTech Connect

    Moeck, M.; Lee, E.S.; Rubin, M.D.; Sullivan, R.; Selkowitz, S.E.

    1996-09-01

    Variable transmission, "switchable" electrochromic glazings are compared to conventional static glazings using computer simulations to assess the daylighting quality of a commercial office environment where paper and computer tasks are performed. RADIANCE simulations were made for a west-facing commercial office space under clear and overcast sky conditions. This visualization tool was used to model different glazing types, to compute luminance and illuminance levels, and to generate a parametric set of photorealistic images of typical interior views at various times of the day and year. Privacy and visual display terminal (VDT) visibility are explored. Electrochromic glazings result in a more consistent glare-free daylit environment compared to their static counterparts. However, if the glazing is controlled to minimize glare or to maintain low interior daylight levels for critical visual tasks (e.g., VDT), occupants may object to the diminished quality of the outdoor view due to its low transmission (Tv = 0.08) during those hours. RADIANCE proved to be a very powerful tool to better understand some of the design tradeoffs of this emerging glazing technology. The ability to draw specific conclusions about the relative value of different technologies or control strategies is limited by the lack of agreed-upon criteria or standards for lighting quality and visibility.

  5. An integrated modeling process to assess water quality for watersheds

    NASA Astrophysics Data System (ADS)

    Bhuyan, Samarjyoti

    2001-07-01

    An integrated modeling process has been developed that combines remote sensing, Geographic Information Systems (GIS), and the Agricultural NonPoint Source Pollution (AGNPS) hydrologic model to assess water quality of a watershed. Remotely sensed Landsat Thematic Mapper (TM) images were used to obtain various land cover information of a watershed including sub-classes of rangeland and wheat land based on the estimates of vegetative cover and crop residue, respectively. AGNPS model input parameters including Universal Soil Loss Equation's (USLE) cropping factors (C-factors) were assigned to the landcover classes. The AGNPS-ARC INFO interface was used to extract input parameters from several GIS layers for the AGNPS model during several selected storm events for the sub-watersheds. Measured surface water quantity and quality data for these storm events were obtained from U.S. Geological Survey (USGS) gaging stations. Base flow separation was done to remove the base flow fraction of water and total suspended sediment (TSS), total nitrogen (total-N), and total phosphorous (total-P) from the total stream flow. Continuous antecedent moisture content ratios were developed for the sub-watersheds during the storm events and were used to adjust the Soil Conservation Service-Curve Numbers (SCS-CN) of various landcovers. A relationship was developed between storm amounts and estimated energy intensity (EI) values using a probability method (Koelliker and Humbert, 1989), and the EI values were used in running the AGNPS model input files. Several model parameters were calibrated against the measured water quality data and then the model was run on different sub-watersheds to evaluate the modeling process. This modeling process was found to be effective for smaller sub-watersheds having adequate rainfall data. However, in the case of large sub-watersheds with substantial variations of rainfall and landcover, this process was less satisfactory. 
This integrated modeling process will
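Two computations referred to above, the SCS Curve Number runoff estimate and the antecedent-moisture adjustment of the CN, follow standard published formulas. A sketch (not code from the dissertation; depths in inches, and the dry/wet conversions are the usual Hawkins-style approximations):

```python
def scs_runoff_inches(p_in: float, cn: float) -> float:
    """SCS-CN runoff depth Q for storm depth P (inches):
    S = 1000/CN - 10; Q = (P - 0.2*S)^2 / (P + 0.8*S) when P > 0.2*S,
    otherwise zero (all rainfall absorbed by initial abstraction)."""
    s = 1000.0 / cn - 10.0
    ia = 0.2 * s
    return 0.0 if p_in <= ia else (p_in - ia) ** 2 / (p_in + 0.8 * s)

def adjust_cn_for_moisture(cn2: float, condition: str) -> float:
    """Adjust the average-condition curve number (CN II) for dry (AMC I)
    or wet (AMC III) antecedent moisture, standard conversion formulas."""
    if condition == "dry":
        return 4.2 * cn2 / (10.0 - 0.058 * cn2)
    if condition == "wet":
        return 23.0 * cn2 / (10.0 + 0.13 * cn2)
    return cn2
```

For example, a 2-inch storm on CN = 80 ground (S = 2.5 in, Ia = 0.5 in) yields Q = 1.5^2 / 4 = 0.5625 in of runoff.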

  6. Louisiana Quality Start Child Care Rating System: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Louisiana's Quality Start Child Care Rating System prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs;…

  7. Tennessee Star-Quality Child Care Program: QRS Profile. The Child Care Quality Rating System (QRS) Assessment

    ERIC Educational Resources Information Center

    Child Trends, 2010

    2010-01-01

    This paper presents a profile of Tennessee's Star-Quality Child Care Program prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4)…

  8. Microbiological assessment of indoor air quality at different hospital sites.

    PubMed

    Cabo Verde, Sandra; Almeida, Susana Marta; Matos, João; Guerreiro, Duarte; Meneses, Marcia; Faria, Tiago; Botelho, Daniel; Santos, Mateus; Viegas, Carla

    2015-09-01

    Poor hospital indoor air quality (IAQ) may lead to hospital-acquired infections, sick hospital syndrome and various occupational hazards. Air-control measures are crucial for reducing dissemination of airborne biological particles in hospitals. The objective of this study was to perform a survey of bioaerosol quality in different sites in a Portuguese Hospital, namely the operating theater (OT), the emergency service (ES) and the surgical ward (SW). Aerobic mesophilic bacterial counts (BCs) and fungal load (FL) were assessed by impaction directly onto tryptic soy agar and malt extract agar supplemented with the antibiotic chloramphenicol (0.05%) plates, respectively, using a MAS-100 air sampler. The ES revealed the highest airborne microbial concentrations (BC range 240-736 CFU/m(3); FL range 27-933 CFU/m(3)), exceeding, at several sampling sites, conformity criteria defined in national legislation [6]. Bacterial concentrations in the SW (BC range 99-495 CFU/m(3)) and the OT (BC range 12-170 CFU/m(3)) were under recommended criteria. While fungal levels were below 1 CFU/m(3) in the OT, in the SW (range 1-32 CFU/m(3)) there existed a site with fungal indoor concentrations higher than those detected outdoors. Airborne Gram-positive cocci were the most frequent phenotype (88%) detected in the measured bacterial population in all indoor environments. Staphylococcus (51%) and Micrococcus (37%) were dominant among the bacterial genera identified in the present study. Concerning indoor fungal characterization, the prevalent genera were Penicillium (41%) and Aspergillus (24%). Regular monitoring is essential for assessing air control efficiency and for detecting irregular introduction of airborne particles via clothing of visitors and medical staff or carriage by personal and medical materials. Furthermore, microbiological survey data should be used to clearly define specific air quality guidelines for controlled environments in hospital settings.
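Plate counts from an impaction sampler convert to airborne concentrations in a single step. A sketch assuming the MAS-100's nominal 100 L/min sampling rate, with no positive-hole correction applied (the function name and default are assumptions, not from the paper):

```python
def cfu_per_m3(colonies: int, sample_minutes: float,
               flow_l_per_min: float = 100.0) -> float:
    """Airborne concentration (CFU/m^3) from a plate count.
    1 m^3 = 1000 L; the default flow assumes a MAS-100-style sampler
    drawing a nominal 100 L/min."""
    litres_sampled = flow_l_per_min * sample_minutes
    return colonies * 1000.0 / litres_sampled
```

For instance, 25 colonies from a 1-minute (100 L) sample correspond to 250 CFU/m(3), within the ES bacterial range reported above.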

  10. Incorporating detection tasks into the assessment of CT image quality

    NASA Astrophysics Data System (ADS)

    Scalzetti, E. M.; Huda, W.; Ogden, K. M.; Khan, M.; Roskopf, M. L.; Ogden, D.

    2006-03-01

    The purpose of this study was to compare traditional and task-dependent assessments of CT image quality. Chest CT examinations were obtained with a standard protocol for subjects participating in a lung cancer-screening project. Images were selected for patients whose weight ranged from 45 kg to 159 kg. Six ABR-certified radiologists subjectively ranked these images using a traditional six-point ranking scheme that ranged from 1 (inadequate) to 6 (excellent). Three subtle diagnostic tasks were identified: (1) a lung section containing a sub-centimeter nodule of ground-glass opacity in an upper lung; (2) a mediastinal section with a lymph node of soft tissue density in the mediastinum; (3) a liver section with a rounded low attenuation lesion in the liver periphery. Each observer was asked to estimate the probability of detecting each type of lesion in the appropriate CT section using a six-point scale ranging from 1 (< 10%) to 6 (> 90%). Traditional and task-dependent measures of image quality were plotted as a function of patient weight. For the lung section, task-dependent evaluations were very similar to those obtained using the traditional scoring scheme, but with larger inter-observer differences. Task-dependent evaluations for the mediastinal section showed no obvious trend with subject weight, whereas the traditional score decreased from ~4.9 for smaller subjects to ~3.3 for the larger subjects. Task-dependent evaluations for the liver section showed a decreasing trend from ~4.1 for the smaller subjects to ~1.9 for the larger subjects, whereas the traditional evaluation had a markedly narrower range of scores. A task-dependent method of assessing CT image quality can be implemented with relative ease, and is likely to be more meaningful in the clinical setting.

  11. Quality-control design for surface-water sampling in the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Mueller, David K.; Martin, Jeffrey D.; Lopes, Thomas J.

    1997-01-01

    The data-quality objectives of the National Water-Quality Assessment Program include estimating the extent to which contamination, matrix effects, and measurement variability affect interpretation of chemical analyses of surface-water samples. The quality-control samples used to make these estimates include field blanks, field matrix spikes, and replicates. This report describes the design for collection of these quality-control samples in National Water-Quality Assessment Program studies and the data management needed to properly identify these samples in the U.S. Geological Survey's national data base.

  12. Towards a Fuzzy Expert System on Toxicological Data Quality Assessment.

    PubMed

    Yang, Longzhi; Neagu, Daniel; Cronin, Mark T D; Hewitt, Mark; Enoch, Steven J; Madden, Judith C; Przybylak, Katarzyna

    2013-01-01

    Quality assessment (QA) requires high levels of domain-specific experience and knowledge. QA tasks for toxicological data are usually performed by human experts manually, although a number of quality evaluation schemes have been proposed in the literature. For instance, the most widely utilised Klimisch scheme defines four data quality categories in order to tag data instances with respect to their quality; ToxRTool is an extension of the Klimisch approach aiming to increase the transparency and harmonisation of the approach. Note that the processes of QA in many other areas have been automated by employing expert systems. Briefly, an expert system is a computer program that uses a knowledge base built upon human expertise, and an inference engine that mimics the reasoning processes of human experts to infer new statements from incoming data. In particular, expert systems have been extended to deal with the uncertainty of information by representing uncertain information (such as linguistic terms) as fuzzy sets under the framework of fuzzy set theory and performing inferences upon fuzzy sets according to fuzzy arithmetic. This paper presents an experimental fuzzy expert system for toxicological data QA which is developed on the basis of the Klimisch approach and the ToxRTool in an effort to illustrate the power of expert systems to toxicologists, and to examine whether fuzzy expert systems are a viable solution for QA of toxicological data. This direction still faces great difficulties due to the well-known common challenge of toxicological data QA that "five toxicologists may have six opinions". In the meantime, this challenge may offer an opportunity for expert systems because the construction and refinement of the knowledge base could be a converging process of different opinions, which is of significant importance for regulatory policy making under the REACH regulation, though a consensus may never be reached. Also, in order to facilitate the implementation
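    The fuzzy-inference idea described above can be sketched in a few lines. The membership functions, the two input attributes, and the single min/max rule below are invented for illustration; the actual system encodes the Klimisch/ToxRTool criteria in its knowledge base.

```python
# Minimal fuzzy-inference sketch for data-quality scoring
# (hypothetical membership functions and rule; not the paper's system).

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess(documentation, method_rigour):
    """Map two quality attributes scored in [0, 10] to fuzzy memberships
    in two Klimisch-style categories."""
    good_doc = tri(documentation, 4.0, 10.0, 16.0)   # saturates at the top score
    good_rig = tri(method_rigour, 4.0, 10.0, 16.0)
    return {
        "reliable": min(good_doc, good_rig),          # rule: both attributes good
        "not_reliable": 1.0 - max(good_doc, good_rig),
    }

print(assess(8.0, 9.0))
```

The appeal of this structure is that each disputed expert opinion becomes a candidate membership function or rule, so refining the knowledge base is exactly the "converging process of different opinions" the abstract describes.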

  13. A Web-Based Assessment for Phonological Awareness, Rapid Automatized Naming (RAN) and Learning to Read Chinese

    ERIC Educational Resources Information Center

    Liao, Chen-Huei; Kuo, Bor-Chen

    2011-01-01

    The present study examined the equivalency of conventional and web-based tests in reading Chinese. Phonological awareness, rapid automatized naming (RAN), reading accuracy, and reading fluency tests were administered to 93 grade 6 children in Taiwan with both test versions (paper-pencil and web-based). The results suggest that conventional and…

  14. Some Aspects of the Technical Quality of Formative Assessments in Middle School Mathematics. CRESST Report 750

    ERIC Educational Resources Information Center

    Phelan, Julia; Kang, Taehoon; Niemi, David N.; Vendlinski, Terry; Choi, Kilchan

    2009-01-01

    While research suggests that formative assessment can be a powerful tool to support teaching and learning, efforts to jump on the formative assessment bandwagon have been more widespread than those to assure the technical quality of the assessments. This report covers initial analyses of data bearing on the quality of formative assessments in…

  15. Metabolomics: approaches to assessing oocyte and embryo quality.

    PubMed

    Singh, R; Sinclair, K D

    2007-09-01

    Morphological evaluation remains the primary method of embryo assessment during IVF cycles, but its modest predictive power and inherent inter- and intra-observer variability limits its value. Low-molecular weight metabolites represent the end products of cell regulatory processes and therefore reveal the response of biological systems to a variety of genetic, nutrient or environmental influences. It follows that the non-invasive quantification of oocyte and embryo metabolism, from the analyses of follicular fluid or culture media, may be a useful predictor of pregnancy outcome following embryo transfer, a potential supported by recent clinical studies working with specific classes of metabolites such as glycolytic intermediates and amino acids. Such selective approaches, however, whilst adhering closely to known cellular processes, may fail to harness the full potential of contemporary metabolomic methodologies, which can measure a wider spectrum of metabolites. However, an important technical drawback with many existing methodologies is the limited number of metabolites that can be determined by a single analytical platform. Vibrational spectroscopy methodologies such as Fourier transform infrared and near infrared spectroscopy may overcome these limitations by generating unique spectral signatures of functional groups and bonds, but their application in embryo quality assessment remains to be fully validated. Ultimately, a combination of evaluation criteria that include morphometry with metabolomics may provide the best predictive assessment of embryo viability.

  16. New strategy for image and video quality assessment

    NASA Astrophysics Data System (ADS)

    Ma, Qi; Zhang, Liming; Wang, Bin

    2010-01-01

    Image and video quality assessment (QA) is a critical issue in image and video processing applications. General full-reference (FR) QA criteria such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE) do not accord well with human subjective assessment. Some QA indices that consider human visual sensitivity, such as mean structural similarity (MSSIM) with structural sensitivity, visual information fidelity (VIF) with statistical sensitivity, etc., were proposed in view of the differences between reference and distorted frames at the pixel or local level. However, they ignore the role of human visual attention (HVA). Recently, some new strategies with HVA have been proposed, but their methods for extracting visual attention are too complex for real-time realization. We take advantage of the phase spectrum of quaternion Fourier transform (PQFT), a very fast algorithm we previously proposed, to extract saliency maps of color images or videos. We then propose saliency-based methods for both image QA (IQA) and video QA (VQA) by adding weights related to saliency features to the original IQA or VQA criteria. Experimental results show that our saliency-based strategy accords more closely with human subjective assessment than the original IQA or VQA methods, and adds little computation time thanks to the fast PQFT algorithm.
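    The weighting scheme above can be sketched compactly. For a single grey-level frame the quaternion transform reduces to an ordinary 2-D FFT, so this sketch keeps only the phase spectrum to build a saliency map and uses it to weight squared error; the normalisation choice and the absence of the usual post-smoothing are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

# Phase-spectrum saliency (grayscale special case of PQFT) used to
# weight MSE so that errors in salient regions count more.

def phase_saliency(img):
    f = np.fft.fft2(img)
    phase_only = np.exp(1j * np.angle(f))       # discard magnitude, keep phase
    sal = np.abs(np.fft.ifft2(phase_only)) ** 2
    return sal / sal.sum()                      # weights sum to 1

def saliency_weighted_mse(ref, dist):
    w = phase_saliency(ref)
    err = (ref.astype(float) - dist.astype(float)) ** 2
    return float((w * err).sum())
```

The same per-pixel weights could multiply the local terms of SSIM-style indices instead of squared error; only the pooling stage changes.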

  17. Image Quality and Radiation Dose of CT Coronary Angiography with Automatic Tube Current Modulation and Strong Adaptive Iterative Dose Reduction Three-Dimensional (AIDR3D)

    PubMed Central

    Shen, Hesong; Dai, Guochao; Luo, Mingyue; Duan, Chaijie; Cai, Wenli; Liang, Dan; Wang, Xinhua; Zhu, Dongyun; Li, Wenru; Qiu, Jianping

    2015-01-01

    Purpose To investigate image quality and radiation dose of CT coronary angiography (CTCA) scanned using automatic tube current modulation (ATCM) and reconstructed by strong adaptive iterative dose reduction three-dimensional (AIDR3D). Methods Eighty-four consecutive CTCA patients were included in the study. All patients were scanned using ATCM and reconstructed with strong AIDR3D, standard AIDR3D and filtered back-projection (FBP) respectively. Two radiologists who were blinded to the patients' clinical data and reconstruction methods evaluated image quality. Quantitative image quality evaluation included image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). For qualitative image quality evaluation, the coronary artery was classified into 15 segments based on the modified guidelines of the American Heart Association. Qualitative image quality was evaluated using a 4-point scale. Radiation dose was calculated based on dose-length product. Results Compared with standard AIDR3D, strong AIDR3D had lower image noise, higher SNR and CNR; their differences were all statistically significant (P<0.05); compared with FBP, strong AIDR3D decreased image noise by 46.1%, increased SNR by 84.7%, and improved CNR by 82.2%; their differences were all statistically significant (P<0.05 or 0.001). Segments with diagnostic image quality for strong AIDR3D were 336 (100.0%), 486 (96.4%), and 394 (93.8%) in the proximal, middle, and distal parts respectively; those for standard AIDR3D were 332 (98.8%), 472 (93.7%), 378 (90.0%), respectively; those for FBP were 217 (64.6%), 173 (34.3%), 114 (27.1%), respectively; total segments with diagnostic image quality in strong AIDR3D (1216, 96.5%) were higher than those of standard AIDR3D (1182, 93.8%) and FBP (504, 40.0%); the differences between strong AIDR3D and standard AIDR3D, and between strong AIDR3D and FBP, were all statistically significant (P<0.05 or 0.001). The mean effective radiation dose was (2.55±1.21) mSv. Conclusion
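    The noise, SNR, and CNR figures quoted above are conventionally computed from region-of-interest statistics. The sketch below uses the common convention of taking the background standard deviation as the noise estimate; exact definitions vary between papers, so treat this as a generic illustration rather than this study's protocol.

```python
import numpy as np

# Generic ROI-based noise, SNR and CNR measures for CT image quality.

def roi_stats(img, mask):
    vals = img[mask]
    return vals.mean(), vals.std(ddof=1)

def snr(img, signal_mask, background_mask):
    mu_s, _ = roi_stats(img, signal_mask)
    _, sd_b = roi_stats(img, background_mask)   # image noise estimate
    return mu_s / sd_b

def cnr(img, signal_mask, background_mask):
    mu_s, _ = roi_stats(img, signal_mask)
    mu_b, sd_b = roi_stats(img, background_mask)
    return abs(mu_s - mu_b) / sd_b
```

Iterative reconstruction lowers the background standard deviation, which is why both SNR and CNR rise together in the results reported above.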

  18. Assessing ECG signal quality indices to discriminate ECGs with artefacts from pathologically different arrhythmic ECGs.

    PubMed

    Daluwatte, C; Johannesen, L; Galeotti, L; Vicente, J; Strauss, D G; Scully, C G

    2016-08-01

    False and non-actionable alarms in critical care can be reduced by developing algorithms which assess the trueness of an arrhythmia alarm from a bedside monitor. Computational approaches that automatically identify artefacts in ECG signals are an important branch of physiological signal processing which tries to address this issue. Signal quality indices (SQIs) derived considering differences between artefacts which occur in ECG signals and normal QRS morphology have the potential to discriminate pathologically different arrhythmic ECG segments as artefacts. Using ECG signals from the PhysioNet/Computing in Cardiology Challenge 2015 training set, we studied previously reported ECG SQIs in the scientific literature to differentiate ECG segments with artefacts from arrhythmic ECG segments. We found that the ability of SQIs to discriminate between ECG artefacts and arrhythmic ECG varies based on arrhythmia type since the pathology of each arrhythmic ECG waveform is different. Therefore, to reduce the risk of SQIs classifying arrhythmic events as noise it is important to validate and test SQIs with databases that include arrhythmias. Arrhythmia specific SQIs may also minimize the risk of misclassifying arrhythmic events as noise.
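    Two of the simplest indices of the kind surveyed in the ECG signal-quality literature are sketched below (often called kSQI and pSQI). The band edges are common choices from that literature; any alarm-suppression threshold applied to these values would be an additional assumption.

```python
import numpy as np

# Two simple ECG signal-quality indices (illustrative sketch).

def ksqi(ecg):
    """Kurtosis SQI: clean ECG is strongly peaked (high kurtosis),
    while Gaussian-like noise gives a value near 3."""
    x = ecg - np.mean(ecg)
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

def psqi(ecg, fs):
    """Fraction of 5-40 Hz spectral power lying in the 5-15 Hz QRS band."""
    freqs = np.fft.rfftfreq(len(ecg), 1.0 / fs)
    power = np.abs(np.fft.rfft(ecg)) ** 2
    qrs = power[(freqs >= 5) & (freqs <= 15)].sum()
    broad = power[(freqs >= 5) & (freqs <= 40)].sum()
    return qrs / broad
```

The abstract's caution applies directly here: ventricular fibrillation, for example, also shifts spectral power and flattens kurtosis, so these indices can score a real arrhythmia as "noise" unless they are validated on arrhythmia databases.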

  20. SU-E-I-94: Automated Image Quality Assessment of Radiographic Systems Using An Anthropomorphic Phantom

    SciTech Connect

    Wells, J; Wilson, J; Zhang, Y; Samei, E; Ravin, Carl E.

    2014-06-01

    Purpose: In a large, academic medical center, consistent radiographic imaging performance is difficult to routinely monitor and maintain, especially for a fleet consisting of multiple vendors, models, software versions, and numerous imaging protocols. Thus, an automated image quality control methodology has been implemented using routine image quality assessment with a physical, stylized anthropomorphic chest phantom. Methods: The “Duke” Phantom (Digital Phantom 07-646, Supertech, Elkhart, IN) was imaged twice on each of 13 radiographic units from a variety of vendors at 13 primary care clinics. The first acquisition used the clinical PA chest protocol to acquire the post-processed “FOR PRESENTATION” image. The second image was acquired without an antiscatter grid followed by collection of the “FOR PROCESSING” image. Manual CNR measurements were made from the largest and thickest contrast-detail inserts in the lung, heart, and abdominal regions of the phantom in each image. An automated image registration algorithm was used to estimate the CNR of the same insert using similar ROIs. Automated measurements were then compared to the manual measurements. Results: Automatic and manual CNR measurements obtained from “FOR PRESENTATION” images had average percent differences of 0.42%±5.18%, −3.44%±4.85%, and 1.04%±3.15% in the lung, heart, and abdominal regions, respectively; measurements obtained from “FOR PROCESSING” images had average percent differences of −0.63%±6.66%, −0.97%±3.92%, and −0.53%±4.18%, respectively. The maximum absolute difference in CNR was 15.78%, 10.89%, and 8.73% in the respective regions. In addition to CNR assessment of the largest and thickest contrast-detail inserts, the automated method also provided CNR estimates for all 75 contrast-detail inserts in each phantom image. Conclusion: Automated analysis of a radiographic phantom has been shown to be a fast, robust, and objective means for assessing radiographic

  1. Availability of Structured and Unstructured Clinical Data for Comparative Effectiveness Research and Quality Improvement: A Multisite Assessment

    PubMed Central

    Capurro, Daniel; Yetisgen, Meliha; van Eaton, Erik; Black, Robert; Tarczy-Hornoch, Peter

    2014-01-01

    Introduction: A key attribute of a learning health care system is the ability to collect and analyze routinely collected clinical data in order to quickly generate new clinical evidence, and to monitor the quality of the care provided. To achieve this vision, clinical data must be easy to extract and stored in computer readable formats. We conducted this study across multiple organizations to assess the availability of such data specifically for comparative effectiveness research (CER) and quality improvement (QI) on surgical procedures. Setting: This study was conducted in the context of the data needed for the already established Surgical Care and Outcomes Assessment Program (SCOAP), a clinician-led, performance benchmarking, and QI registry for surgical and interventional procedures in Washington State. Methods: We selected six hospitals, managed by two Health Information Technology (HIT) groups, and assessed the ease of automated extraction of the data required to complete the SCOAP data collection forms. Each data element was classified as easy, moderate, or complex to extract. Results: Overall, a significant proportion of the data required to automatically complete the SCOAP forms was not stored in structured computer-readable formats, with more than 75 percent of all data elements being classified as moderately complex or complex to extract. The distribution differed significantly between the health care systems studied. Conclusions: Although highly desirable, a learning health care system does not automatically emerge from the implementation of electronic health records (EHRs). Innovative methods to improve the structured capture of clinical data are needed to facilitate the use of routinely collected clinical data for patient phenotyping. PMID:25848594

  2. Assessment of Quality of Life of Women with Breast Cancer

    PubMed Central

    Gavric, Zivana; Vukovic-Kostic, Zivana

    2016-01-01

    Background: Breast cancer is the most common type of cancer among women in 145 countries worldwide, and the success of healthcare for women with this disease is measured by the quality of life of survivors. The aim of this study was to examine how breast cancer affects quality of life and in which dimension of health-related quality of life is least accomplished. Method: A pilot study was performed from June 10 to August 15, 2011, on 100 women aged 20-75 from the Association of women with breast cancer “Iskra” in Banja Luka. The survey was based on the EORTC QLQ-C30 version 3.0 and the QLQ-BR23 questionnaire for assessing quality of life in those suffering from breast cancer, with 53 questions in total. Results: The average age of the women was 51.8 years (±11.23). Statistically significant differences (χ²(4)=221.941; p<0.01) were found: mean scores were higher for the overall functional scale (66.32±17.82) and cognitive functioning (63.50±28.00) than for role (46.83±20.88), social (37.00±27.58) and emotional (36.58±25.15) functioning. Mean scores on the symptom scales were statistically higher for fatigue, insomnia and pain than for other symptoms. Mean scores for the body image scale were statistically higher than those for the sexual functioning and enjoyment scales and the future perspective scale. Conclusion: Breast cancer affects all domains of quality of life; in our population this is most prominent in the domains of emotional, social and role functioning. Symptoms of fatigue, insomnia and pain have the most important influence on these domains. PMID:27157152

  3. Automatic Quality Inspection of Percussion Cap Mass Production by Means of 3D Machine Vision and Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Tellaeche, A.; Arana, R.; Ibarguren, A.; Martínez-Otzeta, J. M.

    Exhaustive quality control is becoming very important in the globalized world market. One example where quality control becomes critical is percussion cap mass production. These elements must meet a minimum tolerance deviation in their fabrication. This paper outlines a machine vision development using a 3D camera for the inspection of the whole production of percussion caps. This system presents multiple problems, such as metallic reflections in the percussion caps, high-speed movement of the system, and mechanical errors and irregularities in percussion cap placement. Due to these problems, traditional image processing methods cannot solve the task, and hence machine learning algorithms have been tested to provide a feasible classification of the possible errors present in the percussion caps.

  4. Latest processing status and quality assessment of the GOMOS, MIPAS and SCIAMACHY ESA dataset

    NASA Astrophysics Data System (ADS)

    Niro, F.; Brizzi, G.; Saavedra de Miguel, L.; Scarpino, G.; Dehn, A.; Fehr, T.; von Kuhlmann, R.

    2011-12-01

    The GOMOS, MIPAS and SCIAMACHY instruments have been successfully observing the Earth's changing atmosphere since the launch of ESA's ENVISAT platform in March 2002. The measurements recorded by these instruments are relevant to the atmospheric-chemistry community both for their time extent and for their variety of observing geometries and techniques. In order to fully exploit these measurements, it is crucial to maintain reliable data processing and distribution and to continuously improve the scientific output. The goal is to meet the evolving needs of both near-real-time and research applications. Within this frame, the ESA operational processor remains the reference code, although many scientific algorithms are nowadays available to users. In fact, the ESA algorithm has a well-established calibration and validation scheme, a certified quality assessment process and the ability to reach a wide users' community. Moreover, the ESA algorithm upgrade procedures and the re-processing performance have much improved during the last two years, thanks to recent updates of the Ground Segment infrastructure and overall organization. The aim of this paper is to promote the usage and stress the quality of the ESA operational dataset for the GOMOS, MIPAS and SCIAMACHY missions. The recent upgrades to the ESA processors (GOMOS V6, MIPAS V5 and SCIAMACHY V5) will be presented, with detailed information on improvements in the scientific output and preliminary validation results. The planned algorithm evolution and on-going re-processing campaigns, which involve advanced set-ups such as the MIPAS V6 re-processing on a cloud-computing system, will also be mentioned. Finally, the quality-control process that guarantees a standard of quality to users will be illustrated. In fact, the operational ESA algorithm is carefully tested before switching into operations, and the near-real-time and off-line production is thoughtfully verified via the

  5. Non-invasive assessment of bone quantity and quality in human trabeculae using scanning ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Xia, Yi

    Fractures and the associated bone fragility induced by osteoporosis and osteopenia are a widespread health threat in today's society. Early detection of fracture risk associated with bone quantity and quality is important for both the prevention and the treatment of osteoporosis and its complications. Quantitative ultrasound (QUS) is an engineering technology for monitoring the bone quantity and quality of humans on earth and of astronauts subjected to long-duration microgravity. Factors currently limiting the acceptance of QUS technology involve precision, accuracy, reliance on a single index, and standardization. The objective of this study was to improve the accuracy and precision of an image-based QUS technique for non-invasive evaluation of trabecular bone quantity and quality by developing new techniques and understanding ultrasound/tissue interaction. Several new techniques were developed in this dissertation study, including the automatic identification of irregular regions of interest (iROI) in bone, surface topology mapping (STM), and mean scattering spacing (MSS) estimation for evaluating trabecular bone structure. In vitro results have shown that (1) the inter- and intra-observer errors in QUS measurement were reduced two- to five-fold by iROI compared to previous results; (2) the accuracy of a QUS parameter, e.g., ultrasound velocity (UV) through bone, was improved 16% by STM; and (3) the averaged trabecular spacing can be estimated by the MSS technique (r²=0.72, p<0.01). The measurement errors of BUA and UV introduced by the soft tissue and cortical shells in vivo can be quantified by the developed foot model and a simplified cortical-trabecular-cortical sandwich model, which were verified by the experimental results. The mechanisms of the errors induced by the cortical and soft tissues were revealed by the model. With the developed new techniques and understanding of sound-tissue interaction, an in vivo clinical trial and a bed rest study were performed to evaluate the performance of QUS in
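    Mean scattering spacing of the kind mentioned above is commonly estimated from the cepstrum of a backscattered RF line: regularly spaced echoes produce a cepstral peak at the echo period, which converts to a distance via the speed of sound. The sketch below follows that generic approach; the sound speed and the quefrency search window are illustrative assumptions, not this dissertation's specific implementation.

```python
import numpy as np

# Cepstrum-based sketch of mean scatterer spacing (MSS) estimation.

def mean_scatterer_spacing(rf, fs, c=1540.0, min_us=1.0):
    """Estimate regular echo spacing in one RF line and convert to distance."""
    spectrum = np.abs(np.fft.rfft(rf)) + 1e-12      # avoid log(0)
    cepstrum = np.abs(np.fft.irfft(np.log(spectrum)))
    start = int(min_us * 1e-6 * fs)                 # skip the low-quefrency peak
    peak = start + np.argmax(cepstrum[start: len(cepstrum) // 2])
    tau = peak / fs                                 # echo period in seconds
    return c * tau / 2.0                            # two-way travel -> spacing (m)
```

For trabecular bone the recovered spacing is on the order of a millimetre, which is why it can serve as a structural (quality) index alongside purely quantitative parameters such as BUA and UV.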

  6. Using the Baldrige Criteria To Assess Quality in Libraries.

    ERIC Educational Resources Information Center

    Ashar, Hanna; Geiger, Sharon

    1998-01-01

    The Malcolm Baldrige National Quality Award (MBNQA) for quality programs has seven categories against which organizations are evaluated (leadership, information and analysis, strategic/operational quality planning, human resources, process management, quality/performance results, customer/student satisfaction). A preliminary quality-assessment…

  7. Quality of life assessment by community pharmacists: an exploratory study.

    PubMed

    Bentley, J P; Smith, M C; Banahan, B F; Frate, D A; Parks, B R

    1998-02-01

    Implicit in the evolving role of pharmacy is that its practitioners embrace the concept of quality of life (QoL). In recent years there has been an increased interest in incorporating health-related quality of life (HRQoL) measures into clinical practice, primarily focusing on the physician as the user of this information. Pharmacists may be able to use these instruments in their practices to provide better pharmaceutical care. To explore the feasibility of such an undertaking, questionnaires were mailed to a national sample of community pharmacies. In addition to the questionnaire, the respondents were provided with examples of two instruments: the Duke Health Profile and the QOLIE-10. A definition of HRQoL was provided to the respondents. After two mailings and a reminder postcard, a usable response rate of 27.2% was achieved. The results revealed that over 80% of the respondents currently discuss HRQoL issues with their patients. In addition, 66% reported that they attempt to assess the HRQoL of their patients, albeit usually on a subjective, informal basis. After viewing examples of HRQoL instruments, over three-quarters of the respondents reported a willingness to use HRQoL assessment tools in their practices. However, only 53.7% of the respondents were familiar with the concept of HRQoL. Less than 5% reported familiarity with formal instruments. The self-reported knowledge of pharmacists concerning HRQoL was low and the respondents recognized a significant gap between their current knowledge and the level of knowledge needed to assess the HRQoL of their patients formally. The results suggest a possible role for the pharmacist in HRQoL assessment. However, the use of HRQoL instruments in community pharmacies will require further training and education on the part of pharmacists concerning the concept of HRQoL, the issues involved in its measurement and how they can use HRQoL information in their practices. In addition, a number of unanswered questions must be

  8. Assessing Website Pharmacy Drug Quality: Safer Than You Think?

    PubMed Central

    Bate, Roger; Hess, Kimberly

    2010-01-01

    Background Internet-sourced drugs are often considered suspect. The World Health Organization reports that drugs from websites that conceal their physical address are counterfeit in over 50 percent of cases; the U.S. Food and Drug Administration (FDA) works with the National Association of Boards of Pharmacy (NABP) to regularly update a list of websites likely to sell drugs that are illegal or of questionable quality. Methods and Findings This study examines drug purchasing over the Internet, by comparing the sales of five popular drugs from a selection of websites stratified by NABP or other ratings. The drugs were assessed for price, conditions of purchase, and basic quality. Prices and conditions of purchase varied widely. Some websites advertised single pills while others only permitted the purchase of large quantities. Not all websites delivered the exact drugs ordered, some delivered no drugs at all; many websites shipped from multiple international locations, and from locations that were different from those advertised on the websites. All drug samples were tested against approved U.S. brand formulations using Raman spectrometry. Many (17) websites substituted drugs, often in different formulations from the brands requested. These drugs, some of which were probably generics or perhaps non-bioequivalent copy versions, could not be assessed accurately. Of those drugs that could be assessed, none failed from “approved”, “legally compliant” or “not recommended” websites (0 out of 86), whereas 8.6% (3 out of 35) failed from “highly not recommended” and unidentifiable websites. Conclusions Of those drugs that could be assessed, all except Viagra® passed spectrometry testing. Of those that failed, few could be identified either by a country of manufacture listed on the packaging, or by the physical location of the website pharmacy. If confirmed by future studies on other drug samples, then U.S. consumers should be able to reduce their risk by

  9. Quality assessment in nursing home facilities: measuring customer satisfaction.

    PubMed

    Mostyn, M M; Race, K E; Seibert, J H; Johnson, M

    2000-01-01

    A national study designed to assess the reliability and validity of a nursing home customer satisfaction survey is summarized. One hundred fifty-nine facilities participated, each responsible for the distribution and collection of 200 questionnaires randomly sent to the home of the resident's responsible party. A total of 9053 completed questionnaires were returned, for an average adjusted response rate of 53%. The factor analysis identified 4 scales: Comfort and Cleanliness, Nursing, Food Services, and Facility Care and Services, each with high reliability. Based on a multiple regression analysis, the scales were shown to have good criterion-related validity, accounting for 64% of the variance in overall quality ratings. Comparisons based on select characteristics indicated significantly different satisfaction ratings among facilities. The results are interpreted as providing evidence for the construct validity of a multidimensional customer satisfaction scale with measured reliability and criterion-related validity. Moreover, the scale can be used to differentiate satisfaction levels among facilities. PMID:10763218
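    Each scale's "high reliability" in studies like this is conventionally reported as Cronbach's alpha, the internal-consistency statistic for multi-item scales. A minimal sketch of the computation (illustrative data only):

```python
import numpy as np

# Cronbach's alpha for a multi-item satisfaction scale.

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores for one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)
```

Alpha approaches 1 when items move together across respondents (as with perfectly correlated items) and falls toward 0 when they vary independently; values above roughly 0.7-0.8 are usually read as adequate scale reliability.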

  10. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464
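    For the detection tasks above, the Hotelling observer's linear template is the class-mean difference prewhitened by the average covariance, and task performance is summarized by a detectability SNR. The sketch below implements that standard construction on sample images; the synthetic-data test is purely illustrative.

```python
import numpy as np

# Hotelling observer for a signal-absent vs signal-present detection task:
# template w = K^{-1} (g1 - g0), detectability SNR^2 = (g1 - g0)^T K^{-1} (g1 - g0).

def hotelling_observer(imgs_absent, imgs_present):
    """Each input: (n_images, n_pixels) samples under one hypothesis."""
    dg = imgs_present.mean(axis=0) - imgs_absent.mean(axis=0)
    K = 0.5 * (np.cov(imgs_absent, rowvar=False) +
               np.cov(imgs_present, rowvar=False))
    w = np.linalg.solve(K, dg)           # Hotelling template
    return w, float(np.sqrt(dg @ w))     # (template, detectability SNR)
```

Applying `w @ g` to a new image gives the scalar test statistic; in the adaptive-optics setting of the abstract, the covariance `K` would carry the three decomposed terms (measurement noise, random PSF, random scene).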

  11. Groundwater Quality Assessment for Waste Management Area U: First Determination

    SciTech Connect

    Hodges, Floyd N.; Chou, Charissa J.

    2000-08-04

    As a result of the most recent recalculation, one of the indicator parameters, specific conductance, exceeded its background value in downgradient well 299-W19-41, triggering a change from detection monitoring to a groundwater quality assessment program. The major contributors to the higher specific conductance are nonhazardous constituents (i.e., sodium, calcium, magnesium, chloride, sulfate, and bicarbonate). Nitrate, chromium, and technetium-99 are present and increasing; however, they are significantly below their drinking water standards. Interpretation of groundwater monitoring data indicates that both the nonhazardous constituents causing elevated specific conductance in groundwater and the tank waste constituents present in groundwater at the waste management area are a result of surface water infiltration in the southern portion of the facility. There is evidence for both upgradient and waste management area sources for the observed nitrate concentrations. There is no indication of an upgradient source for the observed chromium and technetium-99.

  12. Quality assessment in nursing home facilities: measuring customer satisfaction.

    PubMed

    Mostyn, M M; Race, K E; Seibert, J H; Johnson, M

    2000-01-01

    A national study designed to assess the reliability and validity of a nursing home customer satisfaction survey is summarized. One hundred fifty-nine facilities participated, each responsible for the distribution and collection of 200 questionnaires randomly sent to the home of the resident's responsible party. A total of 9053 completed questionnaires were returned, for an average adjusted response rate of 53%. The factor analysis identified 4 scales: Comfort and Cleanliness, Nursing, Food Services, and Facility Care and Services, each with high reliability. Based on a multiple regression analysis, the scales were shown to have good criterion-related validity, accounting for 64% of the variance in overall quality ratings. Comparisons based on select characteristics indicated significantly different satisfaction ratings among facilities. The results are interpreted as providing evidence for the construct validity of a multidimensional customer satisfaction scale with measured reliability and criterion-related validity. Moreover, the scale can be used to differentiate satisfaction levels among facilities.

  13. Objective assessment of image quality. IV. Application to adaptive optics.

    PubMed

    Barrett, Harrison H; Myers, Kyle J; Devaney, Nicholas; Dainty, Christopher

    2006-12-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed.

  14. Image analysis for dental bone quality assessment using CBCT imaging

    NASA Astrophysics Data System (ADS)

    Suprijanto; Epsilawati, L.; Hajarini, M. S.; Juliastuti, E.; Susanti, H.

    2016-03-01

    Cone beam computerized tomography (CBCT) is one of the X-ray imaging modalities applied in dentistry. This modality can visualize the oral region in 3D and at high resolution. CBCT jaw images carry potential information for the assessment of bone quality that is often used for pre-operative implant planning. We propose a comparison method based on the normalized histogram (NH) of the region of the inter-dental septum and premolar teeth. Furthermore, the NH characteristics of normal and abnormal bone conditions are compared and analyzed. Four test parameters are proposed, i.e., the difference between teeth and bone average intensity (s), the ratio between bone and teeth average intensity (n) of the NH, the difference between teeth and bone peak values (Δp) of the NH, and the ratio between teeth and bone NH ranges (r). The results showed that n, s, and Δp have the potential to serve as classification parameters of dental calcium density.
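    The four parameters named in the abstract can be sketched directly from gray-level samples of the two regions. The region segmentation itself is not described in the abstract, so the inputs below are assumed to be pre-segmented intensity arrays, and the interpretation of r as a ratio of intensity ranges is an assumption; this is a minimal sketch, not the paper's implementation.

```python
import numpy as np

def nh_parameters(teeth, bone, bins=256):
    """Sketch of the four test parameters (s, n, dp, r) from pre-segmented
    gray-level samples (assumed inputs) for teeth and bone regions."""
    teeth = np.asarray(teeth, dtype=float)
    bone = np.asarray(bone, dtype=float)
    # Normalized histograms (summing to 1) over a shared intensity range
    lo, hi = min(teeth.min(), bone.min()), max(teeth.max(), bone.max())
    ht, _ = np.histogram(teeth, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(bone, bins=bins, range=(lo, hi))
    ht = ht / ht.sum()
    hb = hb / hb.sum()
    s = teeth.mean() - bone.mean()    # difference of average intensities
    n = bone.mean() / teeth.mean()    # ratio of average intensities
    dp = ht.max() - hb.max()          # difference of NH peak values
    r = np.ptp(teeth) / np.ptp(bone)  # ratio of intensity ranges (assumed meaning)
    return s, n, dp, r
```

    In use, a higher s and lower n would separate dense teeth from surrounding bone, which is the direction of separation the abstract's classification parameters rely on.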

  16. Assessing The Policy Relevance of Regional Air Quality Models

    NASA Astrophysics Data System (ADS)

    Holloway, T.

    This work presents a framework for discussing the policy relevance of models, and regional air quality models in particular. We define four criteria: 1) the scientific status of the model; 2) its ability to address primary environmental concerns; 3) the position of modeled environmental issues on the political agenda; and 4) the role of scientific input into the policy process. This framework is applied to current work simulating the transport of nitric acid in Asia with the ATMOS-N model, to past studies on air pollution transport in Europe with the EMEP model, and to future applications of the United States Environmental Protection Agency (US EPA) Models-3. The Lagrangian EMEP model provided critical input to the development of the 1994 Oslo and 1999 Gothenburg Protocols to the Convention on Long-Range Transboundary Air Pollution, as well as to the development of EU directives, via its role as a component of the RAINS integrated assessment model. Our work simulating reactive nitrogen in Asia follows the European example in part, with the choice of ATMOS-N, a regional Lagrangian model, to calculate source-receptor relationships for the RAINS-Asia integrated assessment model. However, given differences between ATMOS-N and the EMEP model, as well as differences between the scientific and political climates facing Europe ten years ago and Asia today, the role of these two models in the policy process is very different. We characterize the different aspects of policy relevance between these models using our framework, and consider how the current generation US EPA air quality model compares, in light of its Eulerian structure, different objectives, and the policy context of the US.

  17. Drinking water quality assessment in Southern Sindh (Pakistan).

    PubMed

    Memon, Mehrunisa; Soomro, Mohammed Saleh; Akhtar, Mohammad Saleem; Memon, Kazi Suleman

    2011-06-01

    The southern Sindh province of Pakistan adjoins the Arabian Sea coast, where drinking water quality is deteriorating due to dumping of industrial and urban waste and the use of agrochemicals, and yet fresh water resources are limited. The study assessed the drinking water quality of canals, shallow pumps, dug wells, and water supply schemes in the administrative districts of Thatta, Badin, and Thar by measuring physical, chemical, and biological (total coliform) quality parameters. All four water bodies (dug wells, shallow pumps, canal water, and water supply schemes) exceeded the WHO maximum permissible limits for turbidity (24%, 28%, 96%, 69%), coliform (96%, 77%, 92%, 81%), and electrical conductivity (100%, 99%, 44%, 63%), respectively. However, turbidity was lower in underground water, i.e., 24% and 28% in dug wells and shallow pumps, than in open water, i.e., 96% and 69% in canals and water supply schemes, respectively. In dug wells and shallow pumps, limits for TDS, alkalinity, hardness, and sodium were exceeded by 63% and 33%, 59% and 70%, 40% and 27%, and 78% and 26% of samples, respectively. Sodium was a major problem in dug wells and shallow pumps of district Thar and in a considerable percentage of shallow pumps of Badin. Iron was a major problem in all water bodies of district Badin, ranging from 50% to 69%, and to some extent in open waters of Thatta. Other parameters such as pH, copper, manganese, zinc, and phosphorus were within the standard permissible limits of the World Health Organization. Some common diseases found in the study area were gastroenteritis, diarrhea and vomiting, and kidney and skin problems.

  18. Assessing groundwater quality for irrigation using indicator kriging method

    NASA Astrophysics Data System (ADS)

    Delbari, Masoomeh; Amiri, Meysam; Motlagh, Masoud Bahraini

    2014-09-01

    One of the key parameters influencing sprinkler irrigation performance is water quality. In this study, the spatial variability of groundwater quality parameters (EC, SAR, Na+, Cl-, HCO3- and pH) was investigated by geostatistical methods, and the most suitable areas for implementation of sprinkler irrigation systems in terms of water quality were determined. The study was performed in the Fasa county of Fars province using 91 water samples. Results indicated that all parameters are moderately to strongly spatially correlated over the study area. The spatial distribution of pH and HCO3- was mapped using ordinary kriging. The probability of concentrations of EC, SAR, Na+ and Cl- exceeding a threshold limit in groundwater was obtained using indicator kriging (IK). The experimental indicator semivariograms were often fitted well by a spherical model for SAR, EC, Na+ and Cl-. For HCO3- and pH, an exponential model was fitted to the experimental semivariograms. Probability maps showed that the risk of EC, SAR, Na+ and Cl- exceeding the given critical threshold is higher in the lower half of the study area. The most suitable agricultural lands for sprinkler irrigation implementation were identified by evaluating all probability maps. The suitable area for sprinkler irrigation design was determined to be 25,240 hectares, about 34 percent of the total agricultural land, located in the northern and eastern parts. Overall, the results of this study showed that IK is an appropriate approach for risk assessment of groundwater pollution, which is useful for proper groundwater resources management.
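    The indicator transform at the core of IK can be illustrated without building a full kriging system: each sample becomes a 0/1 indicator of exceeding the threshold, and a locally weighted average of indicators then estimates the exceedance probability at an unsampled location. The inverse-distance weights below stand in for kriging weights, and the threshold and coordinates are illustrative assumptions, not values from the study.

```python
import numpy as np

def indicator_transform(values, threshold):
    """1.0 where a sample exceeds the threshold, else 0.0."""
    return (np.asarray(values, dtype=float) > threshold).astype(float)

def exceedance_probability(coords, values, threshold, x0, power=2.0):
    """Estimate P(value > threshold) at location x0 by inverse-distance
    weighting of the indicator data (a stand-in for kriging weights)."""
    ind = indicator_transform(values, threshold)
    d = np.linalg.norm(np.asarray(coords, float) - np.asarray(x0, float), axis=1)
    if np.any(d == 0):
        return float(ind[np.argmin(d)])  # a sample sits exactly at x0
    w = 1.0 / d**power
    return float(np.sum(w * ind) / np.sum(w))
```

    Mapping this estimate over a grid of locations yields the kind of probability map used in the study to delineate suitable irrigation areas.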

  19. Inline roasting hyphenated with gas chromatography-mass spectrometry as an innovative approach for assessment of cocoa fermentation quality and aroma formation potential.

    PubMed

    Van Durme, Jim; Ingels, Isabel; De Winne, Ann

    2016-08-15

    Today, the cocoa industry is in great need of faster and more robust analytical techniques to objectively assess incoming cocoa quality. In this work, inline roasting hyphenated with a cooled injection system coupled to a gas chromatograph-mass spectrometer (ILR-CIS-GC-MS) has been explored for the first time to assess fermentation quality and/or the overall aroma formation potential of cocoa. This innovative approach resulted in the in-situ formation of relevant cocoa aroma compounds. After comparison with data obtained by headspace solid phase microextraction (HS-SPME-GC-MS) on conventionally roasted cocoa beans, ILR-CIS-GC-MS data on unroasted cocoa beans showed similar formation trends of important cocoa aroma markers as a function of fermentation quality. The latter approach only requires small aliquots of unroasted cocoa beans, can be automated, requires no sample preparation, needs relatively short analytical times (<1 h), and is highly reproducible. PMID:27006215

  20. An assessment of bird habitat quality using population growth rates

    USGS Publications Warehouse

    Knutson, M.G.; Powell, L.A.; Hines, R.K.; Friberg, M.A.; Niemi, G.J.

    2006-01-01

    Survival and reproduction directly affect population growth rate (lambda), making lambda a fundamental parameter for assessing habitat quality. We used field data, literature review, and a computer simulation to predict annual productivity and lambda for several species of landbirds breeding in floodplain and upland forests in the Midwestern United States. We monitored 1735 nests of 27 species; 760 nests were in the uplands and 975 were in the floodplain. Each type of forest habitat (upland and floodplain) was a source habitat for some species. Despite a relatively low proportion of regional forest cover, the majority of species had stable or increasing populations in all or some habitats, including six species of conservation concern. In our search for a simple analog for lambda, we found that only adult apparent survival, juvenile survival, and annual productivity were correlated with lambda; daily nest survival and relative abundance estimated from point counts were not. Survival and annual productivity are among the most costly demographic parameters to measure, and there does not seem to be a low-cost alternative. In addition, our literature search revealed that the demographic parameters needed to model annual productivity and lambda were unavailable for several species. More collective effort across North America is needed to fill the gaps in our knowledge of demographic parameters necessary to model both annual productivity and lambda. Managers can use habitat-specific predictions of annual productivity to compare habitat quality among species and habitats for purposes of evaluating management plans.
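    The dependence of lambda on survival and productivity noted above can be made concrete with a common two-stage, female-based model, lambda = Sa + Sj * F. This textbook formulation is an assumption for illustration; the paper's simulation model is not specified in the abstract and may differ.

```python
def population_growth_rate(adult_survival, juvenile_survival, annual_productivity):
    """lambda = Sa + Sj * F for a simple female-based two-stage model:
    Sa = adult apparent survival, Sj = first-year (juvenile) survival,
    F = annual productivity (female young per female).
    lambda >= 1 indicates a stable or growing (source) population."""
    return adult_survival + juvenile_survival * annual_productivity
```

    Under this model, a habitat with Sa = 0.6, Sj = 0.3 and F = 2.0 gives lambda = 1.2, i.e., a source habitat in the sense used in the abstract.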

  1. Fuzzy-GA modeling in air quality assessment.

    PubMed

    Yadav, Jyoti; Kharat, Vilas; Deshpande, Ashok

    2015-04-01

    In this paper, the authors have suggested and implemented the defined soft computing methods in air quality classification with case studies. The first study relates to the application of the Fuzzy C-means (FCM) clustering method in estimating pollution status in cities of Maharashtra State, India. In this study, the computation of the weighting factor using a new concept of a reference group is successfully demonstrated. The authors have also investigated the efficacy of a fuzzy set theoretic approach in combination with a genetic algorithm in directly describing air quality in linguistic terms, with a linguistic degree of certainty attached to each description, using the Zadeh-Deshpande (ZD) approach. Two metropolitan cities, viz. Mumbai in India and New York in the USA, are identified for the assessment of pollution status due to their somewhat similar geographical features. The case studies infer that the fuzzy sets drawn on the basis of an expert knowledge base for the criteria pollutants are not much different from those obtained using the genetic algorithm. Pollution forecasting using various methods, including fuzzy time series, forms an integral part of the paper.

  2. Integrated assessment of brick kiln emission impacts on air quality.

    PubMed

    Le, Hoang Anh; Oanh, Nguyen Thi Kim

    2010-12-01

    This paper presents monitoring results of daily brick kiln stack emissions and the derived emission factors. Emission of individual air pollutants varied significantly during a firing batch (7 days) and between kilns. Average emission factors per 1,000 bricks were 6.35-12.3 kg of CO, 0.52-5.9 kg of SO2, and 0.64-1.4 kg of particulate matter (PM). The PM emission size distribution in the stack plume was determined using a modified cascade impactor. The obtained emission factors and PM size distribution data were used in a simulation study using the Industrial Source Complex Short-Term (ISCST3) dispersion model. The model performance was successfully evaluated for the local conditions using the simultaneous ambient monitoring data in 2006 and 2007. SO2 was the most critical pollutant, exceeding the hourly National Ambient Air Quality Standards over 63 km2 of the 100-km2 modelled domain in the base case. Impacts of different emission scenarios on the ambient air quality (SO2, PM, CO, PM dry deposition flux) were assessed.

  3. Water quality assessment of Ganga river in Bihar Region, India.

    PubMed

    Tiwary, R K; Rajak, G P; Abhishek; Mondal, M R

    2005-10-01

    A study was carried out on the Ganga river in the Bihar region, in and around Patna, to assess the impact of sewage pollution on the water quality of the river. Drain water samples from the confluence points of outfall drains with the river were collected and analyzed for key parameters. Parameters such as BOD, COD, TDS, TSS, and total and faecal coliform (MPN) were found to be high in drain water. The physicochemical analysis of the Ganga river shows that the water has high TDS, TSS, BOD, and COD. Coliform bacteria were found to be alarmingly high in the river. Most of the parameters analyzed were higher near the bank than in the midstream water at each station. XRF analysis of sediments of the Ganga river showed that Si, Fe, Ca, Al and K are the major elements of the Ganga sediment. The study revealed that due to the discharge of untreated sewage into the Ganga, the water quality has severely deteriorated and the potable nature of the water is being lost.

  4. Quality assessment of digested sludges produced by advanced stabilization processes.

    PubMed

    Braguglia, C M; Coors, A; Gallipoli, A; Gianico, A; Guillon, E; Kunkel, U; Mascolo, G; Richter, E; Ternes, T A; Tomei, M C; Mininni, G

    2015-05-01

    The European Union (EU) project Routes aimed to discover new routes in sludge stabilization treatments leading to high-quality digested sludge suitable for land application. In order to investigate the impact on digested sludge quality of different enhanced sludge stabilization processes, namely (a) thermophilic digestion integrated with thermal hydrolysis pretreatment (TT), (b) sonication before mesophilic/thermophilic digestion (UMT), and (c) sequential anaerobic/aerobic digestion (AA), a broad class of conventional and emerging organic micropollutants as well as ecotoxicity was analyzed, extending the assessment beyond the parameters typically considered (i.e., stability index and heavy metals). The stability index was improved by adding aerobic posttreatment or by operating a dual-stage process, but not by pretreatment integration. Filterability was worsened by thermophilic digestion, either alone (TT) or coupled with mesophilic digestion (UMT). The concentrations of heavy metals, present in the ranking order Zn > Cu > Pb > Cr ~ Ni > Cd > Hg, were always below the current legal requirements for use on land and were not removed during the processes. Removal of conventional and emerging organic pollutants was greatly enhanced by performing double-stage digestion (UMT and AA) compared to a single-stage process such as TT; the same trend was found for toxicity reduction. Overall, all the digested sludges exhibited toxicity to the soil bacterium Arthrobacter globiformis at concentrations about a factor of 100 higher than the usual application rate of sludge to soil in Europe. For earthworms, a safety margin of about a factor of 30 was generally achieved for all the digested samples. PMID:24903249

  5. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool, widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak, and there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the quality of the solutions an algorithm obtains for practical problems; this greatly limits application in practice. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on an analysis of the search space and of the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions are clustered according to distance to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Finally, using standard statistical methods, the evaluation result can be obtained. To validate the proposed method, intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were used to solve the traveling salesman problem. Computational results indicate the feasibility of the proposed method.
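    The "ordinal performance" idea can be sketched in a few lines: rather than judging the raw objective value, one estimates the rank of the returned solution within a sample of the solution space and checks membership in a "good enough" top fraction. Everything here (the sampling, the 5% cutoff, the function name) is an illustrative assumption, not the paper's clustering-based procedure.

```python
def ordinal_performance(solution_value, sample_values, good_fraction=0.05):
    """Rank-based ('ordinal') evaluation for a minimization problem.
    Returns the fraction of sampled solutions strictly better than the
    result, and whether the result falls within the 'good enough' set
    (the best good_fraction of the sampled solution space)."""
    better = sum(1 for v in sample_values if v < solution_value)
    rank_fraction = better / len(sample_values)
    return rank_fraction, rank_fraction <= good_fraction
```

    The appeal of the ordinal view is that the verdict ("top 5% or not") is comparable across problems with very different objective-value scales.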

  7. Web Page Content and Quality Assessed for Shoulder Replacement.

    PubMed

    Matthews, John R; Harrison, Caitlyn M; Hughes, Travis M; Dezfuli, Bobby; Sheppard, Joseph

    2016-01-01

    The Internet has become a major source of health-related information. This study assesses and compares the quality of information available online for shoulder replacement using medical (total shoulder arthroplasty [TSA]) and nontechnical (shoulder replacement [SR]) terminology. Three evaluators reviewed 90 websites for each search term across 3 search engines (Google, Yahoo, and Bing). Websites were grouped into categories, identified as commercial or noncommercial, and evaluated with the DISCERN questionnaire. The TSA search yielded 53 unique sites, compared to 38 for SR. Of the 53 TSA websites, 30% were oriented toward health professionals, versus 18% of SR websites. SR websites provided more patient-oriented information, at 48% versus 45% for TSA websites. In total, the SR search returned 47% (42/90) noncommercial websites, with the highest number seen on Yahoo, compared with 37% (33/90) for TSA, with Google providing 13 of those 33 websites (39%). Using the nonmedical terminology with Yahoo's search engine returned the most noncommercial and patient-oriented websites. However, the quality of information found online was highly variable, with most websites being unreliable and incomplete, regardless of search term.

  8. Hydrogeochemistry and Water Quality Index in the Assessment of Groundwater Quality for Drinking Uses.

    PubMed

    Batabyal, Asit Kumar; Chakraborty, Surajit

    2015-07-01

    The present investigation aims at understanding the hydrogeochemical parameters and developing a water quality index (WQI) to assess the groundwater quality of a rural tract in the northwest of Bardhaman district, West Bengal, India. Groundwater occurs at shallow depths, with the maximum flow moving southeast during the pre-monsoon season and south in the post-monsoon period. The physicochemical analysis of groundwater samples shows the major ions in the order HCO3>Ca>Na>Mg>Cl>SO4 and HCO3>Ca>Mg>Na>Cl>SO4 in the pre- and post-monsoon periods, respectively. The groundwater quality is safe for drinking, barring elevated iron content in certain areas. Based on WQI values, groundwater falls into one of three categories: excellent water, good water, and poor water. The high WQI values are due to elevated concentrations of iron and chloride. The majority of the area is occupied by good water in the pre-monsoon period and poor water in the post-monsoon period.
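    A weighted-arithmetic WQI of the kind developed in such studies can be sketched as below. The sub-index and weighting scheme (q_i proportional to concentration over standard, w_i inversely proportional to the standard) is one common formulation; the actual standards, weights, and category cutoffs used for the Bardhaman data are not given in the abstract and are assumptions here.

```python
def water_quality_index(concentrations, standards):
    """Weighted-arithmetic WQI sketch:
    q_i = 100 * C_i / S_i   (sub-index for parameter i)
    w_i = 1 / S_i           (weight inversely proportional to the standard)
    WQI = sum(w_i * q_i) / sum(w_i)
    WQI = 100 when every parameter sits exactly at its standard;
    higher values indicate poorer water."""
    weights = [1.0 / s for s in standards]
    subindices = [100.0 * c / s for c, s in zip(concentrations, standards)]
    return sum(w * q for w, q in zip(weights, subindices)) / sum(weights)
```

    The inverse-standard weighting means a parameter with a strict limit (e.g., iron) dominates the index when it is exceeded, which matches the abstract's observation that iron and chloride drive the high WQI values.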

  9. Frameworks for Assessing the Quality of Modeling and Simulation Capabilities

    NASA Astrophysics Data System (ADS)

    Rider, W. J.

    2012-12-01

    The importance of assuring quality in modeling and simulation has spawned several frameworks for structuring the examination of quality. The format and content of these frameworks provide an emphasis, completeness, and flow to assessment activities. I will examine four frameworks that have been developed and describe how they can be improved and applied to a broader set of high-consequence applications. Perhaps the first of these frameworks was CSAU (code scaling, applicability and uncertainty) [Boyack], used for nuclear reactor safety and endorsed by the United States Nuclear Regulatory Commission (USNRC). This framework was shaped by nuclear safety practice and the practical structure needed after the Three Mile Island accident. It incorporated the dominant experimental program, the dominant analysis approach, and concerns about the quality of modeling. The USNRC gave it the force of law, which made the nuclear industry take it seriously. After the cessation of nuclear weapons testing, the United States began a program of examining the reliability of these weapons without testing. This program utilizes science, including theory, modeling, simulation, and experimentation, to replace underground testing. The emphasis on modeling and simulation necessitated attention to the quality of these simulations. Sandia developed the PCMM (predictive capability maturity model) to structure this attention [Oberkampf]. PCMM divides simulation into six core activities to be examined and graded relative to the needs of the modeling activity. NASA [NASA] has built yet another framework in response to the tragedy of the space shuttle accidents. Finally, Ben-Haim and Hemez focus upon modeling robustness and predictive fidelity in another approach. These frameworks are similar, and applied in a similar fashion. The adoption of these frameworks at Sandia and NASA has been slow and arduous because the force of law has not assisted acceptance. All existing frameworks are

  10. Automated Video Quality Assessment for Deep-Sea Video

    NASA Astrophysics Data System (ADS)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot otherwise be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): the rate of absorption of light in water varies by wavelength and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating

  11. The Quality of Local District Assessments Used in Nebraska's School-Based Teacher-Led Assessment and Reporting System (STARS)

    ERIC Educational Resources Information Center

    Brookhart, Susan M.

    2005-01-01

    A sample of 293 local district assessments used in the Nebraska STARS (School-based Teacher-led Assessment and Reporting System), 147 from 2004 district mathematics assessment portfolios and 146 from 2003 reading assessment portfolios, was scored with a rubric evaluating their quality. Scorers were Nebraska educators with background and training…

  12. Indigenous Community Tree Inventory: Assessment of Data Quality

    NASA Astrophysics Data System (ADS)

    Fauzi, M. F.; Idris, N. H.; Din, A. H. M.; Osman, M. J.; Idris, N. H.; Ishak, M. H. I.

    2016-09-01

    The citizen science program to supplement authoritative data in tree inventories has been well implemented in various countries. However, there is a lack of studies assessing the correctness and accuracy of tree data supplied by citizens. This paper addresses the quality of tree data supplied by a semi-literate indigenous group. The aim of this paper is to assess the correctness of attributes (tree species name, height, and diameter at breast height) and the accuracy of tree horizontal positioning data supplied by indigenous people. The tree horizontal positions recorded by a GNSS-enabled smartphone were found to have an RMSE of ±8 m, which is not accurate enough to locate individual tree positions in tropical rainforest such as the Royal Belum State Park. The tree species names contributed by indigenous people were only 20 to 30 percent correct compared with the reference data. However, combining indigenous respondents of different ages, experience, and knowledge to work in a group led to fewer attribute errors in data entry and increased the use of free-text rather than audio methods. The indigenous community has great potential to engage with scientific studies owing to their local knowledge of the research area; however, intensive training must be given to develop their skills, and several challenges need to be addressed.

  13. Conventional culture for water quality assessment: is there a future?

    PubMed

    Sartory, D P; Watkins, J

    1998-12-01

    Conventional culture for the detection, enumeration and identification of micro-organisms has been the traditional tool of the microbiologist. It is, however, time-consuming and labour-intensive, and confirmed results often require several days of analysis. Culture may not grow the organisms being sought and, for enumeration, may only detect a small proportion of the total population. However, it does have the advantage of being simple to use and relatively inexpensive. It is also a direct means of assessing cell viability. Novel fluorogenic dyes and fluorogenic and chromogenic substrates have overcome some of these problems by providing a means of rapid and specific detection and enumeration while removing the need for subculture and confirmation tests. Immunological tests such as ELISA have significantly reduced analysis time by providing specific target organism detection. Molecular techniques have removed the need for culture. Improvements in sensitivity, and removal of the inhibitory nature of sample matrices, have allowed analysts to detect low levels of micro-organisms, but questions of viability and comparability with cultural techniques remain. Are we about to see a change of culture in water quality assessment, or can cultural techniques be developed that reduce analysis time to a few hours, and can rapid methods be used for detecting the presence and viability of organisms?

  14. 78 FR 42928 - Draft Environmental Assessment for the Cotton Quality Research Station Land Transfer

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-18

    ...; ] DEPARTMENT OF AGRICULTURE Agricultural Research Service Draft Environmental Assessment for the Cotton Quality... Environmental Assessment for the Cotton Quality Research Station Land Transfer. SUMMARY: In accordance with the... facilities at the Cotton Quality Research Station (CQRS) from the USDA Agricultural Research Service (ARS)...

  15. 42 CFR 460.134 - Minimum requirements for quality assessment and performance improvement program.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Quality of life of participants. (4) Effectiveness and safety of staff-provided and contracted services... ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance... 42 Public Health 4 2010-10-01 2010-10-01 false Minimum requirements for quality assessment...

  16. 42 CFR 460.134 - Minimum requirements for quality assessment and performance improvement program.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Quality of life of participants. (4) Effectiveness and safety of staff-provided and contracted services... ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance... 42 Public Health 4 2012-10-01 2012-10-01 false Minimum requirements for quality assessment...

  17. 42 CFR 460.134 - Minimum requirements for quality assessment and performance improvement program.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) Quality of life of participants. (4) Effectiveness and safety of staff-provided and contracted services... ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance... 42 Public Health 4 2014-10-01 2014-10-01 false Minimum requirements for quality assessment...

  18. 42 CFR 460.134 - Minimum requirements for quality assessment and performance improvement program.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) Quality of life of participants. (4) Effectiveness and safety of staff-provided and contracted services... ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance... 42 Public Health 4 2011-10-01 2011-10-01 false Minimum requirements for quality assessment...

  19. 42 CFR 460.134 - Minimum requirements for quality assessment and performance improvement program.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) Quality of life of participants. (4) Effectiveness and safety of staff-provided and contracted services... ELDERLY (PACE) PROGRAMS OF ALL-INCLUSIVE CARE FOR THE ELDERLY (PACE) Quality Assessment and Performance... 42 Public Health 4 2013-10-01 2013-10-01 false Minimum requirements for quality assessment...

  20. A Review of Data Quality Assessment Methods for Public Health Information Systems

    PubMed Central

    Chen, Hong; Hailey, David; Wang, Ning; Yu, Ping

    2014-01-01

    High-quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and the data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. Relevant studies were identified in major databases and well-known institutional websites. We found that the data dimension was most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interviews and documentation review. The limitations of the reviewed studies included inattentiveness to data use and the data collection process, inconsistency in the definition of attributes of data quality, failure to address data users’ concerns, and a lack of systematic procedures in data quality assessment. This review is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research effort should be devoted to assessing the quality of data use and of the data collection process. PMID:24830450
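
    Two of the most-used attributes noted above, completeness and timeliness, can be operationalised as simple record-level checks; a minimal sketch (the field names, records, and 7-day timeliness threshold are all hypothetical illustrations, not from the reviewed studies):

    ```python
    from datetime import date

    # Illustrative records from a hypothetical public health register.
    records = [
        {"id": 1, "dob": date(1990, 1, 1), "event": date(2024, 1, 1), "reported": date(2024, 1, 3)},
        {"id": 2, "dob": None,             "event": date(2024, 1, 1), "reported": date(2024, 1, 20)},
    ]

    # Completeness: share of records with all required fields present.
    required = ("dob", "event", "reported")
    complete = sum(all(r[f] is not None for f in required) for r in records) / len(records)

    # Timeliness: share of records reported within 7 days of the event.
    timely = sum((r["reported"] - r["event"]).days <= 7
                 for r in records if r["reported"] and r["event"]) / len(records)

    print(complete, timely)  # 0.5 0.5
    ```

    Accuracy, the third attribute, normally requires comparison against a gold-standard source (a data audit in the review's terms), so it is omitted from this record-level sketch.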