Sample records for automatic quality checks

  1. PACS quality control and automatic problem notifier

    NASA Astrophysics Data System (ADS)

    Honeyman-Buck, Janice C.; Jones, Douglas; Frost, Meryll M.; Staab, Edward V.

    1997-05-01

    One side effect of installing a clinical PACS is that users become dependent upon the technology, and in some cases it can be very difficult to revert back to a film-based system if components fail. The nature of system failures ranges from slow deterioration of function, as seen in the loss of monitor luminance, to sudden catastrophic loss of the entire PACS network. This paper describes the quality control procedures in place at the University of Florida and the automatic notification system that alerts PACS personnel when a failure has happened or is anticipated. The goal is to recover from a failure with a minimum of downtime and no data loss. Routine quality control is practiced on all aspects of PACS, from acquisition, through network routing, through display, and including archiving. Whenever possible, the system components perform self-checks and cross-platform checks for active processes, file system status, errors in log files, and system uptime. When an error is detected or an exception occurs, an automatic page is sent to a pager with a diagnostic code. Documentation on each code, troubleshooting procedures, and repairs is kept on an intranet server accessible only to people involved in maintaining the PACS. In addition to the automatic paging system for error conditions, acquisition is assured by an automatic fax report sent on a daily basis to all technologists acquiring PACS images, to be used as a cross check that all studies are archived prior to being removed from the acquisition systems. Daily quality control is performed to assure that studies can be moved from each acquisition system and that contrast adjustment is functioning. The results of selected quality control reports will be presented. The intranet documentation server will be described along with the automatic pager system. Monitor quality control reports will be described and the cost of quality control will be quantified. As PACS is accepted as a clinical tool, the same standards of quality control must be established as are expected of other equipment used in the diagnostic process.
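    As an illustration of the kind of self-check and automatic paging described above, the short Python sketch below verifies that an archive process is alive and that the image filesystem has free space, and issues a page with a diagnostic code when a check fails. The process name, mount point, threshold, diagnostic codes and the send_page() gateway are hypothetical stand-ins for illustration, not the University of Florida implementation.

      import shutil
      import subprocess

      def process_running(name):
          """Return True if a process matching `name` is active (relies on pgrep)."""
          return subprocess.run(["pgrep", "-f", name], capture_output=True).returncode == 0

      def disk_usage_ok(path, limit):
          """Return True if the filesystem holding `path` is below the usage limit."""
          usage = shutil.disk_usage(path)
          return usage.used / usage.total < limit

      def send_page(code, message):
          """Stand-in for the paging gateway; a real system would dial a modem or call an SMS/e-mail API."""
          print(f"PAGE {code}: {message}")

      # Hypothetical diagnostic codes mapped to the checks that trigger them.
      CHECKS = {
          "E01": ("archive daemon not running", lambda: process_running("dcm_archive")),
          "E02": ("image filesystem above 90% full", lambda: disk_usage_ok("/pacs/images", 0.90)),
      }

      def run_checks():
          for code, (message, check) in CHECKS.items():
              if not check():
                  send_page(code, message)

      if __name__ == "__main__":
          run_checks()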

  2. Model Checking Satellite Operational Procedures

    NASA Astrophysics Data System (ADS)

    Cavaliere, Federico; Mari, Federico; Melatti, Igor; Minei, Giovanni; Salvo, Ivano; Tronci, Enrico; Verzino, Giovanni; Yushtein, Yuri

    2011-08-01

    We present a model checking approach for the automatic verification of satellite operational procedures (OPs). Building a model for a complex system such as a satellite is a hard task. We overcome this obstacle by using a suitable simulator (SIMSAT) for the satellite. Our approach aims at improving OP quality assurance by automatic exhaustive exploration of all possible simulation scenarios. Moreover, our solution decreases OP verification costs by using a model checker (CMurphi) to automatically drive the simulator. We model OPs as user-executed programs observing the simulator telemetries and sending telecommands to the simulator. In order to assess the feasibility of our approach, we present experimental results on a simple but meaningful scenario. Our results show that we can save up to 90% of verification time.
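    The core idea, exhaustive exploration of all telecommand sequences against a simulator, can be sketched as follows. This is a toy Python illustration, not SIMSAT or CMurphi: the Simulator class, the telecommands and the battery invariant are invented for the example, and a real model checker would additionally hash visited states to prune the search.

      from itertools import product

      class Simulator:
          """Toy satellite simulator: tracks a battery level under two telecommands."""
          def __init__(self, battery=3):
              self.battery = battery
          def step(self, telecommand):
              if telecommand == "PAYLOAD_ON":
                  self.battery -= 1
              elif telecommand == "CHARGE":
                  self.battery += 1
              return {"battery": self.battery}

      def invariant(telemetry):
          return telemetry["battery"] >= 0   # the battery must never be depleted

      def explore(max_depth=4, telecommands=("PAYLOAD_ON", "CHARGE")):
          """Exhaustively replay every telecommand sequence up to max_depth."""
          for depth in range(1, max_depth + 1):
              for sequence in product(telecommands, repeat=depth):
                  sim = Simulator()
                  for tc in sequence:
                      if not invariant(sim.step(tc)):
                          return sequence          # counterexample trace
          return None                              # no violation found within the bound

      print(explore())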

  3. A quality score for coronary artery tree extraction results

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke

    2018-02-01

    Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions, which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of the clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method is able to measure the quality of the manually refined CATs with higher scores than the automatically extracted CATs. On a 100-point scale, the average scores for the automatically extracted and manually refined CATs are 82.0 (+/-15.8) and 88.9 (+/-5.4), respectively. The proposed quality score will assist the automatic processing of CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.
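    A much simplified sketch of such a score is shown below: each segment of the anatomical model carries a clinical-significance weight, and the score is the weighted fraction of expected segments found in the extracted tree, on a 100-point scale. The segment names and weights are invented for illustration and do not reproduce the statistical models of the paper.

      # Hypothetical clinical-significance weights for a right-dominant (RD) model.
      RD_MODEL_WEIGHTS = {"LM": 10, "LAD-prox": 9, "LAD-mid": 7, "LAD-dist": 4,
                          "LCX-prox": 8, "LCX-dist": 4, "RCA-prox": 9,
                          "RCA-mid": 7, "RCA-dist": 5, "PDA": 6}

      def cat_quality_score(extracted_segments, model=RD_MODEL_WEIGHTS):
          """Weighted completeness of the extracted tree on a 100-point scale."""
          present = sum(w for seg, w in model.items() if seg in extracted_segments)
          return 100.0 * present / sum(model.values())

      auto = {"LM", "LAD-prox", "LAD-mid", "LCX-prox", "RCA-prox", "RCA-mid"}
      refined = auto | {"LAD-dist", "LCX-dist", "RCA-dist", "PDA"}
      print(cat_quality_score(auto), cat_quality_score(refined))   # refined tree scores higher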

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Covington, E; Younge, K; Chen, X

    Purpose: To evaluate the effectiveness of an automated plan check tool to improve first-time plan quality as well as standardize and document performance of physics plan checks. Methods: The Plan Checker Tool (PCT) uses the Eclipse Scripting API to check and compare data from the treatment planning system (TPS) and treatment management system (TMS). PCT was created to improve first-time plan quality, reduce patient delays, increase efficiency of our electronic workflow, and to standardize and partially automate plan checks in the TPS. A framework was developed which can be configured with different reference values and types of checks. One example is the prescribed dose check where PCT flags the user when the planned dose and the prescribed dose disagree. PCT includes a comprehensive checklist of automated and manual checks that are documented when performed by the user. A PDF report is created and automatically uploaded into the TMS. Prior to and during PCT development, errors caught during plan checks and also patient delays were tracked in order to prioritize which checks should be automated. The most common and significant errors were determined. Results: Nineteen of 33 checklist items were automated with data extracted with the PCT. These include checks for prescription, reference point and machine scheduling errors which are three of the top six causes of patient delays related to physics and dosimetry. Since the clinical roll-out, no delays have been due to errors that are automatically flagged by the PCT. Development continues to automate the remaining checks. Conclusion: With PCT, 57% of the physics plan checklist has been partially or fully automated. Treatment delays have declined since release of the PCT for clinical use. By tracking delays and errors, we have been able to measure the effectiveness of automating checks and are using this information to prioritize future development. This project was supported in part by P01CA059827.
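    The prescribed dose check mentioned above can be illustrated with a small sketch. The actual PCT is built on the Eclipse Scripting API; the Python fragment below only mimics the logic, and the data class, field names and tolerance are assumptions.

      from dataclasses import dataclass

      @dataclass
      class PlanData:
          dose_per_fraction_cgy: float
          fractions: int
          @property
          def total_dose_cgy(self) -> float:
              return self.dose_per_fraction_cgy * self.fractions

      def check_prescribed_dose(tps: PlanData, tms: PlanData, tolerance_cgy=0.5):
          """Return a list of flags describing any prescription mismatch between TPS and TMS."""
          flags = []
          if tps.fractions != tms.fractions:
              flags.append(f"fraction count differs: TPS {tps.fractions} vs TMS {tms.fractions}")
          if abs(tps.total_dose_cgy - tms.total_dose_cgy) > tolerance_cgy:
              flags.append(f"total dose differs: TPS {tps.total_dose_cgy} cGy vs TMS {tms.total_dose_cgy} cGy")
          return flags

      print(check_prescribed_dose(PlanData(200, 30), PlanData(200, 28)))   # flags the disagreement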

  5. Continuous integration and quality control for scientific software

    NASA Astrophysics Data System (ADS)

    Neidhardt, A.; Ettl, M.; Brisken, W.; Dassing, R.

    2013-08-01

    Modern software has to be stable, portable, fast and reliable. This is becoming more and more important for scientific software as well. But it requires a sophisticated way to inspect, check and evaluate the quality of source code with a suitable, automated infrastructure. A centralized server with a software repository and a version control system is one essential part, to manage the code base and to control the different development versions. While each project can be compiled separately, the whole code base can also be compiled with one central “Makefile”. This is used to create automated, nightly builds. Additionally, all sources are inspected automatically with static code analysis and inspection tools, which check for well-known error situations, memory and resource leaks, performance issues, or style issues. In combination with an automatic documentation generator it is possible to create the developer documentation directly from the code and the inline comments. All reports and generated information are presented as HTML pages on a web server. Because this environment increased the stability and quality of the software of the Geodetic Observatory Wettzell tremendously, it is now also available to scientific communities. One regular customer is already the developer group of the DiFX software correlator project.
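    A hedged sketch of such a nightly-build driver is given below: build every project from one place, run a static analyzer over the sources, and collect the output for an HTML report. The project paths, the choice of cppcheck as the analyzer and the report layout are assumptions for illustration, not the Wettzell infrastructure.

      import subprocess
      from pathlib import Path

      PROJECTS = [Path("projects/fieldsystem"), Path("projects/correlator_tools")]

      def nightly_build(project: Path) -> dict:
          """Compile one project and run static analysis over its sources."""
          build = subprocess.run(["make", "-C", str(project)], capture_output=True, text=True)
          analysis = subprocess.run(["cppcheck", "--enable=warning", str(project)],
                                    capture_output=True, text=True)
          return {"project": project.name,
                  "build_ok": build.returncode == 0,
                  "warnings": analysis.stderr.splitlines()}

      def write_report(results, out=Path("report.html")):
          """Summarize build status and warning counts as a simple HTML table."""
          rows = "".join(
              f"<tr><td>{r['project']}</td><td>{'OK' if r['build_ok'] else 'FAILED'}</td>"
              f"<td>{len(r['warnings'])}</td></tr>" for r in results)
          out.write_text("<table><tr><th>Project</th><th>Build</th>"
                         f"<th>Static-analysis warnings</th></tr>{rows}</table>")

      if __name__ == "__main__":
          write_report([nightly_build(p) for p in PROJECTS])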

  6. A model for calculating the costs of in vivo dosimetry and portal imaging in radiotherapy departments.

    PubMed

    Kesteloot, K; Dutreix, A; van der Schueren, E

    1993-08-01

    The costs of in vivo dosimetry and portal imaging in radiotherapy are estimated on the basis of a detailed overview of the activities involved in both quality assurance techniques. These activities require the availability of equipment, the use of materials, and staff workload. The cost calculations lead to the conclusion that for most departments in vivo dosimetry with diodes will be a cheaper alternative than in vivo dosimetry with TLD meters. Whether TLD measurements can be performed more cheaply with an automatic reader (with a higher equipment cost, but lower workload) or with a semi-automatic reader (lower equipment cost, but higher workload) depends on the number of checks in the department. LSP systems (with a very high equipment cost) as well as on-line imaging systems will be cheaper portal imaging techniques than conventional port films (with high material costs) for large departments, or for smaller departments that perform frequent volume checks.

  7. User's manual for computer program BASEPLOT

    USGS Publications Warehouse

    Sanders, Curtis L.

    2002-01-01

    The checking and reviewing of daily records of streamflow within the U.S. Geological Survey is traditionally accomplished by hand-plotting and mentally collating tables of data. The process is time consuming, difficult to standardize, and subject to errors in computation, data entry, and logic. In addition, the presentation of flow data on the internet requires more timely and accurate computation of daily flow records. BASEPLOT was developed for checking and review of primary streamflow records within the U.S. Geological Survey. Use of BASEPLOT enables users to (1) provide efficiencies during the record checking and review process, (2) improve quality control, (3) achieve uniformity of checking and review techniques of simple stage-discharge relations, and (4) provide a tool for teaching streamflow computation techniques. The BASEPLOT program produces tables of quality control checks and produces plots of rating curves and discharge measurements; variable shift (V-shift) diagrams; and V-shifts converted to stage-discharge plots, using data stored in the U.S. Geological Survey Automatic Data Processing System database. In addition, the program plots unit-value hydrographs that show unit-value stages, shifts, and datum corrections; input shifts, datum corrections, and effective dates; discharge measurements; effective dates for rating tables; and numeric quality control checks. Checklist/tutorial forms are provided for reviewers to ensure completeness of review and standardize the review process. The program was written for the U.S. Geological Survey SUN computer using the Statistical Analysis System (SAS) software produced by SAS Institute, Incorporated.

  8. TU-FG-201-03: Automatic Pre-Delivery Verification Using Statistical Analysis of Consistencies in Treatment Plan Parameters by the Treatment Site and Modality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, S; Wu, Y; Chang, X

    Purpose: A novel computer software system, namely APDV (Automatic Pre-Delivery Verification), has been developed for verifying patient treatment plan parameters right prior to treatment deliveries in order to automatically detect and prevent catastrophic errors. Methods: APDV is designed to continuously monitor new DICOM plan files on the TMS computer at the treatment console. When new plans to be delivered are detected, APDV checks the consistencies of plan parameters and high-level plan statistics using underlying rules and statistical properties based on the given treatment site, technique and modality. These rules were quantitatively derived by retrospectively analyzing all the EBRT treatment plans of the past 8 years at the authors’ institution. Therapists and physicists will be notified with a warning message displayed on the TMS computer if any critical errors are detected, and check results, confirmations, and dismissal actions will be saved into a database for further review. Results: APDV was implemented as a stand-alone program using C# to ensure the required real-time performance. Mean values and standard deviations were quantitatively derived for various plan parameters including MLC usage, MU/cGy ratio, beam SSD, beam weighting, and the beam gantry angles (only for lateral targets) per treatment site, technique and modality. 2D rules combining the MU/cGy ratio and averaged SSD values were also derived using joint probabilities of confidence error ellipses. The statistics of these major treatment plan parameters quantitatively evaluate the consistency of any treatment plan, which facilitates the automatic APDV checking procedures. Conclusion: APDV could be useful in detecting and preventing catastrophic errors immediately before treatment deliveries. Future plans include automatic patient identity and patient setup checks after daily patient images are acquired by the machine and become available on the TMS computer. This project is supported by the Agency for Healthcare Research and Quality (AHRQ) under award 1R01HS0222888. The senior author received research grants from ViewRay Inc. and Varian Medical System.
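    The 2D confidence-ellipse rule can be illustrated as below: a new plan's MU/cGy ratio and averaged SSD are tested against the historical mean and covariance for the same treatment site, technique and modality via the Mahalanobis distance. The reference statistics here are made-up numbers, not those derived at the authors' institution.

      import numpy as np

      # Hypothetical historical statistics for one (site, technique, modality) combination.
      MEAN = np.array([2.8, 92.0])            # [MU/cGy ratio, mean SSD in cm]
      COV = np.array([[0.09, 0.10],
                      [0.10, 4.00]])
      THRESHOLD = 5.99                         # chi-square 95% quantile, 2 degrees of freedom

      def plan_is_consistent(mu_per_cgy: float, mean_ssd_cm: float) -> bool:
          """True if the plan lies inside the 95% confidence ellipse."""
          x = np.array([mu_per_cgy, mean_ssd_cm]) - MEAN
          mahalanobis_sq = x @ np.linalg.inv(COV) @ x
          return mahalanobis_sq <= THRESHOLD

      print(plan_is_consistent(2.9, 93.0))   # typical plan -> True
      print(plan_is_consistent(4.5, 80.0))   # outlier -> would trigger a warning message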

  9. Automatic Review of Abstract State Machines by Meta Property Verification

    NASA Technical Reports Server (NTRS)

    Arcaini, Paolo; Gargantini, Angelo; Riccobene, Elvinia

    2010-01-01

    A model review is a validation technique aimed at determining if a model is of sufficient quality and allows defects to be identified early in the system development, reducing the cost of fixing them. In this paper we propose a technique to perform automatic review of Abstract State Machine (ASM) formal specifications. We first detect a family of typical vulnerabilities and defects a developer can introduce during the modeling activity using the ASMs and we express such faults as the violation of meta-properties that guarantee certain quality attributes of the specification. These meta-properties are then mapped to temporal logic formulas and model checked for their violation. As a proof of concept, we also report the result of applying this ASM review process to several specifications.

  10. SU-E-J-199: A Software Tool for Quality Assurance of Online Replanning with MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G; Ahunbay, E; Li, X

    2015-06-15

    Purpose: To develop a quality assurance software tool, ArtQA, capable of automatically checking radiation treatment plan parameters, verifying plan data transfer from treatment planning system (TPS) to record and verify (R&V) system, performing a secondary MU calculation considering the effect of magnetic field from MR-Linac, and verifying the delivery and plan consistency, for online replanning. Methods: ArtQA was developed by creating interfaces to TPS (e.g., Monaco, Elekta), R&V system (Mosaiq, Elekta), and secondary MU calculation system. The tool obtains plan parameters from the TPS via direct file reading, and retrieves plan data both transferred from TPS and recorded during the actual delivery in the R&V system database via open database connectivity and structured query language. By comparing beam/plan datasets in different systems, ArtQA detects and outputs discrepancies between TPS, R&V system and secondary MU calculation system, and delivery. To consider the effect of 1.5T transverse magnetic field from MR-Linac in the secondary MU calculation, a method based on modified Clarkson integration algorithm was developed and tested for a series of clinical situations. Results: ArtQA is capable of automatically checking plan integrity and logic consistency, detecting plan data transfer errors, performing secondary MU calculations with or without a transverse magnetic field, and verifying treatment delivery. The tool is efficient and effective for pre- and post-treatment QA checks of all available treatment parameters that may be impractical with the commonly-used visual inspection. Conclusion: The software tool ArtQA can be used for quick and automatic pre- and post-treatment QA check, eliminating human error associated with visual inspection. While this tool is developed for online replanning to be used on MR-Linac, where the QA needs to be performed rapidly as the patient is lying on the table waiting for the treatment, ArtQA can be used as a general QA tool in radiation oncology practice. This work is partially supported by Elekta Inc.

  11. Revisiting the Procedures for the Vector Data Quality Assurance in Practice

    NASA Astrophysics Data System (ADS)

    Erdoğan, M.; Torun, A.; Boyacı, D.

    2012-07-01

    Immense use of topographical data in spatial data visualization, business GIS (Geographic Information Systems) solutions and applications, and mobile and location-based services has forced topo-data providers to create standard, up-to-date and complete data sets in a sustainable frame. Data quality has been studied and researched for more than two decades. There have been countless references on its semantics, its conceptual and logical representations, and many applications on spatial databases and GIS. However, there is a gap between research and practice in the sense of spatial data quality, which increases the costs and decreases the efficiency of data production. Spatial data quality is well known by academia and industry, but usually in different contexts. The research on spatial data quality has stated several issues of practical use such as descriptive information, metadata, fulfillment of spatial relationships among data, integrity measures, geometric constraints etc. The industry and data producers realize them in three stages: pre-, co- and post-data capturing. The pre-data capturing stage covers semantic modelling, data definition, cataloguing, modelling, data dictionary and schema creation processes. The co-data capturing stage covers general rules of spatial relationships, data- and model-specific rules such as topologic and model building relationships, geometric thresholds, data extraction guidelines, and object-object, object-belonging class, object-non-belonging class and class-class relationships to be taken into account during data capturing. The post-data capturing stage covers specified QC (quality check) benchmarks and checking compliance to general and specific rules. The vector data quality criteria are different from the views of producers and users, but these criteria are generally driven by the needs, expectations and feedbacks of the users. This paper presents a practical method which closes the gap between theory and practice. Turning spatial data quality concepts into developments and applications requires the conceptual, logical and, most importantly, physical existence of the data model, rules and knowledge of realization in the form of geo-spatial data. The applicable metrics and thresholds are determined on this concrete base. This study discusses the application of geo-spatial data quality issues and QA (quality assurance) and QC procedures in topographic data production. Firstly, we introduce the MGCP (Multinational Geospatial Co-production Program) data profile of the NATO (North Atlantic Treaty Organization) DFDD (DGIWG Feature Data Dictionary), the requirements of the data owner, the view of data producers for both data capturing and QC, and finally QA to fulfil user needs. Then, our practical and new approach, which divides the quality into three phases, is introduced. Finally, the implementation of our approach to accomplish metrics, measures and thresholds of quality definitions is discussed. In this paper, especially geometry and semantics quality and the quality control procedures that can be performed by the producers are discussed. Some applicable best practices that we experienced on techniques of quality control and on defining regulations that set out the objectives and data production procedures are given in the final remarks. These quality control procedures should include visual checks over the source data, captured vector data and printouts, some automatic checks that can be performed by software, and some semi-automatic checks performed in interaction with quality control personnel. Finally, these quality control procedures should ensure the geometric, semantic, attribution and metadata quality of vector data.

  12. A preliminary study into performing routine tube output and automatic exposure control quality assurance using radiology information system data.

    PubMed

    Charnock, P; Jones, R; Fazakerley, J; Wilde, R; Dunn, A F

    2011-09-01

    Data are currently being collected from hospital radiology information systems in the North West of the UK for the purposes of both clinical audit and patient dose audit. Could these data also be used to satisfy quality assurance (QA) requirements according to UK guidance? From 2008 to 2009, 731 653 records were submitted from 8 hospitals in North West England. For automatic exposure control QA, the protocol from Institute of Physics and Engineering in Medicine (IPEM) report 91 recommends that the milliampere seconds (mAs) be monitored for repeatability and reproducibility using a suitable phantom, at 70-81 kV. Abdomen AP and chest PA examinations were analysed to find the most common kilovoltage used; these records were then used to plot the average monthly mAs over time. IPEM report 91 also recommends that a range of commonly used clinical settings is used to check output reproducibility and repeatability. For each tube, the dose area product values were plotted over time for the two most common exposure factor sets. Results show that it is possible to do performance checks of AEC systems; however, more work is required to be able to monitor tube output performance. Procedurally, the management system requires work and the benefits to the workflow would need to be demonstrated.
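    The trend analysis described above can be sketched as follows: for chest PA records at the most common kV, the monthly mean mAs is plotted over time so that drifts in AEC performance become visible. The file name and column names are assumptions about the RIS export format, not the actual data set.

      import pandas as pd
      import matplotlib.pyplot as plt

      # Hypothetical RIS export with one row per exposure.
      records = pd.read_csv("ris_export.csv", parse_dates=["exam_date"])
      chest = records[records["exam_description"] == "Chest PA"]

      common_kv = chest["kv"].mode().iloc[0]           # most frequently used kV setting
      subset = chest[chest["kv"] == common_kv]

      # Monthly mean mAs as a reproducibility/repeatability trend.
      monthly_mas = subset.set_index("exam_date")["mas"].resample("M").mean()
      monthly_mas.plot(marker="o", ylabel="mean mAs",
                       title=f"Chest PA AEC trend at {common_kv} kV")
      plt.savefig("aec_trend.png")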

  13. MOM: A meteorological data checking expert system in CLIPS

    NASA Technical Reports Server (NTRS)

    Odonnell, Richard

    1990-01-01

    Meteorologists have long faced the problem of verifying the data they use. Experience shows that there is a sizable number of errors in the data reported by meteorological observers. This is unacceptable for computer forecast models, which depend on accurate data for accurate results. Most errors that occur in meteorological data are obvious to the meteorologist, but time constraints prevent hand-checking. For this reason, it is necessary to have a 'front end' to the computer model to ensure the accuracy of input. Various approaches to automatic data quality control have been developed by several groups. MOM is a rule-based system implemented in CLIPS and utilizing 'consistency checks' and 'range checks'. The system is generic in the sense that it knows some meteorological principles, regardless of specific station characteristics. Specific constraints kept as CLIPS facts in a separate file provide for system flexibility. Preliminary results show that the expert system has detected some inconsistencies not noticed by a local expert.
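    A minimal Python analogue of the two rule types used by MOM is given below (the actual system expresses them as CLIPS rules, with station-specific constraints kept as separate CLIPS facts): a range check on individual fields and a consistency check between fields of one report. The limits are illustrative.

      RANGE_LIMITS = {"temperature_c": (-60.0, 55.0), "dewpoint_c": (-70.0, 35.0)}

      def range_checks(report: dict) -> list:
          """Flag any field that falls outside its allowed physical range."""
          errors = []
          for field, (low, high) in RANGE_LIMITS.items():
              value = report.get(field)
              if value is not None and not (low <= value <= high):
                  errors.append(f"{field}={value} outside [{low}, {high}]")
          return errors

      def consistency_checks(report: dict) -> list:
          """Flag combinations of fields that are physically inconsistent."""
          errors = []
          # Dewpoint can never exceed the air temperature.
          if report["dewpoint_c"] > report["temperature_c"]:
              errors.append("dewpoint exceeds temperature")
          return errors

      obs = {"temperature_c": 12.0, "dewpoint_c": 15.0}
      print(range_checks(obs) + consistency_checks(obs))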

  14. AutoLock: a semiautomated system for radiotherapy treatment plan quality control

    PubMed Central

    Lowe, Matthew; Hardy, Mark J.; Boylan, Christopher J.; Whitehurst, Philip; Rowbottom, Carl G.

    2015-01-01

    A semiautomated system for radiotherapy treatment plan quality control (QC), named AutoLock, is presented. AutoLock is designed to augment treatment plan QC by automatically checking aspects of treatment plans that are well suited to computational evaluation, whilst summarizing more subjective aspects in the form of a checklist. The treatment plan must pass all automated checks and all checklist items must be acknowledged by the planner as correct before the plan is finalized. Thus AutoLock uniquely integrates automated treatment plan QC, an electronic checklist, and plan finalization. In addition to reducing the potential for the propagation of errors, the integration of AutoLock into the plan finalization workflow has improved efficiency at our center. Detailed audit data are presented, demonstrating that the treatment plan QC rejection rate fell by around a third following the clinical introduction of AutoLock. PACS number: 87.55.Qr PMID:26103498

  15. AutoLock: a semiautomated system for radiotherapy treatment plan quality control.

    PubMed

    Dewhurst, Joseph M; Lowe, Matthew; Hardy, Mark J; Boylan, Christopher J; Whitehurst, Philip; Rowbottom, Carl G

    2015-05-08

    A semiautomated system for radiotherapy treatment plan quality control (QC), named AutoLock, is presented. AutoLock is designed to augment treatment plan QC by automatically checking aspects of treatment plans that are well suited to computational evaluation, whilst summarizing more subjective aspects in the form of a checklist. The treatment plan must pass all automated checks and all checklist items must be acknowledged by the planner as correct before the plan is finalized. Thus AutoLock uniquely integrates automated treatment plan QC, an electronic checklist, and plan finalization. In addition to reducing the potential for the propagation of errors, the integration of AutoLock into the plan finalization workflow has improved efficiency at our center. Detailed audit data are presented, demonstrating that the treatment plan QC rejection rate fell by around a third following the clinical introduction of AutoLock.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaks, D; Fletcher, R; Salamon, S

    Purpose: To develop an online framework that tracks a patient’s plan from initial simulation to treatment and that helps automate elements of the physics plan checks usually performed in the record and verify (RV) system and treatment planning system. Methods: We have developed PlanTracker, an online plan tracking system that automatically imports new patient tasks and follows them through treatment planning, physics checks, therapy check, and chart rounds. A survey was designed to collect information about the amount of time spent by medical physicists on non-physics related tasks. We then assessed these non-physics tasks for automation. Using these surveys, we directed our PlanTracker software development towards the automation of intra-plan physics review. We then conducted a systematic evaluation of PlanTracker’s accuracy by generating test plans in the RV system software designed to mimic real plans, in order to test its efficacy in catching errors both real and theoretical. Results: PlanTracker has proven to be an effective improvement to the clinical workflow in a radiotherapy clinic. We present data indicating that roughly 1/3 of the physics plan check can be automated, and the workflow optimized, and show the functionality of PlanTracker. When the full system is in clinical use we will present data on improvement of time use in comparison to survey data prior to PlanTracker implementation. Conclusion: We have developed a framework for plan tracking and automatic checks in radiation therapy. We anticipate using PlanTracker as a basis for further development in clinical/research software. We hope that by eliminating the most simple and time-consuming checks, medical physicists may be able to spend their time on plan quality and other physics tasks rather than on arithmetic and logic checks. We see this development as part of a broader initiative to advance the clinical/research informatics infrastructure surrounding the radiotherapy clinic. This research project has been financially supported by Varian Medical Systems, Palo Alto, CA, through a Varian MRA.

  17. Argo workstation: a key component of operational oceanography

    NASA Astrophysics Data System (ADS)

    Dong, Mingmei; Xu, Shanshan; Miao, Qingsheng; Yue, Xinyang; Lu, Jiawei; Yang, Yang

    2018-02-01

    Operational oceanography requires quantity, quality, and availability of data sets as well as timeliness and effectiveness of data products. Without a steady and strong operational system supporting it, operational oceanography can never proceed far. In this paper we describe an integrated platform named the Argo Workstation. It operates as a data processing and management system capable of data collection, automatic data quality control, visual data checking, statistical data search and data service. Once set up, the Argo Workstation provides global high-quality Argo data to users every day, timely and effectively. It has not only played a key role in operational oceanography but also set an example for other operational systems.

  18. A Revised Earthquake Catalogue for South Iceland

    NASA Astrophysics Data System (ADS)

    Panzera, Francesco; Zechar, J. Douglas; Vogfjörd, Kristín S.; Eberhard, David A. J.

    2016-01-01

    In 1991, a new seismic monitoring network named SIL was started in Iceland with a digital seismic system and automatic operation. The system is equipped with software that reports the automatic location and magnitude of earthquakes, usually within 1-2 min of their occurrence. Normally, automatic locations are manually checked and re-estimated with corrected phase picks, but locations are subject to random errors and systematic biases. In this article, we consider the quality of the catalogue and produce a revised catalogue for South Iceland, the area with the highest seismic risk in Iceland. We explore the effects of filtering events using some common recommendations based on network geometry and station spacing and, as an alternative, filtering based on a multivariate analysis that identifies outliers in the hypocentre error distribution. We identify and remove quarry blasts, and we re-estimate the magnitude of many events. This revised catalogue which we consider to be filtered, cleaned, and corrected should be valuable for building future seismicity models and for assessing seismic hazard and risk. We present a comparative seismicity analysis using the original and revised catalogues: we report characteristics of South Iceland seismicity in terms of b value and magnitude of completeness. Our work demonstrates the importance of carefully checking an earthquake catalogue before proceeding with seismicity analysis.

  19. SNPflow: A Lightweight Application for the Processing, Storing and Automatic Quality Checking of Genotyping Assays

    PubMed Central

    Schönherr, Sebastian; Neuner, Mathias; Forer, Lukas; Specht, Günther; Kloss-Brandstätter, Anita; Kronenberg, Florian; Coassin, Stefan

    2013-01-01

    Single nucleotide polymorphisms (SNPs) play a prominent role in modern genetics. Current genotyping technologies such as Sequenom iPLEX, ABI TaqMan and KBioscience KASPar made the genotyping of huge SNP sets in large populations straightforward and allow the generation of hundreds of thousands of genotypes even in medium sized labs. While data generation is straightforward, the subsequent data conversion, storage and quality control steps are time-consuming, error-prone and require extensive bioinformatic support. In order to ease this tedious process, we developed SNPflow. SNPflow is a lightweight, intuitive and easily deployable application, which processes genotype data from Sequenom MassARRAY (iPLEX) and ABI 7900HT (TaqMan, KASPar) systems and is extendible to other genotyping methods as well. SNPflow automatically converts the raw output files to ready-to-use genotype lists, calculates all standard quality control values such as call rate, expected and real amount of replicates, minor allele frequency, absolute number of discordant replicates, discordance rate and the p-value of the HWE test, checks the plausibility of the observed genotype frequencies by comparing them to HapMap/1000-Genomes, provides a module for the processing of SNPs, which allow sex determination for DNA quality control purposes and, finally, stores all data in a relational database. SNPflow runs on all common operating systems and comes as both stand-alone version and multi-user version for laboratory-wide use. The software, a user manual, screenshots and a screencast illustrating the main features are available at http://genepi-snpflow.i-med.ac.at. PMID:23527209
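    Three of the quality control values listed above can be computed as in the following sketch: call rate, minor allele frequency, and the p-value of a chi-square test for Hardy-Weinberg equilibrium. The genotype coding and example data are assumptions for illustration; SNPflow itself derives these values from the raw Sequenom/ABI exports.

      from scipy.stats import chi2

      def snp_qc(genotypes):
          """Genotypes coded as 'AA', 'AB', 'BB', or None for a failed call."""
          called = [g for g in genotypes if g is not None]
          call_rate = len(called) / len(genotypes)

          n_aa, n_ab, n_bb = (called.count(g) for g in ("AA", "AB", "BB"))
          n = n_aa + n_ab + n_bb
          p = (2 * n_aa + n_ab) / (2 * n)              # frequency of allele A
          maf = min(p, 1 - p)

          # Chi-square test of observed vs expected genotype counts under HWE (1 dof).
          expected = [p * p * n, 2 * p * (1 - p) * n, (1 - p) ** 2 * n]
          observed = [n_aa, n_ab, n_bb]
          chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
          hwe_p = chi2.sf(chi_sq, df=1)
          return {"call_rate": call_rate, "maf": maf, "hwe_p": hwe_p}

      print(snp_qc(["AA"] * 60 + ["AB"] * 35 + ["BB"] * 5 + [None] * 2))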

  20. PLC based automatic control of pasteurize mix in ice cream production

    NASA Astrophysics Data System (ADS)

    Yao, Xudong; Liang, Kai

    2013-03-01

    This paper describes the automatic control device for pasteurizing mix in the ice cream production process. We design a control system scheme using the FBD program language and develop the program in the STEP 7-Micro/WIN software, checking for any bugs before downloading it into the PLC. The developed device is able to provide flexibility and accuracy in controlling the steps of mix pasteurization. The operator just inputs the duration and temperature of the pasteurized mix through the control panel. All the steps then finish automatically, without any intervention, in a preprogrammed sequence stored in the programmable logic controller (PLC). With the help of this equipment we can not only control the quality of ice cream under various conditions, but also simplify the production process. This control system is inexpensive and can be widely used in the ice cream production industry.

  1. Visualization and automatic detection of defect distribution in GaN atomic structure from sampling Moiré phase.

    PubMed

    Wang, Qinghua; Ri, Shien; Tsuda, Hiroshi; Kodera, Masako; Suguro, Kyoichi; Miyashita, Naoto

    2017-09-19

    Quantitative detection of defects in atomic structures is of great significance for evaluating product quality and exploring quality improvement processes. In this study, a Fourier transform filtered sampling Moiré technique was proposed to visualize and detect defects in atomic arrays over a large field of view. Defect distributions, defect numbers and defect densities could be visually and quantitatively determined from a single atomic structure image at low cost. The effectiveness of the proposed technique was verified by numerical simulations. As an application, the dislocation distributions in a GaN/AlGaN atomic structure in two directions were magnified and displayed in Moiré phase maps, and defect locations and densities were detected automatically. The proposed technique is able to provide valuable references to material scientists and engineers by checking the effect of various treatments for defect reduction. © 2017 IOP Publishing Ltd.

  2. Technical Note: Development and performance of a software tool for quality assurance of online replanning with a conventional Linac or MR-Linac.

    PubMed

    Chen, Guang-Pei; Ahunbay, Ergun; Li, X Allen

    2016-04-01

    To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from treatment planning system (TPS) to record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without a presence of a magnetic field from MR-Linac, and validating the delivery record consistency with the plan. The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose-volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from MR-Linac in the secondary MU calculation, a method based on modified Clarkson integration algorithm was developed and tested for a series of clinical situations. ArtQA has been used in their clinic and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency for plan check including plan quality, data transfer, and delivery check can be improved by at least 60%. The newly developed independent MU calculation tool for MR-Linac reduces the difference between the plan and calculated MUs by 10%. The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.

  3. Technical Note: Development and performance of a software tool for quality assurance of online replanning with a conventional Linac or MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guang-Pei, E-mail: gpchen@mcw.edu; Ahunbay, Ergun; Li, X. Allen

    Purpose: To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from treatment planning system (TPS) to record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without the presence of a magnetic field from MR-Linac, and validating the delivery record consistency with the plan. Methods: The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose–volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from MR-Linac in the secondary MU calculation, a method based on modified Clarkson integration algorithm was developed and tested for a series of clinical situations. Results: ArtQA has been used in their clinic and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency for plan check including plan quality, data transfer, and delivery check can be improved by at least 60%. The newly developed independent MU calculation tool for MR-Linac reduces the difference between the plan and calculated MUs by 10%. Conclusions: The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.

  4. An Algorithm for Automatic Checking of Exercises in a Dynamic Geometry System: iGeom

    ERIC Educational Resources Information Center

    Isotani, Seiji; de Oliveira Brandao, Leonidas

    2008-01-01

    One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as usually happens in Brazil.…

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, J; Yan, Y; Hager, F

    Purpose: Radiation therapy has evolved to become not only more precise and potent, but also more complicated to monitor and deliver. More rigorous and comprehensive quality assurance is needed to safeguard ever advancing radiation therapy. ICRU standards dictate that an ever growing set of treatment parameters be manually checked weekly by medical physicists. This “weekly chart check” procedure is laborious and subject to human errors or other factors. A computer-assisted chart checking process will enable more complete and accurate human review of critical parameters, reduce the risk of medical errors, and improve efficiency. Methods: We developed a web-based software system that enables thorough weekly quality assurance checks. In the backend, the software retrieves all machine parameters from a Treatment Management System (TMS) and compares them against the corresponding ones from the treatment planning system. They are also checked for validity against preset rules. The results are displayed as a web page in the front-end for physicists to review. Then a summary report is generated and uploaded automatically to the TMS as a record of weekly chart checking. Results: The software system has been deployed on a web server in our department’s intranet, and has been tested thoroughly by our clinical physicists. A plan parameter is highlighted when it is outside the preset limit. The developed system has changed the way charts are checked, with significantly improved accuracy, efficiency, and completeness. It has been shown to be robust, fast, and easy to use. Conclusion: A computer-assisted system has been developed for efficient, accurate, and comprehensive weekly chart checking. The system has been extensively validated and is being implemented for routine clinical use.

  6. Automatic Classification of Station Quality by Image Based Pattern Recognition of Ppsd Plots

    NASA Astrophysics Data System (ADS)

    Weber, B.; Herrnkind, S.

    2017-12-01

    The number of seismic stations is growing and it has become common practice to share station waveform data in real time with the main data centers such as IRIS, GEOFON, ORFEUS and RESIF. This has made analyzing station performance of increasing importance for automatic real-time processing and station selection. The value of a station depends on different factors such as the quality and quantity of the data, the location of the site, the general station density in the surrounding area and, finally, the type of application it can be used for. The approach described by McNamara and Boaz (2006) became standard in the last decade. It incorporates a probability density function (PDF) to display the distribution of seismic power spectral density (PSD). The low noise model (LNM) and high noise model (HNM) introduced by Peterson (1993) are also displayed in the PPSD plots introduced by McNamara and Boaz, allowing an estimation of station quality. Here we describe how we established an automatic station quality classification module using image-based pattern recognition on PPSD plots. The plots were split into 4 bands: short-period characteristics (0.1-0.8 s), body wave characteristics (0.8-5 s), microseismic characteristics (5-12 s) and long-period characteristics (12-100 s). The module sqeval connects to a SeedLink server, checks available stations, and requests PPSD plots through the Mustang service from IRIS, from PQLX/SQLX, or from GIS (gempa Image Server), a module that generates different kinds of images such as trace plots, map plots, helicorder plots or PPSD plots. It compares the image-based quality patterns for the different period bands with the retrieved PPSD plot. The quality of a station is divided into 5 classes for each of the 4 bands. Classes A, B, C, D define regular quality between the LNM and HNM, while the fifth class represents out-of-order stations with gain problems, missing data, etc. Over all period bands about 100 different patterns are required to classify most of the stations available on the IRIS server. The results are written to a file and stations can be filtered by quality; AAAA represents the best quality in all 4 bands. A differentiation between instrument types such as broadband and short-period stations is also possible. A regular check using the IRIS SeedLink and Mustang services allows users to be informed about new stations with a specific quality.
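    A simplified numeric analogue of the band classification is sketched below. The real sqeval module matches image patterns on the PPSD plots themselves; here each band is instead graded by where a station's median PSD lies between Peterson's low and high noise models, with the fifth class reserved for levels above the HNM. The noise-model numbers are rough placeholders, not the published NLNM/NHNM tables.

      BANDS = ("short-period", "body-wave", "microseism", "long-period")
      # Placeholder noise-model levels in dB per band: (low noise model, high noise model).
      NOISE_MODEL = {"short-period": (-168, -91), "body-wave": (-166, -97),
                     "microseism": (-149, -110), "long-period": (-185, -120)}

      def grade_band(median_psd_db, band):
          """Grade one period band A-D between LNM and HNM, or E above the HNM."""
          low, high = NOISE_MODEL[band]
          if median_psd_db > high:
              return "E"                       # out of order: gain problem, missing data, etc.
          position = (median_psd_db - low) / (high - low)
          return "ABCD"[min(3, max(0, int(position * 4)))]

      def classify_station(psd_by_band):
          return "".join(grade_band(psd_by_band[b], b) for b in BANDS)

      print(classify_station({"short-period": -155, "body-wave": -150,
                              "microseism": -135, "long-period": -160}))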

  7. Automatic Rail Extraction and Clearance Check with a Point Cloud Captured by MLS in a Railway

    NASA Astrophysics Data System (ADS)

    Niina, Y.; Honma, R.; Honma, Y.; Kondo, K.; Tsuji, K.; Hiramatsu, T.; Oketani, E.

    2018-05-01

    Recently, MLS (Mobile Laser Scanning) has been successfully used in road maintenance. In this paper, we present the application of MLS for the inspection of clearance along railway tracks of West Japan Railway Company. Point clouds around the track are captured by an MLS system mounted on a bogie, and the rail position can be determined by matching the shape of the ideal rail head to the point cloud with the ICP algorithm. A clearance check is executed automatically with a virtual clearance model laid along the extracted rail. As a result of the evaluation, the error of the extracted rail positions is less than 3 mm. With respect to the automatic clearance check, objects inside the clearance and those related to the contact line are successfully detected, as verified by visual confirmation.

  8. [Design and implementation of data checking system for Chinese materia medica resources survey].

    PubMed

    Wang, Hui; Zhang, Xiao-Bo; Ge, Xiao-Guang; Jin, Yan; Jing, Zhi-Xian; Qi, Yuan-Hua; Wang, Ling; Zhao, Yu-Ping; Wang, Wei; Guo, Lan-Ping; Huang, Lu-Qi

    2017-11-01

    The Chinese materia medica resources (CMMR) national survey information management system has collected a large amount of data. To help with data rechecking, reduce the internal workload, and improve the rechecking of survey data at the provincial and county levels, the National Resource Center for Chinese Materia Medica has designed a data checking system for the Chinese materia medica resources survey based on J2EE technology, the Java language and an Oracle database, in accordance with the SOA framework. It includes single data checks, check scoring, content management, and manual and automatic checking of survey and census data covering nine aspects (census implementation plans, key research information, general survey information, cultivation of medicinal materials, germplasm resources, medicinal materials, market research, traditional knowledge, and specimen information), comprising 20 classes and 175 indicators, in terms of both quantity and quality. The established system helps ensure data consistency and accuracy, and prompts county survey teams to complete data entry and arrangement in a timely manner, so as to improve the integrity, consistency and accuracy of the survey data and to ensure that the data are effective and available. This lays a foundation for providing accurate data support for summarizing, displaying and sharing the results of the national survey of Chinese materia medica resources (CMMR). Copyright© by the Chinese Pharmaceutical Association.

  9. Incorporating Digisonde Traces into the Ionospheric Data Assimilation Three Dimensional (IDA3D) Algorithm

    DTIC Science & Technology

    2006-05-11

    examined. These data were processed by the Automatic Real Time Ionogram Scaler with True Height (ARTIST) [Reinisch and Huang, 1983] program into electron...IDA3D. The data is locally available and previously quality checked. In addition, IDA3D maps using ARTIST-calculated profiles from hand-scaled...ionograms are available for comparison. The first test run of the IDA3D used only O-mode autoscaled virtual height profiles from five different digisondes

  10. An automatic tsunami warning system: TREMORS application in Europe

    NASA Astrophysics Data System (ADS)

    Reymond, D.; Robert, S.; Thomas, Y.; Schindelé, F.

    1996-03-01

    An integrated system named TREMORS (Tsunami Risk Evaluation through seismic Moment of a Real-time System) has been installed at the EVORA station in Portugal, which has been affected by historical tsunamis. The system is based on a three-component long-period seismic station linked to an IBM-compatible PC with specific software. The goals of this system are the following: detect earthquakes, locate them, compute their seismic moment, and give a seismic warning. The warnings are based on the seismic moment estimation and all the processing is carried out automatically. The purpose of this study is to check the quality of the estimation of the main parameters of interest for tsunami warning: the location, which depends on azimuth and distance, and finally the seismic moment, M0, which controls the earthquake size. The sine qua non condition for obtaining an automatic location is that the 3 main seismic phases P, S, R must be visible. This study gives satisfactory results (automatic analysis): ± 5° errors in azimuth and epicentral distance, and a standard deviation of less than a factor of 2 for the seismic moment M0.

  11. A knowledge-based framework for image enhancement in aviation security.

    PubMed

    Singh, Maneesha; Singh, Sameer; Partridge, Derek

    2004-12-01

    The main aim of this paper is to present a knowledge-based framework for automatically selecting the best image enhancement algorithm from several available on a per image basis in the context of X-ray images of airport luggage. The approach detailed involves a system that learns to map image features that represent its viewability to one or more chosen enhancement algorithms. Viewability measures have been developed to provide an automatic check on the quality of the enhanced image, i.e., is it really enhanced? The choice is based on ground-truth information generated by human X-ray screening experts. Such a system, for a new image, predicts the best-suited enhancement algorithm. Our research details the various characteristics of the knowledge-based system and shows extensive results on real images.

  12. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    PubMed

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and by offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
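    One automated measurement in such a daily phantom workflow might look like the following sketch: estimate the signal-to-noise ratio from a fixed signal region and a background region of the daily phantom image and append it to the scanner's time series. The ROI positions, file names and the use of a NumPy export are assumptions for illustration, not the authors' pipeline.

      import numpy as np

      def daily_snr(image: np.ndarray) -> float:
          """Crude SNR estimate from a central signal ROI and an air ROI in a corner."""
          signal_roi = image[96:160, 96:160]          # centre of the phantom (assumed geometry)
          noise_roi = image[0:32, 0:32]               # air in the corner
          return float(signal_roi.mean() / noise_roi.std())

      def append_measurement(csv_path, date, snr):
          """Append one day's value to the scanner's SNR time series."""
          with open(csv_path, "a") as f:
              f.write(f"{date},{snr:.2f}\n")

      image = np.load("daily_phantom.npy")            # e.g. pixel data exported from the DICOM
      append_measurement("scanner1_snr.csv", "2017-03-01", daily_snr(image))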

  13. Automatic Multi-sensor Data Quality Checking and Event Detection for Environmental Sensing

    NASA Astrophysics Data System (ADS)

    LIU, Q.; Zhang, Y.; Zhao, Y.; Gao, D.; Gallaher, D. W.; Lv, Q.; Shang, L.

    2017-12-01

    With the advances in sensing technologies, large-scale environmental sensing infrastructures are pervasively deployed to continuously collect data for various research and application fields, such as air quality study and weather condition monitoring. In such infrastructures, many sensor nodes are distributed in a specific area and each individual sensor node is capable of measuring several parameters (e.g., humidity, temperature, and pressure), providing massive data for natural event detection and analysis. However, due to the dynamics of the ambient environment, sensor data can be contaminated by errors or noise. Thus, data quality is still a primary concern for scientists before drawing any reliable scientific conclusions. To help researchers identify potential data quality issues and detect meaningful natural events, this work proposes a novel algorithm to automatically identify and rank anomalous time windows from multiple sensor data streams. More specifically, (1) the algorithm adaptively learns the characteristics of normal evolving time series and (2) models the spatial-temporal relationship among multiple sensor nodes to infer the anomaly likelihood of a time series window for a particular parameter in a sensor node. Case studies using different data sets are presented and the experimental results demonstrate that the proposed algorithm can effectively identify anomalous time windows, which may result from data quality issues or natural events.
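    A heavily simplified sketch of window-based anomaly ranking in this spirit is shown below: each time window of one sensor is scored both against that sensor's own history (temporal) and against the concurrent readings of neighboring nodes (spatial), and windows are ranked by the combined score. The equal weighting and the synthetic data are illustrative only and do not reproduce the adaptive learning of the paper.

      import numpy as np

      def window_scores(node, neighbors, window=12):
          """node: 1-D array of one parameter; neighbors: 2-D array (n_neighbors x time)."""
          scores = []
          for start in range(0, len(node) - window, window):
              seg = node[start:start + window]
              history = node[:start + window]
              temporal = abs(seg.mean() - history.mean()) / (history.std() + 1e-9)
              spatial = abs(seg.mean() - neighbors[:, start:start + window].mean()) \
                  / (neighbors[:, start:start + window].std() + 1e-9)
              scores.append((start, 0.5 * temporal + 0.5 * spatial))
          return sorted(scores, key=lambda s: s[1], reverse=True)

      rng = np.random.default_rng(0)
      node = rng.normal(20.0, 0.5, 288)        # e.g. one temperature reading every 5 minutes
      node[120:132] += 6.0                     # injected anomalous window
      neighbors = rng.normal(20.0, 0.5, (4, 288))
      print(window_scores(node, neighbors)[:3])   # most anomalous windows first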

  14. Automatic Detection of Mitosis and Nuclei From Cytogenetic Images by CellProfiler Software for Mitotic Index Estimation.

    PubMed

    González, Jorge Ernesto; Radl, Analía; Romero, Ivonne; Barquinero, Joan Francesc; García, Omar; Di Giorgio, Marina

    2016-12-01

    Mitotic Index (MI) estimation, expressed as the percentage of cells in mitosis, plays an important role as a quality control endpoint. To this end, MI is applied to check the lot of media and reagents to be used throughout the assay and also to check cellular viability after blood sample shipping, indicating satisfactory/unsatisfactory conditions for the progression of the cell culture. The objective of this paper was to apply the CellProfiler open-source software for automatic detection of mitotic figures and nuclei from digitized images of cultured human lymphocytes for MI assessment, and to compare its performance to that of semi-automatic and visual detection. Lymphocytes were irradiated and cultured for mitosis detection. Sets of images from cultures were analyzed visually and the findings were compared with those obtained using the CellProfiler software. The CellProfiler pipeline includes the detection of nuclei and mitoses with 80% sensitivity and more than 99% specificity. We conclude that CellProfiler is a reliable tool for counting mitoses and nuclei from cytogenetic images, saves considerable time compared to manual operation and reduces the variability derived from the scoring criteria of different scorers. The CellProfiler automated pipeline achieves good agreement with the visual counting workflow, i.e. it allows fully automated mitotic and nuclei scoring in cytogenetic images, yielding reliable information with minimal user intervention. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. 40 CFR 85.2232 - Calibrations, adjustments-EPA 81.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... checks. Within one hour prior to a test, the analyzers shall be zeroed and spanned. Ambient air is acceptable as a zero gas; an electrical span check is acceptable. Zero and span checks shall be made on the lowest range capable of reading the short test standard. Analyzers that perform an automatic zero/span...

  16. Creation of a master table for checking indication and contraindication of medicine from a knowledge base linked with a thesaurus.

    PubMed

    Ji, Shanmei; Matsumura, Yasushi; Kuwata, Shigeki; Nakano, Hirohiko; Chen, Yufeng; Teratani, Tadamasa; Zhang, Qiyan; Mineno, Takahiro; Takeda, Hiroshi

    2004-12-01

    To develop a system for checking indications and contraindications of medicines in a prescription order entry system, a master table consisting of the disease names corresponding to the medicines adopted in a hospital is needed. The creation of this table requires considerable manpower. We developed a Web-based system for constructing a medicine/disease thesaurus and a knowledge base. Through authority management of users, this system enables many specialists to create the thesaurus collaboratively without confusion. It supports the creation of a knowledge base using concept names by referring to the thesaurus, which is automatically converted to the check master table. When a disease name or medicine name was added to the thesaurus, the check table was automatically updated. We constructed a thesaurus and a knowledge base in the field of circulatory system disease. The knowledge base linked with the thesaurus proved to be efficient for making the check master table for indications/contraindications of medicines.

  17. PDB data curation.

    PubMed

    Wang, Yanchao; Sunderraman, Rajshekhar

    2006-01-01

    In this paper, we propose two architectures for curating PDB data to improve its quality. The first, the PDB Data Curation System, is developed by adding two parts, a Checking Filter and a Curation Engine, between the User Interface and the Database. This architecture supports basic PDB data curation. The other, the PDB Data Curation System with XCML, is designed for further curation and adds four more parts, PDB-XML, PDB, OODB, and Protin-OODB, to the previous one. This architecture uses the XCML language to automatically check errors in PDB data, making the data more consistent and accurate. These two tools can be used for cleaning existing PDB files and creating new PDB files. We also present some ideas on how to add constraints and assertions with XCML to obtain better data. In addition, we discuss data provenance issues that may affect data accuracy and consistency.

  18. Serologic test systems development. Progress report, July 1, 1976--September 30, 1977

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, G.C.; Clinard, E.H.; Bartlett, M.L.

    1978-01-01

    Work has continued on the development and application of the Enzyme-Labeled Antibody (ELA) test to the USDA needs. Results on trichinosis, brucellosis, and staphylococcal enterotoxin A detection are very encouraging. A field test for trichinosis detection is being worked out in cooperation with Food Safety and Quality Service personnel. Work is in progress with the Technicon Instrument Corporation to develop a modification of their equipment to automatically process samples by the ELA procedure. An automated ELA readout instrument for 96-well trays has been completed and is being checked out.

  19. Clinical Decision Support System to Enhance Quality Control of Spirometry Using Information and Communication Technologies

    PubMed Central

    2014-01-01

    Background We recently demonstrated that the quality of spirometry in primary care could markedly improve with remote offline support from specialized professionals. It is hypothesized that implementation of automatic online assessment of spirometry quality using information and communication technologies may significantly enhance the potential for extensive deployment of a high-quality spirometry program in integrated care settings. Objective The objective of the study was to elaborate and validate a Clinical Decision Support System (CDSS) for automatic online quality assessment of spirometry. Methods The CDSS was developed through a three-step process: (1) identification of the optimal sampling frequency; (2) iterations to build an initial version using the 24 standard spirometry curves recommended by the American Thoracic Society; and (3) iterations to refine the CDSS using 270 curves from 90 patients. In each of these steps the results were checked against one expert. Finally, 778 spirometry curves from 291 patients were analyzed for validation purposes. Results The CDSS generated appropriate online classification and certification in 685/778 (88.1%) of spirometry tests, with 96% sensitivity and 95% specificity. Conclusions Consequently, only 93/778 (11.9%) of spirometry tests required offline remote classification by an expert, indicating a potential positive role of the CDSS in the deployment of a high-quality spirometry program in an integrated care setting. PMID:25600957
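
    The abstract does not list the rules the CDSS encodes, so the sketch below only illustrates the general idea of automatic online quality grading of a spirometry manoeuvre. The thresholds are commonly cited ATS-style acceptability criteria used as placeholders, not the paper's validated classifier.

```python
# Hypothetical sketch of rule-based spirometry quality grading. The
# thresholds (back-extrapolated volume, forced expiratory time, end-of-test
# plateau) are commonly cited ATS-style acceptability criteria used as
# placeholders; they are not the rules validated in the paper's CDSS.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    fvc_l: float                   # forced vital capacity (L)
    bev_l: float                   # back-extrapolated volume (L)
    fet_s: float                   # forced expiratory time (s)
    last_second_volume_ml: float   # volume change during the final second (mL)

def grade(m: Manoeuvre):
    issues = []
    if m.bev_l > max(0.15, 0.05 * m.fvc_l):
        issues.append("hesitant start (back-extrapolated volume too large)")
    if m.fet_s < 6.0:
        issues.append("expiration shorter than 6 s")
    if m.last_second_volume_ml > 25.0:
        issues.append("no end-of-test plateau")
    return ("acceptable" if not issues else "needs expert review"), issues

print(grade(Manoeuvre(fvc_l=4.1, bev_l=0.12, fet_s=7.2, last_second_volume_ml=10.0)))
```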

  20. SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, J; Yang, D

    2015-06-15

    Purpose: In the course of radiation therapy, the complex information processing workflow can result in errors, such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, so as to allow automatic checking of position and orientation for patients' daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation and treatment site. Results: Five days of clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. 182 of 200 kV portal images were detected correctly, a rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It only requires the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the workflow of radiation therapy and so improve patient safety. In addition, the auto-detection results, such as the patient treatment site position and patient orientation, could be used to guide subsequent image processing procedures, e.g. verification of daily patient setup accuracy. This work was partially supported by a research grant from Varian Medical Systems.
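
    As a rough illustration of the matching step, the sketch below down-samples and contrast-enhances a portal image and scores it against a fused whole-body DRR template under a few candidate orientations. The function names, the flip-based orientation test and the assumption that the two images are at a comparable pixel scale are all illustrative; the authors' implementation is not described at this level of detail.

```python
# Minimal sketch of matching a preprocessed kV portal image against a fused
# whole-body DRR template to estimate the imaged region and orientation.
# File names, the flip-based orientation test and the comparable-scale
# assumption are illustrative, not the authors' implementation.
import cv2
import numpy as np

def preprocess(portal, scale=0.25):
    img = cv2.normalize(portal, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    img = cv2.equalizeHist(img)                       # contrast enhancement
    return cv2.resize(img, None, fx=scale, fy=scale)  # down-sampling

def locate(template, portal):
    """Return the best-scoring candidate orientation and its location."""
    best = None
    for name, candidate in {"head-first": portal,
                            "feet-first": cv2.flip(portal, 0),
                            "mirrored": cv2.flip(portal, 1)}.items():
        res = cv2.matchTemplate(template, candidate, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if best is None or score > best[1]:
            best = (name, score, loc)
    return best  # (orientation, score, (x, y) position within the template)

template = cv2.imread("fused_wholebody_drr.png", cv2.IMREAD_GRAYSCALE)  # assumed file
portal = preprocess(cv2.imread("daily_portal.png", cv2.IMREAD_GRAYSCALE))
print(locate(template, portal))
```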

  1. 46 CFR 61.30-20 - Automatic control and safety tests.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Automatic control and safety tests. 61.30-20 Section 61... TESTS AND INSPECTIONS Tests and Inspections of Fired Thermal Fluid Heaters § 61.30-20 Automatic control and safety tests. Operational tests and checks of all safety and limit controls, combustion controls...

  2. TU-AB-201-02: An Automated Treatment Plan Quality Assurance Program for Tandem and Ovoid High Dose-Rate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, J; Shi, F; Hrycushko, B

    2015-06-15

    Purpose: For tandem and ovoid (T&O) HDR brachytherapy in our clinic, it is required that the planning physicist manually capture ∼10 images during planning, perform a secondary dose calculation and generate a report, combine them into a single PDF document, and upload it to a record-and-verify system to prove to an independent plan checker that the case was planned correctly. Not only does this slow down the already time-consuming clinical workflow, the PDF document also limits the number of parameters that can be checked. To solve these problems, we have developed a web-based automatic quality assurance (QA) program. Methods: We set up a QA server accessible through a web interface. A T&O plan and CT images are exported as DICOM-RT files and uploaded to the server. The software checks 13 geometric features, e.g. whether the dwell positions are reasonable, and 10 dosimetric features, e.g. secondary dose calculations via the TG-43 formalism and D2cc to critical structures. A PDF report is automatically generated with errors and potential issues highlighted. It also contains images showing important geometric and dosimetric aspects to prove the plan was created following standard guidelines. Results: The program has been clinically implemented in our clinic. In each of the 58 T&O plans we tested, a 14-page QA report was automatically generated. It took ∼45 sec to export the plan and CT images and ∼30 sec to perform the QA tests and generate the report. In contrast, our manual QA document preparation took on average ∼7 minutes under optimal conditions and up to 20 minutes when mistakes were made during the document assembly. Conclusion: We have tested the efficiency and effectiveness of an automated process for treatment plan QA of HDR T&O cases. This software was shown to improve the workflow compared to our conventional manual approach.
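
    One of the dosimetric checks named above is a secondary dose calculation via the TG-43 formalism. The sketch below implements the point-source form of TG-43 and sums dose over dwell positions; the air-kerma strength, dose-rate constant, radial dose function samples and anisotropy factor are illustrative placeholders rather than consensus data for any particular source model.

```python
# Minimal sketch of a TG-43 point-source secondary dose check of the kind
# performed for each dwell position. The air-kerma strength, dose-rate
# constant, radial dose function samples and anisotropy factor are
# illustrative placeholders, not consensus data for a specific Ir-192 source.
import numpy as np

R0 = 1.0  # TG-43 reference distance (cm)

def tg43_point_dose_rate(sk, dose_rate_constant, r_cm, g_r, phi_an):
    """Dose rate (cGy/h) at distance r_cm from a point source."""
    return sk * dose_rate_constant * (R0 / r_cm) ** 2 * g_r * phi_an

# Placeholder radial dose function g(r), interpolated from tabulated values.
g_table_r = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
g_table_v = np.array([1.00, 1.00, 0.99, 0.98, 0.94])

def dose_to_point(dwells, point, sk=40800.0, big_lambda=1.109, phi_an=0.98):
    """Sum dose (cGy) at `point` over dwell positions given as (x, y, z, time_s)."""
    total = 0.0
    for x, y, z, t_s in dwells:
        r = max(float(np.linalg.norm(np.subtract(point, (x, y, z)))), 0.1)
        g_r = float(np.interp(r, g_table_r, g_table_v))
        total += tg43_point_dose_rate(sk, big_lambda, r, g_r, phi_an) * t_s / 3600.0
    return total

# Example: dose to a point 2 cm lateral to two dwell positions.
print(dose_to_point([(0.0, 0.0, 0.0, 30.0), (0.0, 0.0, 0.5, 25.0)], point=(2.0, 0.0, 0.0)))
```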

  3. Cycle time reduction by Html report in mask checking flow

    NASA Astrophysics Data System (ADS)

    Chen, Jian-Cheng; Lu, Min-Ying; Fang, Xiang; Shen, Ming-Feng; Ma, Shou-Yuan; Yang, Chuen-Huei; Tsai, Joe; Lee, Rachel; Deng, Erwin; Lin, Ling-Chieh; Liao, Hung-Yueh; Tsai, Jenny; Bowhill, Amanda; Vu, Hien; Russell, Gordon

    2017-07-01

    The Mask Data Correctness Check (MDCC) is a reticle-level, multi-layer DRC-like check evolved from the mask rule check (MRC). The MDCC uses an extended job deck (EJB) to achieve mask composition and to perform a detailed check of the positioning and integrity of each component of the reticle. Different design patterns on the mask are mapped to different layers, so users can review the whole reticle and check the interactions between different designs before the final mask pattern file is available. However, many types of MDCC check results, such as errors from overlapping patterns, often have very large and complex-shaped highlighted areas covering the boundary of the design. Users have to load the result OASIS file, overlay it on the original database assembled in the MDCC process in a layout viewer, and then search for the details of the check results. We introduce a quick result-reviewing method based on an HTML-format report generated by Calibre® RVE. In the report generation process, we analyze and extract the essential part of the result OASIS file into a result database (RDB) file using standard verification rule format (SVRF) commands. Calibre® RVE automatically loads the assembled reticle pattern and generates screenshots of these check results. All the processes are triggered automatically just after the MDCC process finishes. Users only have to open the HTML report to get the information they need: for example, the check summary, captured images of results and their coordinates.

  4. Mobile Genome Express (MGE): A comprehensive automatic genetic analyses pipeline with a mobile device.

    PubMed

    Yoon, Jun-Hee; Kim, Thomas W; Mendez, Pedro; Jablons, David M; Kim, Il-Jin

    2017-01-01

    The development of next-generation sequencing (NGS) technology makes it possible to sequence whole exomes or genomes. However, data analysis is still the biggest bottleneck for its wide implementation. Most laboratories still depend on manual procedures for data handling and analysis, which translates into delays and decreased efficiency in the delivery of NGS results to doctors and patients. Thus, there is high demand for an automatic, easy-to-use NGS data analysis system. We developed a comprehensive, automatic genetic analysis controller named Mobile Genome Express (MGE) that works on smartphones and other mobile devices. MGE can handle all the steps of genetic analysis, such as sample information submission, sequencing run quality checks from the sequencer, secure data transfer and results review. We sequenced an Actrometrix control DNA containing multiple proven human mutations using a targeted sequencing panel, and the whole analysis was managed by MGE and its data reviewing program, ELECTRO. All steps were processed automatically except for the final sequencing review procedure with ELECTRO to confirm mutations. The data analysis process was completed within several hours. We confirmed that the mutations we identified were consistent with our previous results obtained using multi-step, manual pipelines.

  5. Design, Development, and Demonstration of a Prognostic and Diagnostics Health Monitoring System for the CROWS Platform

    DTIC Science & Technology

    2010-06-01

    automatically appended onto the data packet by the CC2420 transceiver. The frame control field (FCF), data sequence number, and frame check sequence (FCS...by the CC2420 over the MAC protocol data unit (MPDU), i.e., the length field is not part of the FCS. This field is automatically generated and...verified by the CC2420 hardware when the AUTOCRC control bit is set in the MODEMCTRL0 control register’s field. If the FCS check indicates that a data

  6. Technical evaluation of the novel preanalytical module on instrumentation laboratory ACL TOP: advancing automation in hemostasis testing.

    PubMed

    Lippi, Giuseppe; Ippolito, Luigi; Favaloro, Emmanuel J

    2013-10-01

    Automation in hemostasis testing is entering an exciting and unprecedented phase. This study was planned to assess the performance of the new preanalytical module on the Instrumentation Laboratory ACL TOP hemostasis testing system. The evaluation included interference studies to define reliable thresholds for rejecting samples with significant concentrations of interfering substances; within-run imprecision studies of plasma indices at four different interference degrees for each index; comparison studies with reference measures of hemolysis index, bilirubin, and triglycerides on clinical chemistry analyzers; and calculation of turnaround time with and without automatic performance of the preanalytical check. The upper limits for sample rejection according to our interference studies were 3.6 g/L for hemoglobin, 13.6 mg/dL for bilirubin, and 1454 mg/dL for triglycerides. We found optimal precision for all indices (0.6% to 3.1% at clinically relevant thresholds) and highly significant correlations with reference measures on clinical chemistry analyzers (from 0.985 to 0.998). The limited increase in turnaround time (i.e., +3% and +5% with or without cap-piercing, respectively), coupled with no additional costs over the performance of normal coagulation assays, helps make the automatic check of plasma indices on the ACL TOP a reliable and practical approach for improving testing quality and safeguarding patient safety.
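
    The rejection thresholds reported above map directly onto a simple screening rule. The sketch below encodes them as a lookup; the flagging logic itself is an illustrative assumption, not the instrument's firmware behaviour.

```python
# Minimal sketch of a preanalytical rejection check using the interference
# thresholds reported above; the flagging logic is an illustrative
# assumption, not the instrument's firmware behaviour.
REJECTION_LIMITS = {
    "hemoglobin_g_per_L": 3.6,
    "bilirubin_mg_per_dL": 13.6,
    "triglycerides_mg_per_dL": 1454.0,
}

def preanalytical_check(indices: dict) -> list:
    """Return the list of plasma indices that exceed the rejection thresholds."""
    return [name for name, limit in REJECTION_LIMITS.items()
            if indices.get(name, 0.0) > limit]

sample = {"hemoglobin_g_per_L": 4.2, "bilirubin_mg_per_dL": 2.0,
          "triglycerides_mg_per_dL": 310.0}
flags = preanalytical_check(sample)
print("reject" if flags else "accept", flags)
```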

  7. 46 CFR 61.30-20 - Automatic control and safety tests.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 2 2012-10-01 2012-10-01 false Automatic control and safety tests. 61.30-20 Section 61.30-20 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Tests and Inspections of Fired Thermal Fluid Heaters § 61.30-20 Automatic control and safety tests. Operational tests and check...

  8. Electronic circuit provides automatic level control for liquid nitrogen traps

    NASA Technical Reports Server (NTRS)

    Turvy, R. R.

    1968-01-01

    An electronic circuit, based on the principle that thermistor resistance increases as temperature decreases, provides automatic level control for liquid nitrogen cold traps. The electronically controlled apparatus is practically service-free, requiring only occasional reliability checks.

  9. Automated generation of a World Wide Web-based data entry and check program for medical applications.

    PubMed

    Kiuchi, T; Kaihara, S

    1997-02-01

    The World Wide Web-based form is a promising method for the construction of on-line data collection systems for clinical and epidemiological research. It is, however, laborious to prepare a common gateway interface (CGI) program for each project, which the World Wide Web server needs in order to handle the submitted data. In medicine, it is even more laborious because the CGI program must check the entered data for deficits, type, ranges, and logical errors (bad combinations of data) for quality assurance, as well as data length and meta-characters to enhance the security of the server. We have extended the specification of the hypertext markup language (HTML) form to accommodate the information necessary for such data checking, and we have developed software named AUTOFORM for this purpose. The software automatically analyzes the extended HTML form and generates the corresponding ordinary HTML form, 'Makefile', and C source of the CGI programs. The resultant CGI program checks the data entered through the HTML form, records it in a computer, and returns it to the end-user. AUTOFORM drastically reduces the burden of developing a World Wide Web-based data entry system and allows the CGI programs to be prepared more securely and reliably than if they had been written from scratch.
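
    The generated CGI programs are written in C, but the categories of checks they perform (deficits, type, ranges, logical combinations, data length and meta-characters) can be sketched compactly. The field specification format below is an illustrative assumption, not AUTOFORM's extended HTML syntax.

```python
# Minimal sketch of the kinds of server-side checks a generated data-entry
# handler performs: deficits, type, range, logical combinations, length and
# meta-characters. The field specification format is an illustrative
# assumption, not AUTOFORM's extended HTML syntax.
import re

FIELDS = {
    "age":    {"type": int,   "min": 0,   "max": 120, "required": True},
    "weight": {"type": float, "min": 0.5, "max": 300, "required": True},
    "ward":   {"type": str,   "max_len": 16,          "required": False},
}
META = re.compile(r"[;<>|&`$]")   # reject shell/HTML meta-characters

def check_record(form: dict) -> list:
    errors = []
    for name, spec in FIELDS.items():
        raw = form.get(name, "").strip()
        if not raw:
            if spec["required"]:
                errors.append(f"{name}: missing value")
            continue
        if META.search(raw) or len(raw) > spec.get("max_len", 64):
            errors.append(f"{name}: illegal characters or too long")
            continue
        try:
            value = spec["type"](raw)
        except ValueError:
            errors.append(f"{name}: wrong type")
            continue
        if not (spec.get("min", value) <= value <= spec.get("max", value)):
            errors.append(f"{name}: out of range")
    # Example of a logical (combination) check across fields.
    if form.get("age") == "0" and form.get("weight") and float(form["weight"]) > 10:
        errors.append("age/weight: implausible combination")
    return errors

print(check_record({"age": "42", "weight": "81.5", "ward": "3B"}))
```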

  10. A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad

    2016-09-01

    Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, that uncertainty will spread into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases automatic fitting is used, combining geostatistical principles and optimization techniques to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty into the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) used in automatic fitting. Also, since the variogram model function (γ) and the number of structures (m) also affect the model quality, a program has been provided in the MATLAB software that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single/multi-structured fitted models, the cross-validation method is used, and the best model is presented to the user as the output. In order to check the capability of the proposed objective function and the procedure, 3 case studies are presented.
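
    A compact way to see the two ingredients, the weighted-least-squares objective and the simulated annealing search, is the sketch below, which fits a single spherical structure to an experimental variogram. The Cressie-style weights, parameter bounds and cooling schedule are illustrative assumptions; the paper's MATLAB program additionally searches over nested structures and model types.

```python
# Minimal sketch of automatic variogram fitting: a weighted-least-squares
# objective for a single spherical structure, minimised by simulated
# annealing. Weights, bounds and cooling schedule are illustrative
# assumptions; the paper's program also searches over nested structures.
import numpy as np

def spherical(h, nugget, sill, a):
    """Spherical variogram model with range a."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < a,
                 nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3),
                 nugget + sill)
    return np.where(h == 0, 0.0, g)

def wls_objective(params, h, gamma_exp, n_pairs):
    gamma_mod = spherical(h, *params)
    w = n_pairs / np.maximum(gamma_mod, 1e-9) ** 2   # Cressie-style weights
    return float(np.sum(w * (gamma_exp - gamma_mod) ** 2))

def fit_sa(h, gamma_exp, n_pairs, bounds, n_iter=20000, t0=1.0, seed=0):
    gen = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = lo + gen.random(3) * (hi - lo)
    fx = wls_objective(x, h, gamma_exp, n_pairs)
    best, fbest = x.copy(), fx
    for k in range(n_iter):
        t = t0 * (1.0 - k / n_iter) + 1e-6                    # linear cooling
        cand = np.clip(x + gen.normal(scale=0.05 * (hi - lo)), lo, hi)
        fc = wls_objective(cand, h, gamma_exp, n_pairs)
        if fc < fx or gen.random() < np.exp((fx - fc) / t):   # Metropolis acceptance
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest

h = np.array([10, 20, 40, 60, 80, 100.0])            # lag distances
gamma_exp = np.array([2.1, 3.8, 6.0, 7.1, 7.4, 7.5])
n_pairs = np.array([120, 180, 240, 220, 200, 150])
bounds = [(0, 3), (1, 10), (20, 200)]                # nugget, sill, range
print(fit_sa(h, gamma_exp, n_pairs, bounds))
```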

  11. Computing with impure numbers - Automatic consistency checking and units conversion using computer algebra

    NASA Technical Reports Server (NTRS)

    Stoutemyer, D. R.

    1977-01-01

    The computer algebra language MACSYMA enables the programmer to include symbolic physical units in computer calculations, and features automatic detection of dimensionally-inhomogeneous formulas and conversion of inconsistent units in a dimensionally homogeneous formula. Some examples illustrate these features.

  12. flowAI: automatic and interactive anomaly discerning tools for flow cytometry data.

    PubMed

    Monaco, Gianni; Chen, Hao; Poidinger, Michael; Chen, Jinmiao; de Magalhães, João Pedro; Larbi, Anis

    2016-08-15

    Flow cytometry (FCM) is widely used in both clinical and basic research to characterize cell phenotypes and functions. The latest FCM instruments analyze up to 20 markers of individual cells, producing high-dimensional data. This requires the use of the latest clustering and dimensionality reduction techniques to automatically segregate cell sub-populations in an unbiased manner. However, automated analyses may lead to false discoveries due to inter-sample differences in quality and properties. We present an R package, flowAI, containing two methods to clean FCM files of unwanted events: (i) an automatic method that adopts algorithms for the detection of anomalies, and (ii) an interactive method with a graphical user interface implemented as an R Shiny application. The general approach behind the two methods consists of three key steps to check for and remove suspected anomalies that derive from (i) abrupt changes in the flow rate, (ii) instability of signal acquisition, and (iii) outliers at the lower limit and margin events at the upper limit of the dynamic range. For each file analyzed, our software generates a summary of the quality assessment from the aforementioned steps. The software presented is an intuitive solution seeking to improve the results not only of manual but, in particular, of automatic analysis of FCM data. R source code is available through Bioconductor: http://bioconductor.org/packages/flowAI/. Contacts: mongianni1@gmail.com or Anis_Larbi@immunol.a-star.edu.sg. Supplementary data are available at Bioinformatics online.

  13. ConfocalCheck - A Software Tool for the Automated Monitoring of Confocal Microscope Performance

    PubMed Central

    Hng, Keng Imm; Dormann, Dirk

    2013-01-01

    Laser scanning confocal microscopy has become an invaluable tool in biomedical research but regular quality testing is vital to maintain the system’s performance for diagnostic and research purposes. Although many methods have been devised over the years to characterise specific aspects of a confocal microscope like measuring the optical point spread function or the field illumination, only very few analysis tools are available. Our aim was to develop a comprehensive quality assurance framework ranging from image acquisition to automated analysis and documentation. We created standardised test data to assess the performance of the lasers, the objective lenses and other key components required for optimum confocal operation. The ConfocalCheck software presented here analyses the data fully automatically. It creates numerous visual outputs indicating potential issues requiring further investigation. By storing results in a web browser compatible file format the software greatly simplifies record keeping allowing the operator to quickly compare old and new data and to spot developing trends. We demonstrate that the systematic monitoring of confocal performance is essential in a core facility environment and how the quantitative measurements obtained can be used for the detailed characterisation of system components as well as for comparisons across multiple instruments. PMID:24224017

  14. A new scintillator detector system for the quality assurance of 60Co and high-energy therapy machines.

    PubMed

    Beddar, A S

    1994-02-01

    A new single-channel detector system has been developed to perform routine quality assurance of 60Co and high-energy therapy machines. The detector is composed of an orange plastic scintillator, optically coupled to a radiation-resistant polycarbonate light pipe, and a shielded silicon photodiode embedded in a hollow solid water phantom block. No temperature or pressure corrections are required. Stability results were consistent, with standard deviations fluctuating from 0.03% up to 0.09% for 60Co and from 0.05% up to 0.18% for other high energies. The device provides a quick, easy and reliable remote beam output check, using an automatic reset based on a radiation-triggered system and storing multiple sequential readings. The reproducibility of the detector was checked on a daily and weekly basis at different energies (60Co, 6 MV and 18 MV x-rays and 6, 9, 12, 16 and 20 MeV electron beams). The results were found to be consistent with those obtained using an ion chamber. Other characteristics of the detector, including the consequences of radiation-induced light in the light pipe (stem effect) and radiation damage to the system, are briefly discussed.

  15. Brewer spectrometer total ozone column measurements in Sodankylä

    NASA Astrophysics Data System (ADS)

    Karppinen, Tomi; Lakkala, Kaisa; Karhu, Juha M.; Heikkinen, Pauli; Kivi, Rigel; Kyrö, Esko

    2016-06-01

    Brewer total ozone column measurements started in Sodankylä in May 1988, 9 months after the signing of the Montreal Protocol. The Brewer instrument has been well maintained and frequently calibrated since then to produce a high-quality ozone time series now spanning more than 25 years. The data have now been uniformly reprocessed between 1988 and 2014. The quality of the data has been assured by automatic data rejection rules as well as by manual checking. Daily mean values calculated from the highest-quality direct sun measurements are available 77 % of the time, with up to 75 measurements per day on clear days. Zenith sky measurements fill another 14 % of the time series, and the winter months are sparsely covered by moon measurements. The time series provides information for surveying the evolution of the Arctic ozone layer and can be used as a reference point for assessing other total ozone column measurement practices.

  16. First performance evaluation of software for automatic segmentation, labeling and reformation of anatomical aligned axial images of the thoracolumbar spine at CT.

    PubMed

    Scholtz, Jan-Erik; Wichmann, Julian L; Kaup, Moritz; Fischer, Sebastian; Kerl, J Matthias; Lehnert, Thomas; Vogl, Thomas J; Bauer, Ralf W

    2015-03-01

    To evaluate software for automatic segmentation, labeling and reformation of anatomically aligned axial images of the thoracolumbar spine on CT in terms of accuracy, potential for time savings and workflow improvement. 77 patients (28 women, 49 men, mean age 65.3±14.4 years) with known or suspected spinal disorders (degenerative spine disease n=32; disc herniation n=36; traumatic vertebral fractures n=9) underwent 64-slice MDCT with thin-slab reconstruction. The time for automatic labeling of the thoracolumbar spine and reconstruction of double-angulated axial images of the pathological vertebrae was compared with manually performed reconstruction of anatomically aligned axial images. Reformatted images from both reconstruction methods were assessed by two observers regarding the accuracy of symmetric depiction of anatomical structures. In 33 cases double-angulated axial images were created for 1 vertebra, in 28 cases for 2 vertebrae and in 16 cases for 3 vertebrae. Correct automatic labeling was achieved in 72 of 77 patients (93.5%). Errors could be manually corrected in 4 cases. Automatic labeling required 1 min on average. In cases where anatomically aligned axial images of 1 vertebra were created, reconstructions made by hand were significantly faster (p<0.05). Automatic reconstruction was time-saving in cases of 2 or more vertebrae (p<0.05). Both reconstruction methods revealed good image quality with excellent inter-observer agreement. The evaluated software for automatic labeling and anatomically aligned, double-angulated axial image reconstruction of the thoracolumbar spine on CT is time-saving when reconstructions of 2 or more vertebrae are performed. Checking the results of automatic labeling is necessary to prevent labeling errors.

  17. Toward high-throughput genotyping: dynamic and automatic software for manipulating large-scale genotype data using fluorescently labeled dinucleotide markers.

    PubMed

    Li, J L; Deng, H; Lai, D B; Xu, F; Chen, J; Gao, G; Recker, R R; Deng, H W

    2001-07-01

    To efficiently manipulate large amounts of genotype data generated with fluorescently labeled dinucleotide markers, we developed a Microsoft database management system. The system offers several advantages. First, it accommodates the dynamic nature of the accumulation of genotype data during the genotyping process; some data need to be confirmed or replaced by repeat lab procedures. Using the system, raw genotype data can be imported easily and continuously and incorporated into the database during a genotyping process that may continue over an extended period of time in large projects. Second, almost all of the procedures are automatic, including auto-comparison of the raw data read by different technicians from the same gel, auto-adjustment among the allele fragment-size data from cross-runs or cross-platforms, auto-binning of alleles, and auto-compilation of genotype data for suitable programs to perform inheritance checks in pedigrees. Third, it provides functions to track electrophoresis gel files to locate the gel or sample sources for any resulting genotype data, which is extremely helpful for double-checking the consistency of raw and final data and for directing repeat experiments. In addition, the user-friendly graphic interface renders the processing of large amounts of data much less labor-intensive. Furthermore, the system has built-in mechanisms to detect some genotyping errors and to assess the quality of genotype data, which are then summarized in automatically generated statistical reports. The system can easily handle >500,000 genotype data entries, a number more than sufficient for typical whole-genome linkage studies. The modules and programs we developed can be extended to other database platforms, such as Microsoft SQL Server, if the capability to handle still greater quantities of genotype data simultaneously is desired.

  18. 46 CFR 61.30-20 - Automatic control and safety tests.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 2 2013-10-01 2013-10-01 false Automatic control and safety tests. 61.30-20 Section 61.30-20 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC... and safety tests. Operational tests and checks of all safety and limit controls, combustion controls...

  19. 46 CFR 61.30-20 - Automatic control and safety tests.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 2 2014-10-01 2014-10-01 false Automatic control and safety tests. 61.30-20 Section 61.30-20 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC... and safety tests. Operational tests and checks of all safety and limit controls, combustion controls...

  20. 46 CFR 61.30-20 - Automatic control and safety tests.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 2 2011-10-01 2011-10-01 false Automatic control and safety tests. 61.30-20 Section 61.30-20 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC... and safety tests. Operational tests and checks of all safety and limit controls, combustion controls...

  1. Synthesizing Certified Code

    NASA Technical Reports Server (NTRS)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrating software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to simultaneously generate, from a high-level specification, both the code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
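
    To make the role of the annotations concrete, the sketch below shows generated-style code carrying a precondition, a loop invariant and a postcondition. Here they are ordinary runtime assertions for illustration; in the approach described above they would instead be turned into first-order proof obligations and discharged by a theorem prover.

```python
# Illustrative sketch only: annotations rendered as runtime assertions.
# A real certifier would emit them as proof obligations rather than execute them.
def mean(xs):
    # Precondition
    assert len(xs) > 0, "pre: non-empty input"
    total, i = 0.0, 0
    while i < len(xs):
        # Loop invariant: total equals the sum of the first i elements
        assert abs(total - sum(xs[:i])) < 1e-9, "inv: partial sum"
        total += xs[i]
        i += 1
    result = total / len(xs)
    # Postcondition
    assert min(xs) <= result <= max(xs), "post: mean within data range"
    return result

print(mean([3.0, 5.0, 10.0]))
```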

  2. 40 CFR 89.411 - Exhaust sample procedure-gaseous components.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... and the values recorded. The number of events that may occur between the pre- and post-analysis checks... drift nor the span drift between the pre-analysis and post-analysis checks on any range used may exceed... Emission Test Procedures § 89.411 Exhaust sample procedure—gaseous components. (a) Automatic data...

  3. 40 CFR 89.411 - Exhaust sample procedure-gaseous components.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... and the values recorded. The number of events that may occur between the pre- and post-analysis checks... drift nor the span drift between the pre-analysis and post-analysis checks on any range used may exceed... Emission Test Procedures § 89.411 Exhaust sample procedure—gaseous components. (a) Automatic data...

  4. 40 CFR 90.413 - Exhaust sample procedure-gaseous components.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the values recorded. The number of events that may occur between the pre- and post-checks is not.... (9) Neither the zero drift nor the span drift between the pre-analysis and post-analysis checks on... Gaseous Exhaust Test Procedures § 90.413 Exhaust sample procedure—gaseous components. (a) Automatic data...

  5. 40 CFR 90.413 - Exhaust sample procedure-gaseous components.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the values recorded. The number of events that may occur between the pre- and post-checks is not.... (9) Neither the zero drift nor the span drift between the pre-analysis and post-analysis checks on... Gaseous Exhaust Test Procedures § 90.413 Exhaust sample procedure—gaseous components. (a) Automatic data...

  6. 40 CFR 90.413 - Exhaust sample procedure-gaseous components.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the values recorded. The number of events that may occur between the pre- and post-checks is not.... (9) Neither the zero drift nor the span drift between the pre-analysis and post-analysis checks on... Gaseous Exhaust Test Procedures § 90.413 Exhaust sample procedure—gaseous components. (a) Automatic data...

  7. 40 CFR 90.413 - Exhaust sample procedure-gaseous components.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the values recorded. The number of events that may occur between the pre- and post-checks is not.... (9) Neither the zero drift nor the span drift between the pre-analysis and post-analysis checks on... Gaseous Exhaust Test Procedures § 90.413 Exhaust sample procedure—gaseous components. (a) Automatic data...

  8. 40 CFR 89.411 - Exhaust sample procedure-gaseous components.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... and the values recorded. The number of events that may occur between the pre- and post-analysis checks... drift nor the span drift between the pre-analysis and post-analysis checks on any range used may exceed... Emission Test Procedures § 89.411 Exhaust sample procedure—gaseous components. (a) Automatic data...

  9. 40 CFR 89.411 - Exhaust sample procedure-gaseous components.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... and the values recorded. The number of events that may occur between the pre- and post-analysis checks... drift nor the span drift between the pre-analysis and post-analysis checks on any range used may exceed... Emission Test Procedures § 89.411 Exhaust sample procedure—gaseous components. (a) Automatic data...

  10. 40 CFR 89.411 - Exhaust sample procedure-gaseous components.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... and the values recorded. The number of events that may occur between the pre- and post-analysis checks... drift nor the span drift between the pre-analysis and post-analysis checks on any range used may exceed... Emission Test Procedures § 89.411 Exhaust sample procedure—gaseous components. (a) Automatic data...

  11. 40 CFR 90.413 - Exhaust sample procedure-gaseous components.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the values recorded. The number of events that may occur between the pre- and post-checks is not.... (9) Neither the zero drift nor the span drift between the pre-analysis and post-analysis checks on... Gaseous Exhaust Test Procedures § 90.413 Exhaust sample procedure—gaseous components. (a) Automatic data...

  12. Rainfall, Streamflow, and Water-Quality Data During Stormwater Monitoring, Halawa Stream Drainage Basin, Oahu, Hawaii, July 1, 2005 to June 30, 2006

    USGS Publications Warehouse

    Presley, Todd K.; Jamison, Marcael T.J.; Young-Smith, Stacie T. M.

    2006-01-01

    Storm runoff water-quality samples were collected as part of the State of Hawaii Department of Transportation Stormwater Monitoring Program. This program is designed to assess the effects of highway runoff and urban runoff on Halawa Stream. For this program, rainfall data were collected at two stations, continuous discharge data at one station, continuous streamflow data at two stations, and water-quality data at five stations, which include the continuous discharge and streamflow stations. This report summarizes rainfall, discharge, streamflow, and water-quality data collected between July 1, 2005 and June 30, 2006. A total of 23 samples was collected over five storms during July 1, 2005 to June 30, 2006. The goal was to collect grab samples nearly simultaneously at all five stations, and flow-weighted time-composite samples at the three stations equipped with automatic samplers; however, all five storms were partially sampled owing to lack of flow at the time of sampling at some sites, or because some samples collected by the automatic sampler did not represent water from the storm. Samples were analyzed for total suspended solids, total dissolved solids, nutrients, chemical oxygen demand, and selected trace metals (cadmium, chromium, copper, lead, nickel, and zinc). Additionally, grab samples were analyzed for oil and grease, total petroleum hydrocarbons, fecal coliform, and biological oxygen demand. Quality-assurance/quality-control samples were also collected during storms and during routine maintenance to verify analytical procedures and check the effectiveness of equipment-cleaning procedures.

  13. Towards a Certified Lightweight Array Bound Checker for Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pichardie, David

    2009-01-01

    Dynamic array bound checks are crucial for the security of a Java Virtual Machine. These dynamic checks are, however, expensive, and several static analysis techniques have been proposed to eliminate explicit bounds checks. Such analyses require advanced numerical and symbolic manipulations that 1) penalize bytecode loading or dynamic compilation, and 2) enlarge the trusted computing base. Following the Foundational Proof Carrying Code methodology, our goal is to provide a lightweight bytecode verifier for eliminating array bound checks that is both efficient and trustworthy. In this work, we define a generic relational program analysis for an imperative, stack-oriented bytecode language with procedures, arrays and global variables, and instantiate it with polyhedra as the relational abstract domain. The analysis automatically infers loop invariants and method pre-/post-conditions, and its results can be checked efficiently by a simple checker. Invariants, which can be large, can be specialized for proving a safety policy using an automatic pruning technique that reduces their size. The result of the analysis can be checked efficiently by annotating the program with parts of the invariant together with certificates of polyhedral inclusion. The resulting checker is sufficiently simple to be entirely certified within the Coq proof assistant for a simple fragment of the Java bytecode language. During the talk, we will also report on our ongoing effort to scale this approach to the full sequential JVM.

  14. MilxXplore: a web-based system to explore large imaging datasets.

    PubMed

    Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J

    2013-01-01

    As large-scale medical imaging studies become more common, there is an increasing reliance on automated software to extract quantitative information from these images. As cohort sizes keep increasing, there is also a need for tools that allow the results of automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting of cases in which automatic processing failed or was problematic. MilxXplore is an open-source visualization platform which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user-friendly, collaborative and efficient way. Compared to existing software solutions that often provide an overview of the results at the subject level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time and comparison of the results against the rest of the population. MilxXplore is fast and flexible, allows remote quality checks of processed imaging data, facilitates data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important for sharing and publishing the results of imaging analyses.

  15. Building Automatic Grading Tools for Basic of Programming Lab in an Academic Institution

    NASA Astrophysics Data System (ADS)

    Harimurti, Rina; Iwan Nurhidayat, Andi; Asmunin

    2018-04-01

    The skill of computer programming is a core competency that must be mastered by students majoring in computer science. The best way to improve this skill is through the practice of writing many programs to solve various problems, from simple to complex. It takes hard work and a long time to check and evaluate the results of student labs one by one, especially if the number of students is large. Based on these constraints, we propose Automatic Grading Tools (AGT), an application that can evaluate and deeply check source code written in C and C++. The application architecture consists of students, a web-based application, compilers, and the operating system. Automatic Grading Tools (AGT) is implemented with an MVC architecture and uses open-source software, such as the Laravel framework version 5.4, PostgreSQL 9.6, Bootstrap 3.3.7, and the jQuery library. Automatic Grading Tools has also been tested on real problems by submitting source code in C/C++ and then compiling it. The test results show that the AGT application runs well.
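
    The grading loop at the heart of such a tool is straightforward: compile the submission, run it against test cases, and score the output. The sketch below illustrates that loop; the paths, time limit and test-case format are assumptions, not the AGT implementation, which is a Laravel/PostgreSQL web application.

```python
# Minimal sketch of an automatic grading step: compile a C/C++ submission
# and run it against expected-output test cases. Paths, time limits and the
# test-case format are illustrative assumptions, not the AGT implementation.
import os
import subprocess
import tempfile

def grade_submission(source_path, test_cases, time_limit_s=2):
    """test_cases: list of (stdin_text, expected_stdout_text)."""
    exe = os.path.join(tempfile.mkdtemp(), "submission")
    compile_run = subprocess.run(["g++", "-O2", "-o", exe, source_path],
                                 capture_output=True, text=True)
    if compile_run.returncode != 0:
        return {"status": "compile error", "detail": compile_run.stderr}
    passed = 0
    for stdin_text, expected in test_cases:
        try:
            run = subprocess.run([exe], input=stdin_text, capture_output=True,
                                 text=True, timeout=time_limit_s)
        except subprocess.TimeoutExpired:
            continue  # time-limit exceeded counts as a failed case
        if run.stdout.strip() == expected.strip():
            passed += 1
    return {"status": "graded", "score": 100.0 * passed / len(test_cases)}

# Hypothetical submission and test cases.
print(grade_submission("sum.cpp", [("1 2\n", "3\n"), ("10 -4\n", "6\n")]))
```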

  16. Truth of Varying Shades: Analyzing Language in Fake News and Political Fact-Checking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rashkin, Hannah J.; Choi, Eunsol; Jang, Jin Yea

    We present an analytic study of the language of news media in the context of political fact-checking and fake news detection. We compare the language of real news with that of satire, hoaxes, and propaganda to find linguistic cues for untruthful text. To probe the feasibility of automatic political fact-checking, we present a case study based on PolitiFact.com using its factuality judgments on a 6-point scale. Experimental results show that while media fact-checking remains an open research question, stylistic cues can help determine the truthfulness of text.

  17. Control of the TSU 2-m automatic telescope

    NASA Astrophysics Data System (ADS)

    Eaton, Joel A.; Williamson, Michael H.

    2004-09-01

    Tennessee State University is operating a 2-m automatic telescope for high-dispersion spectroscopy. The alt-azimuth telescope is fiber-coupled to a conventional echelle spectrograph with two resolutions (R=30,000 and 70,000). We control this instrument with four computers running Linux and communicating over Ethernet through the UDP protocol. A computer physically located on the telescope handles the acquisition and tracking of stars. We avoid the need for real-time programming in this application by periodically latching the positions of the axes in a commercial motion controller and the time in a GPS receiver. A second (spectrograph) computer sets up the spectrograph and runs its CCD, a third (roof) computer controls the roll-off roof and front flap of the telescope enclosure, and the fourth (executive) computer makes decisions about which stars to observe and when to close the observatory for bad weather. The only human intervention in the telescope's operation involves changing the observing program, copying data back to TSU, and running quality-control checks on the data. It has been running reliably in this completely automatic, unattended mode for more than a year, with all day-to-day administration carried out over the Internet. To support automatic operation, we have written a number of useful tools to predict and analyze what the telescope does. These include a simulator that predicts roughly how the telescope will operate on a given night, a quality-control program to parse logfiles from the telescope and identify problems, and a rescheduling program that calculates new priorities to keep the frequency of observation for the various stars roughly as desired. We have also set up a database to keep track of the tens of thousands of spectra we expect to obtain each year.

  18. Grammar-Supported 3d Indoor Reconstruction from Point Clouds for As-Built Bim

    NASA Astrophysics Data System (ADS)

    Becker, S.; Peter, M.; Fritsch, D.

    2015-03-01

    The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar which is an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms, as is the case for most office buildings or public buildings like schools, hospitals or hotels. The grammar is designed in such a way that it can be embedded in an iterative automatic learning process, providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data - here 3D point clouds - collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. In this way, the knowledge content of the initial grammar is enriched, leading to a grammar of increased quality. This higher-level grammar can then be applied to predict realistic geometries for building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated on a real-world example.

  19. Ensuring the Quality of Data Packages in the LTER Network Provenance Aware Synthesis Tracking Architecture Data Management System and Archive

    NASA Astrophysics Data System (ADS)

    Servilla, M. S.; O'Brien, M.; Costa, D.

    2013-12-01

    Considerable ecological research performed today relies on the analysis of data downloaded from various repositories and archives, often resulting in derived or synthetic products generated by automated workflows. These data are only meaningful for research if they are well documented by metadata; otherwise, semantic or data type errors may occur in interpretation or processing. The Long Term Ecological Research (LTER) Network now screens all data packages entering its long-term archive to ensure that each package contains metadata that is complete, of high quality, and accurately describes the structure of its associated data entity, and that the data are structurally congruent with the metadata. Screening occurs prior to the upload of a data package into the Provenance Aware Synthesis Tracking Architecture (PASTA) data management system through a series of quality checks, thus preventing ambiguously or incorrectly documented data packages from entering the system. The quality checks within PASTA are designed to work specifically with the Ecological Metadata Language (EML), the metadata standard adopted by the LTER Network to describe data generated by their 26 research sites. Each quality check is codified in Java as part of the ecological community-supported Data Manager Library, which is a resource of the EML specification and is used as a component of the PASTA software stack. Quality checks test for metadata quality, data integrity, or metadata-data congruence, and are further classified as either conditional or informational. Conditional checks issue a 'valid', 'warning' or 'error' response; only an 'error' response blocks the data package from upload into PASTA. Informational checks only provide descriptive content pertaining to a particular facet of the data package. Quality checks are designed by a group of LTER information managers and reviewed by the LTER community before being deployed into PASTA. A total of 32 quality checks have been deployed to date. Quality checks can be customized through a configurable template, which includes turning checks 'on' or 'off' and setting the severity of conditional checks. This feature is important to other potential users of the Data Manager Library who wish to configure its quality checks in accordance with the standards of their community. Executing the complete set of quality checks produces a report that describes the result of each check. The report is an XML document that is stored by PASTA for future reference.
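
    A compact way to picture the conditional/informational classification described above is the sketch below, where only an 'error' from a conditional check blocks the package. The check names, package fields and report layout are illustrative assumptions, not the Data Manager Library's actual API (which is implemented in Java).

```python
# Minimal sketch of the check-classification logic: each quality check is
# either conditional (valid / warning / error) or informational, and only
# an 'error' blocks the data package from upload. Check names and fields
# are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    kind: str                   # "conditional" or "informational"
    run: Callable[[dict], str]  # returns "valid"/"warning"/"error" or info text

CHECKS = [
    Check("packageId present", "conditional",
          lambda pkg: "valid" if pkg.get("packageId") else "error"),
    Check("column count matches metadata", "conditional",
          lambda pkg: "valid" if pkg["data_columns"] == pkg["meta_columns"] else "error"),
    Check("temporal coverage stated", "conditional",
          lambda pkg: "valid" if pkg.get("coverage") else "warning"),
    Check("entity size", "informational",
          lambda pkg: f"{pkg.get('size_bytes', 0)} bytes"),
]

def screen(pkg: dict):
    report = [(c.name, c.kind, c.run(pkg)) for c in CHECKS]
    blocked = any(kind == "conditional" and result == "error"
                  for _, kind, result in report)
    return ("rejected" if blocked else "accepted"), report

status, report = screen({"packageId": "knb-lter-xyz.1.1", "data_columns": 12,
                         "meta_columns": 12, "size_bytes": 80512})
print(status)
for line in report:
    print(*line)
```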

  20. Automatic airline baggage counting using 3D image segmentation

    NASA Astrophysics Data System (ADS)

    Yin, Deyu; Gao, Qingji; Luo, Qijun

    2017-06-01

    The baggage number needs to be checked automatically during baggage self-check-in. A fast airline baggage counting method is proposed in this paper using image segmentation of a height map projected from the scanned baggage 3D point cloud. There is a height drop at the actual edge of each bag, so it can be detected by an edge detection operator. Closed edge chains are then formed from the edge lines, which are linked by morphological processing. Finally, the number of connected regions segmented by the closed chains is taken as the baggage number. A multi-bag experiment performed with different placement modes demonstrates the validity of the method.
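
    Read as an image-processing recipe, the abstract maps onto a short pipeline. The sketch below follows it with OpenCV; the thresholds, kernel size and minimum-area filter are illustrative assumptions, and the input file name is hypothetical.

```python
# Minimal sketch of the counting pipeline: detect height drops on the
# projected height map with an edge operator, link the edge lines into
# closed chains with morphological closing, and count the enclosed regions.
# Thresholds, kernel size and the minimum-area filter are assumptions.
import cv2

def count_bags(height_map):
    # Height drops at bag edges show up as strong gradients.
    edges = cv2.Canny(cv2.GaussianBlur(height_map, (5, 5), 0), 30, 90)
    # Link broken edge lines into closed chains.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    # Each closed chain encloses one bag; count outer contours of sufficient area.
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) > 2000)

# Assumed input: an 8-bit height map projected from the scanned point cloud.
height_map = cv2.imread("belt_height_map.png", cv2.IMREAD_GRAYSCALE)
print("bags detected:", count_bags(height_map))
```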

  1. Study on Classification Accuracy Inspection of Land Cover Data Aided by Automatic Image Change Detection Technology

    NASA Astrophysics Data System (ADS)

    Xie, W.-J.; Zhang, L.; Chen, H.-P.; Zhou, J.; Mao, W.-J.

    2018-04-01

    The purpose of carrying out national geographic conditions monitoring is to obtain information on surface changes caused by human social and economic activities, so that the geographic information can be used to offer better services to government, enterprise and the public. Land cover data contains detailed geographic conditions information and has therefore been listed as one of the important achievements of the national geographic conditions monitoring project. At present, the main issue in the production of land cover data is how to improve the classification accuracy. For land cover data quality inspection and acceptance, classification accuracy is also an important check point. So far, classification accuracy inspection in the project has mainly been based on human-computer interaction or manual inspection, which is time-consuming and laborious. By harnessing automatic high-resolution remote sensing image change detection based on the ERDAS IMAGINE platform, this paper carried out a classification accuracy inspection test of land cover data in the project and presents a corresponding technical route, which includes data pre-processing, change detection, result output and information extraction. The result of the quality inspection test shows the effectiveness of the technical route: it can meet the inspection needs for the two typical errors (missing updates and incorrect updates), effectively reduces the workload of interactive inspection for quality inspectors, and provides a technical reference for the production and quality control of land cover data.

  2. Mitigating energy loss on distribution lines through the allocation of reactors

    NASA Astrophysics Data System (ADS)

    Miranda, T. M.; Romero, F.; Meffe, A.; Castilho Neto, J.; Abe, L. F. T.; Corradi, F. E.

    2018-03-01

    This paper presents a methodology for the automatic allocation of reactors on medium-voltage distribution lines to reduce energy loss. In Brazil, some feeders are distinguished by their long lengths and very low load, which results in a high influence of the line capacitance on the circuit's performance, requiring compensation through the installation of reactors. The automatic allocation is accomplished using an optimization meta-heuristic called the Global Neighbourhood Algorithm. Given a set of reactor models and a circuit, it outputs an optimal solution in terms of reduction of energy loss. The algorithm also verifies that the voltage limits determined by the user are not violated and checks energy quality. The methodology was implemented in a software tool, which can also show the allocation graphically. A simulation with four real feeders is presented in the paper. The results reduced energy loss significantly, ranging from 50.56% in the worst case to 93.10% in the best case.

  3. Network design and quality checks in automatic orientation of close-range photogrammetric blocks.

    PubMed

    Dall'Asta, Elisa; Thoeni, Klaus; Santise, Marina; Forlani, Gianfranco; Giacomini, Anna; Roncella, Riccardo

    2015-04-03

    Due to the recent improvements of automatic measurement procedures in photogrammetry, multi-view 3D reconstruction technologies are becoming a favourite survey tool. Rapidly widening structure-from-motion (SfM) software packages offer significantly easier image processing workflows than traditional photogrammetry packages. However, while most orientation and surface reconstruction strategies will almost always succeed in any given task, estimating the quality of the result is, to some extent, still an open issue. An assessment of the precision and reliability of block orientation is necessary and should be included in every processing pipeline. Such a need was clearly felt from the results of close-range photogrammetric surveys of in situ full-scale and laboratory-scale experiments. In order to study the impact of the block control and the camera network design on the block orientation accuracy, a series of Monte Carlo simulations was performed. Two image block configurations were investigated: a single pseudo-normal strip and a circular highly-convergent block. The influence of surveying and data processing choices, such as the number and accuracy of the ground control points, autofocus and camera calibration was investigated. The research highlights the most significant aspects and processes to be taken into account for adequate in situ and laboratory surveys, when modern SfM software packages are used, and evaluates their effect on the quality of the results of the surface reconstruction.

  4. Vital Recorder-a free research tool for automatic recording of high-resolution time-synchronised physiological data from multiple anaesthesia devices.

    PubMed

    Lee, Hyung-Chul; Jung, Chul-Woo

    2018-01-24

    The current anaesthesia information management system (AIMS) has limited capability for the acquisition of high-quality vital signs data. We have developed the Vital Recorder program to overcome the disadvantages of AIMS and to support research. Physiological data of surgical patients were collected from 10 operating rooms using the Vital Recorder. The basic equipment used were a patient monitor, the anaesthesia machine, and the bispectral index (BIS) monitor. Infusion pumps, cardiac output monitors, a regional oximeter, and a rapid infusion device were added as required. The automatic recording option was used exclusively, and the status of recording was frequently checked through web monitoring. Automatic recording was successful in 98.5% (4,272/4,335) of cases during eight months of operation. The total recorded time was 13,489 h (3.2 ± 1.9 h/case). The Vital Recorder's automatic recording and remote monitoring capabilities enabled us to record physiological big data with minimal effort. The Vital Recorder also provided time-synchronised data captured from a variety of devices to facilitate an integrated analysis of vital signs data. The free distribution of the Vital Recorder is expected to improve data access for researchers attempting physiological data studies and to eliminate inequalities in research opportunities due to differences in data collection capabilities.

  5. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Treesearch

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full size hardwood lumber at industrial speeds for automatically detecting features such as knots holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  6. A New "Moodle" Module Supporting Automatic Verification of VHDL-Based Assignments

    ERIC Educational Resources Information Center

    Gutierrez, Eladio; Trenas, Maria A.; Ramos, Julian; Corbera, Francisco; Romero, Sergio

    2010-01-01

    This work describes a new "Moodle" module developed to give support to the practical content of a basic computer organization course. This module goes beyond the mere hosting of resources and assignments. It makes use of an automatic checking and verification engine that works on the VHDL designs submitted by the students. The module automatically…

  7. AUTOMOTIVE DIESEL MAINTENANCE 2. UNIT X, AUTOMATIC TRANSMISSIONS--HYDRAULIC SYSTEMS (PART II).

    ERIC Educational Resources Information Center

    Human Engineering Inst., Cleveland, OH.

    THIS MODULE OF A 25-MODULE COURSE IS DESIGNED TO PROVIDE A SUMMARY OF MAINTENANCE PROCEDURES FOR AUTOMATIC TRANSMISSIONS USED ON DIESEL POWERED VEHICLES. TOPICS ARE (1) CHECKING THE HYDRAULIC SYSTEM, (2) SERVICING THE HYDRAULIC SYSTEM, (3) EXAMINING THE RANGE CONTROL VALVE, (4) EXAMINING THE LOCK-UP AND FLOW VALVE, (5) EXAMINING THE MAIN REGULATOR…

  8. A study of the use of abstract types for the representation of engineering units in integration and test applications

    NASA Technical Reports Server (NTRS)

    Johnson, Charles S.

    1986-01-01

    Physical quantities using various units of measurement can be well represented in Ada by the use of abstract types. Computation involving these quantities (electric potential, mass, volume) can also automatically invoke the computation and checking of some of the implicitly associable attributes of measurements. Quantities can be held internally in SI units, transparently to the user, with automatic conversion. Through dimensional analysis, the type of the derived quantity resulting from a computation is known, thereby allowing dynamic checks of the equations used. The impact of the possible implementation of these techniques in integration and test applications is discussed. The overhead of computing and transporting measurement attributes is weighed against the advantages gained by their use. The construction of a run time interpreter using physical quantities in equations can be aided by the dynamic equation checks provided by dimensional analysis. The effects of high levels of abstraction on the generation and maintenance of software used in integration and test applications are also discussed.
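
    The original work used Ada abstract types; the following is only a hedged Python analogue of the same idea, carrying SI dimension exponents with each value so that incompatible additions are caught dynamically and the type of a derived quantity follows from dimensional analysis.

        class Quantity:
            # dims = exponents of (length, mass, time, current); values held in SI units
            def __init__(self, value, dims):
                self.value, self.dims = value, tuple(dims)

            def __add__(self, other):
                if self.dims != other.dims:               # dynamic dimension check
                    raise TypeError(f"incompatible units: {self.dims} vs {other.dims}")
                return Quantity(self.value + other.value, self.dims)

            def __mul__(self, other):
                dims = tuple(a + b for a, b in zip(self.dims, other.dims))
                return Quantity(self.value * other.value, dims)   # derived quantity

        metre = Quantity(1.0, (1, 0, 0, 0))
        per_second = Quantity(1.0, (0, 0, -1, 0))
        speed = metre * per_second                        # dims = (1, 0, -1, 0)
        # metre + per_second would raise TypeError, catching the unit error at run time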

  9. Evaluation of the efficiency and fault density of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs made through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking. Some check only the finished product, while some allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, a comparison with in-house, manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is warranted.

  10. Formalization and analysis of reasoning by assumption.

    PubMed

    Bosse, Tibor; Jonker, Catholijn M; Treur, Jan

    2006-01-02

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically analyzed against dynamic properties they fulfill. To this end, for the pattern of reasoning by assumption a variety of dynamic properties have been specified, some of which are considered characteristic for the reasoning pattern, whereas some other properties can be used to discriminate among different approaches to the reasoning. These properties have been automatically checked for the traces acquired in experiments undertaken. The approach turned out to be beneficial from two perspectives. First, checking characteristic properties contributes to the empirical validation of a theory on reasoning by assumption. Second, checking discriminating properties allows the analyst to identify different classes of human reasoners. 2006 Lawrence Erlbaum Associates, Inc.

  11. Functional requirements regarding medical registries--preliminary results.

    PubMed

    Oberbichler, Stefan; Hörbst, Alexander

    2013-01-01

    The term medical registry is used to reference tools and processes that support clinical or epidemiologic research or provide a data basis for decisions regarding health care policies. In spite of this wide range of applications, the term registry and the functional requirements that a registry should support are not clearly defined. This work presents preliminary results of a literature review to discover the functional requirements that constitute a registry. To extract these requirements, a set of peer-reviewed articles was collected. This set of articles was screened using methods from qualitative research. Up to now, most discovered functional requirements focus on data quality (e.g., preventing transcription errors by conducting automatic domain checks).
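
    A minimal sketch of such an automatic domain check is given below; the field names and value ranges are hypothetical examples, not requirements from the review.

        # Range-based domain checks for registry records (illustrative only).
        DOMAINS = {
            "systolic_bp": (60, 260),    # mmHg
            "heart_rate": (20, 250),     # beats per minute
        }

        def domain_check(record):
            """Return the names of fields whose values fall outside their domain."""
            errors = []
            for field, (lo, hi) in DOMAINS.items():
                value = record.get(field)
                if value is None or not (lo <= value <= hi):
                    errors.append(field)
            return errors

        print(domain_check({"systolic_bp": 420, "heart_rate": 72}))   # ['systolic_bp']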

  12. Machine vision method for online surface inspection of easy open can ends

    NASA Astrophysics Data System (ADS)

    Mariño, Perfecto; Pastoriza, Vicente; Santamaría, Miguel

    2006-10-01

    The easy-open can end manufacturing process in the food canning sector currently relies on a manual, non-destructive testing procedure to guarantee can end repair coating quality. This surface inspection is a visual inspection made by human inspectors. Due to the high production rate (100 to 500 ends per minute), only a small part of each lot is verified (statistical sampling); an automatic, online inspection system based on machine vision has therefore been developed to improve this quality control. The inspection system uses a fuzzy model to make the acceptance/rejection decision for each can end from the information obtained by the vision sensor. In this work, the inspection method is presented. This surface inspection system checks the total production, classifies the ends in agreement with an expert human inspector, provides interpretability to the operators so they can identify failure causes and reduce mean time to repair during failures, and allows the minimum can end repair coating quality to be modified.

  13. Automatic monitoring of vibration welding equipment

    DOEpatents

    Spicer, John Patrick; Chakraborty, Debejyo; Wincek, Michael Anthony; Wang, Hui; Abell, Jeffrey A; Bracey, Jennifer; Cai, Wayne W

    2014-10-14

    A vibration welding system includes vibration welding equipment having a welding horn and anvil, a host device, a check station, and a robot. The robot moves the horn and anvil via an arm to the check station. Sensors, e.g., temperature sensors, are positioned with respect to the welding equipment. Additional sensors are positioned with respect to the check station, including a pressure-sensitive array. The host device, which monitors a condition of the welding equipment, measures signals via the sensors positioned with respect to the welding equipment when the horn is actively forming a weld. The robot moves the horn and anvil to the check station, activates the check station sensors at the check station, and determines a condition of the welding equipment by processing the received signals. Acoustic, force, temperature, displacement, amplitude, and/or attitude/gyroscopic sensors may be used.

  14. ATLAS offline data quality monitoring

    NASA Astrophysics Data System (ADS)

    Adelman, J.; Baak, M.; Boelaert, N.; D'Onofrio, M.; Frost, J. A.; Guyot, C.; Hauschild, M.; Hoecker, A.; Leney, K. J. C.; Lytken, E.; Martinez-Perez, M.; Masik, J.; Nairz, A. M.; Onyisi, P. U. E.; Roe, S.; Schaetzel, S.; Wilson, M. G.

    2010-04-01

    The ATLAS experiment at the Large Hadron Collider reads out 100 Million electronic channels at a rate of 200 Hz. Before the data are shipped to storage and analysis centres across the world, they have to be checked to be free from irregularities which render them scientifically useless. Data quality offline monitoring provides prompt feedback from full first-pass event reconstruction at the Tier-0 computing centre and can unveil problems in the detector hardware and in the data processing chain. Detector information and reconstructed proton-proton collision event characteristics are distilled into a few key histograms and numbers which are automatically compared with a reference. The results of the comparisons are saved as status flags in a database and are published together with the histograms on a web server. They are inspected by a 24/7 shift crew who can notify on-call experts in case of problems and in extreme cases signal data taking abort.
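
    Not ATLAS code, but the following sketch illustrates the step described above: a monitored histogram is compared with a reference and the result is reduced to a status flag. The bin contents and thresholds are invented for the example.

        import numpy as np

        def dq_flag(observed, reference, warn=2.0, error=5.0):
            """Chi-square-per-bin comparison; returns 'green', 'yellow' or 'red'."""
            observed = np.asarray(observed, float)
            reference = np.asarray(reference, float)
            expected = reference * observed.sum() / reference.sum()   # normalise reference
            mask = expected > 0
            chi2_per_bin = np.sum((observed[mask] - expected[mask]) ** 2
                                  / expected[mask]) / mask.sum()
            if chi2_per_bin < warn:
                return "green"
            return "yellow" if chi2_per_bin < error else "red"

        print(dq_flag([98, 205, 310, 190], [100, 200, 300, 200]))     # 'green'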

  15. pySeismicDQA: open source post experiment data quality assessment and processing

    NASA Astrophysics Data System (ADS)

    Polkowski, Marcin

    2017-04-01

    pySeismicDQA (Seismic Data Quality Assessment) is a Python-based, open-source set of tools dedicated to data processing after passive seismic experiments. The primary goal of this toolset is the unification of data types and formats from different dataloggers, which is necessary for further processing. This process requires additional data checks for errors, equipment malfunction, data format errors, abnormal noise levels, etc. In all such cases the user needs to decide (manually or by an automatic threshold) whether the data are removed from the output dataset. Additionally, the output dataset can be visualized in the form of a website with data availability charts and waveform visualization linked to an external earthquake catalog. Data processing can be extended with simple STA/LTA event detection. pySeismicDQA is designed and tested for two passive seismic experiments in central Europe: PASSEQ 2006-2008 and "13 BB Star" (2013-2016). National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
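
    A hedged sketch of the STA/LTA detection mentioned above is shown below; pySeismicDQA itself is not reproduced here, and the window lengths and trigger threshold are arbitrary examples.

        import numpy as np

        def sta_lta(trace, sta_len=50, lta_len=500):
            """Return the STA/LTA ratio of a 1-D trace (window lengths in samples)."""
            energy = np.asarray(trace, float) ** 2
            csum = np.concatenate([[0.0], np.cumsum(energy)])
            sta = (csum[sta_len:] - csum[:-sta_len]) / sta_len
            lta = (csum[lta_len:] - csum[:-lta_len]) / lta_len
            n = min(len(sta), len(lta))
            # Align both averages to the ends of their windows; avoid division by zero.
            return sta[-n:] / np.maximum(lta[-n:], 1e-12)

        # candidate_onsets = np.where(sta_lta(data) > 4.0)[0]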

  16. MilxXplore: a web-based system to explore large imaging datasets

    PubMed Central

    Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J

    2013-01-01

    Objective As large-scale medical imaging studies are becoming more common, there is an increasing reliance on automated software to extract quantitative information from these images. As the size of the cohorts keeps increasing with large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. Materials and methods MilxXplore is an open source visualization platform, which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user friendly, collaborative and efficient way. Discussion Compared to existing software solutions that often provide an overview of the results at the subject's level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time, and comparison of the results against the rest of the population. Conclusions MilxXplore is fast, flexible and allows remote quality checks of processed imaging data, facilitating data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important to share and publish results of imaging analysis. PMID:23775173

  17. SU-G-BRB-04: Automated Output Factor Measurements Using Continuous Data Logging for Linac Commissioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, X; Li, S; Zheng, D

    Purpose: Linac commissioning is a time-consuming and labor-intensive process, the streamlining of which is highly desirable. In particular, manual measurement of output factors for a variety of field sizes and energies greatly hinders commissioning efficiency. In this study, automated measurement of output factors was demonstrated as ‘one-click’ using data logging of an electrometer. Methods: Beams to be measured were created in the recording and verifying (R&V) system and configured for continuous delivery. An electrometer with an automatic data logging feature enabled continuous data collection for all fields without human intervention. The electrometer saved data into a spreadsheet every 0.5 seconds. A Matlab program was developed to analyze the spreadsheet data to monitor and check the data quality. Results: For each photon energy, output factors were measured for five configurations, including open field and four wedges. Each configuration includes 72 field sizes, ranging from 4×4 to 20×30 cm². Using automation, it took 50 minutes to complete the measurement of 72 field sizes, in contrast to 80 minutes when using the manual approach. The automation avoided the necessity of redundant Linac status checks between fields as in the manual approach. In fact, the only limiting factor in such automation is Linac overheating. The data collection beams in the R&V system are reusable, and the simplified process is less error-prone. In addition, our Matlab program extracted the output factors faithfully from data logging, and the discrepancy between the automatic and manual measurement is within ±0.3%. For two separate automated measurements 30 days apart, a consistency check shows a discrepancy within ±1% for 6MV photons with a 60 degree wedge. Conclusion: Automated output factor measurements can save time by 40% when compared with the conventional manual approach. This work laid ground for further improvement of the automation of Linac commissioning.
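
    The sketch below illustrates, under stated assumptions, how output factors can be pulled out of such a continuous log: the logged signal is treated as a dose-rate-like reading sampled every 0.5 s, beam-on segments are found by a simple threshold, each segment is integrated, and all readings are normalised to the first segment (assumed to be the reference field). This is not the authors' Matlab program.

        import numpy as np

        def output_factors(log, threshold=0.05, dt=0.5):
            """Split a logged signal into beam-on segments; normalise to the first."""
            log = np.asarray(log, float)
            on = log > threshold                               # beam-on samples
            edges = np.flatnonzero(np.diff(on.astype(int)))    # on/off transitions
            bounds = np.concatenate([[0], edges + 1, [len(log)]])
            readings = [log[a:b].sum() * dt
                        for a, b in zip(bounds[:-1], bounds[1:]) if on[a]]
            return [r / readings[0] for r in readings]         # relative to reference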

  18. SU-F-T-458: Tracking Trends of TG-142 Parameters Via Analysis of Data Recorded by 2D Chamber Array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandrian, A; Kabat, C; Defoor, D

    Purpose: With increasing QA demands on medical physicists in clinical radiation oncology, the need for an effective method of tracking clinical data has become paramount. A tool was produced which scans through data automatically recorded by a 2D chamber array and extracts relevant information recommended by TG-142. Using this extracted information, a timely and comprehensive analysis of QA parameters can easily be performed, enabling efficient monthly checks on multiple linear accelerators simultaneously. Methods: A PTW STARCHECK chamber array was used to record several months of beam outputs from two Varian 2100 series linear accelerators and a Varian Novalis Tx. In conjunction with the chamber array, a beam quality phantom was used simultaneously to determine beam quality. A minimalist GUI was created in MATLAB that allows a user to set the file path of the data for each modality to be analyzed. These file paths are recorded to a MATLAB structure and then subsequently accessed by a script written in Python (version 3.5.1) which extracts the values required to perform monthly checks as outlined by recommendations from TG-142. The script incorporates calculations to determine if the values recorded by the chamber array fall within an acceptable threshold. Results: Values obtained by the script are written to a spreadsheet where results can be easily viewed, annotated with a “pass” or “fail”, and saved for further analysis. In addition to creating a new scheme for reviewing monthly checks, this application allows data to be stored succinctly for follow-up analysis. Conclusion: By utilizing this tool, parameters recommended by TG-142 for multiple linear accelerators can be rapidly obtained and analyzed for the evaluation of monthly checks.
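
    A minimal sketch of the pass/fail annotation step is given below; the parameter names and tolerance values are illustrative placeholders, not the values recommended by TG-142.

        TOLERANCES = {
            "output_constancy_pct": 2.0,     # example tolerance, in percent
            "flatness_pct": 2.0,
            "symmetry_pct": 2.0,
        }

        def annotate(measurements):
            """measurements: parameter -> deviation from baseline (%); returns pass/fail."""
            return {name: ("pass" if abs(value) <= TOLERANCES.get(name, float("inf"))
                           else "fail")
                    for name, value in measurements.items()}

        print(annotate({"output_constancy_pct": 1.2, "symmetry_pct": 2.7}))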

  19. 7 CFR 58.243 - Checking quality.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    § 58.243 Checking quality. All milk, milk products and dry milk products shall be subject to inspection and analysis by the dairy plant for quality and condition throughout each processing operation.

  20. Threshold-based segmentation of fluorescent and chromogenic images of microglia, astrocytes and oligodendrocytes in FIJI.

    PubMed

    Healy, Sinead; McMahon, Jill; Owens, Peter; Dockery, Peter; FitzGerald, Una

    2018-02-01

    Image segmentation is often imperfect, particularly in complex image sets such as z-stack micrographs of slice cultures, and there is a need for sufficient detail on the parameters used in quantitative image analysis to allow independent repeatability and appraisal. For the first time, we have critically evaluated, quantified and validated the performance of different segmentation methodologies using z-stack images of ex vivo glial cells. The BioVoxxel toolbox plugin, available in FIJI, was used to measure the relative quality, accuracy, specificity and sensitivity of 16 global and 9 local automatic thresholding algorithms. Automatic thresholding yields improved binary representation of glial cells compared with the conventional user-chosen single-threshold approach for confocal z-stacks acquired from ex vivo slice cultures. The performance of threshold algorithms varies considerably in quality, specificity, accuracy and sensitivity, with entropy-based thresholds scoring highest for fluorescent staining. We have used the BioVoxxel toolbox to correctly and consistently select the best automated threshold algorithm to segment z-projected images of ex vivo glial cells for downstream digital image analysis and to define segmentation quality. The automated OLIG2 cell count was validated using stereology. As image segmentation and feature extraction can quite critically affect the performance of successive steps in the image analysis workflow, it is becoming increasingly necessary to consider the quality of digital segmentation methodologies. Here, we have applied, validated and extended an existing performance-check methodology in the BioVoxxel toolbox to z-projected images of ex vivo glial cells. Copyright © 2017 Elsevier B.V. All rights reserved.
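
    The paper's evaluation was done with the BioVoxxel toolbox in FIJI; the sketch below only illustrates the underlying idea of benchmarking global automatic thresholds against a manual ground-truth mask, using scikit-image for convenience. The image, mask and choice of algorithms are placeholders.

        import numpy as np
        from skimage.filters import threshold_otsu, threshold_li, threshold_yen

        def benchmark(image, ground_truth):
            """Sensitivity/specificity of several global thresholds.

            image: 2-D grayscale array; ground_truth: boolean mask of true foreground.
            """
            results = {}
            for name, method in [("otsu", threshold_otsu), ("li", threshold_li),
                                 ("yen", threshold_yen)]:
                mask = image > method(image)
                tp = np.sum(mask & ground_truth)
                tn = np.sum(~mask & ~ground_truth)
                fp = np.sum(mask & ~ground_truth)
                fn = np.sum(~mask & ground_truth)
                results[name] = {"sensitivity": tp / (tp + fn),
                                 "specificity": tn / (tn + fp)}
            return results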

  1. Industrial application of low voltage bidirectional automatic release of reserve

    NASA Astrophysics Data System (ADS)

    Popa, G. N.; Diniş, C. M.; Iagăr, A.; Deaconu, S. I.; Popa, I.

    2018-01-01

    The paper presents an analysis of a low-voltage industrial electrical installation controlled by a bidirectional automatic release of reserve. The industrial electrical installation removes smoke in case of fire at a textile company. The main parts of the smoke-removal installation are: a general electrical panel; a reserve electrical panel; three-phase induction motors driving fans; electrical actuators for the inlet and outlet valves; a clean-air inlet pipe; and an outlet pipe for smoke. The operation and checking of the bidirectional automatic release of reserve are presented in the paper.

  2. Automatic system for ionization chamber current measurements.

    PubMed

    Brancaccio, Franco; Dias, Mauro S; Koskinas, Marina F

    2004-12-01

    The present work describes an automatic system developed for current integration measurements at the Laboratório de Metrologia Nuclear of Instituto de Pesquisas Energéticas e Nucleares. This system includes software (graphical user interface and control) and a module connected to a microcomputer by means of a commercial data acquisition card. Measurements were performed to check the performance and to validate the proposed design.

  3. Automated detection of records in biological sequence databases that are inconsistent with the literature.

    PubMed

    Bouadjenek, Mohamed Reda; Verspoor, Karin; Zobel, Justin

    2017-07-01

    We investigate and analyse the data quality of nucleotide sequence databases with the objective of automatic detection of data anomalies and suspicious records. Specifically, we demonstrate that the published literature associated with each data record can be used to automatically evaluate its quality, by cross-checking the consistency of the key content of the database record with the referenced publications. Focusing on GenBank, we describe a set of quality indicators based on the relevance paradigm of information retrieval (IR). Then, we use these quality indicators to train an anomaly detection algorithm to classify records as "confident" or "suspicious". Our experiments on the PubMed Central collection show that assessing the coherence between the literature and database records through our algorithms is an effective mechanism for assisting curators in data cleansing. Although fewer than 0.25% of the records in our data set are known to be faulty, we would expect that there are many more in GenBank that have not yet been identified. By automated comparison with the literature they can be identified with a precision of up to 10% and a recall of up to 30%, while strongly outperforming several baselines. While these results leave substantial room for improvement, they reflect both the very imbalanced nature of the data and the limited explicitly labelled data that is available. Overall, the obtained results show promise for the development of a new kind of approach to detecting low-quality and suspicious sequence records based on literature analysis and consistency. From a practical point of view, this will greatly help curators in identifying inconsistent records in large-scale sequence databases by highlighting records that are likely to be inconsistent with the literature. Copyright © 2017 Elsevier Inc. All rights reserved.
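
    For illustration only, the sketch below trains an off-the-shelf anomaly detector on a tiny matrix of literature-consistency indicators; the feature names, the toy values and the choice of IsolationForest are assumptions rather than the authors' exact pipeline.

        import numpy as np
        from sklearn.ensemble import IsolationForest

        # Each row: IR-style quality indicators for one record, e.g.
        # [title similarity, abstract similarity, organism-name match score].
        X = np.array([[0.91, 0.80, 1.0],
                      [0.88, 0.75, 1.0],
                      [0.93, 0.82, 1.0],
                      [0.10, 0.05, 0.0]])       # an obviously inconsistent record

        detector = IsolationForest(contamination=0.25, random_state=0).fit(X)
        labels = detector.predict(X)            # +1 = "confident", -1 = "suspicious"
        print(labels)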

  4. Model Checking Abstract PLEXIL Programs with SMART

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.

    2007-01-01

    We describe a method to automatically generate discrete-state models of abstract Plan Execution Interchange Language (PLEXIL) programs that can be analyzed using model checking tools. Starting from a high-level description of a PLEXIL program or a family of programs with common characteristics, the generator lays the framework that models the principles of program execution. The concrete parts of the program are not automatically generated, but require the modeler to introduce them by hand. As a case study, we generate models to verify properties of the PLEXIL macro constructs that are introduced as shorthand notation. After an exhaustive analysis, we conclude that the macro definitions obey the intended semantics and behave as expected, but contingent on a few specific requirements on the timing semantics of micro-steps in the concrete executive implementation.

  5. Patients with schizophrenia do not preserve automatic grouping when mentally re-grouping figures: shedding light on an ignored difficulty.

    PubMed

    Giersch, Anne; van Assche, Mitsouko; Capa, Rémi L; Marrer, Corinne; Gounot, Daniel

    2012-01-01

    Looking at a pair of objects is easy when automatic grouping mechanisms bind these objects together, but visual exploration can also be more flexible. It is possible to mentally "re-group" two objects that are not only separate but belong to different pairs of objects. "Re-grouping" is in conflict with automatic grouping, since it entails a separation of each item from the set it belongs to. This ability appears to be impaired in patients with schizophrenia. Here we check if this impairment is selective, which would suggest a dissociation between grouping and "re-grouping," or if it impacts on usual, automatic grouping, which would call for a better understanding of the interactions between automatic grouping and "re-grouping." Sixteen outpatients with schizophrenia and healthy controls had to identify two identical and contiguous target figures within a display of circles and squares alternating around a fixation point. Eye-tracking was used to check central fixation. The target pair could be located in the same or separate hemifields. Identical figures were grouped by a connector (grouped automatically) or not (to be re-grouped). Attention modulation of automatic grouping was tested by manipulating the proportion of connected and unconnected targets, thus prompting subjects to focalize on either connected or unconnected pairs. Both groups were sensitive to automatic grouping in most conditions, but patients were unusually slowed down for connected targets while focalizing on unconnected pairs. In addition, this unusual effect occurred only when targets were presented within the same hemifield. Patients and controls differed on this asymmetry between within- and across-hemifield presentation, suggesting that patients with schizophrenia do not re-group figures in the same way as controls do. We discuss possible implications on how "re-grouping" ties in with ongoing, automatic perception in healthy volunteers.

  6. Patients with Schizophrenia Do Not Preserve Automatic Grouping When Mentally Re-Grouping Figures: Shedding Light on an Ignored Difficulty

    PubMed Central

    Giersch, Anne; van Assche, Mitsouko; Capa, Rémi L.; Marrer, Corinne; Gounot, Daniel

    2012-01-01

    Looking at a pair of objects is easy when automatic grouping mechanisms bind these objects together, but visual exploration can also be more flexible. It is possible to mentally “re-group” two objects that are not only separate but belong to different pairs of objects. “Re-grouping” is in conflict with automatic grouping, since it entails a separation of each item from the set it belongs to. This ability appears to be impaired in patients with schizophrenia. Here we check if this impairment is selective, which would suggest a dissociation between grouping and “re-grouping,” or if it impacts on usual, automatic grouping, which would call for a better understanding of the interactions between automatic grouping and “re-grouping.” Sixteen outpatients with schizophrenia and healthy controls had to identify two identical and contiguous target figures within a display of circles and squares alternating around a fixation point. Eye-tracking was used to check central fixation. The target pair could be located in the same or separate hemifields. Identical figures were grouped by a connector (grouped automatically) or not (to be re-grouped). Attention modulation of automatic grouping was tested by manipulating the proportion of connected and unconnected targets, thus prompting subjects to focalize on either connected or unconnected pairs. Both groups were sensitive to automatic grouping in most conditions, but patients were unusually slowed down for connected targets while focalizing on unconnected pairs. In addition, this unusual effect occurred only when targets were presented within the same hemifield. Patients and controls differed on this asymmetry between within- and across-hemifield presentation, suggesting that patients with schizophrenia do not re-group figures in the same way as controls do. We discuss possible implications on how “re-grouping” ties in with ongoing, automatic perception in healthy volunteers. PMID:22912621

  7. Quality monitored distributed voting system

    DOEpatents

    Skogmo, David

    1997-01-01

    A quality monitoring system can detect certain system faults and fraud attempts in a distributed voting system. The system uses decoy voters to cast predetermined check ballots. Absent check ballots can indicate system faults. Altered check ballots can indicate attempts at counterfeiting votes. The system can also cast check ballots at predetermined times to provide another check on the distributed voting system.

  8. WQEP - a computer spreadsheet program to evaluate water quality data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liddle, R.G.

    1996-12-31

    A flexible spreadsheet Water Quality Evaluation Program (WQEP) has been developed for mining companies, consultants, and regulators to interpret the results of water quality sampling. In order to properly evaluate hydrologic data, unit conversions and chemical calculations are done, quality control checks are needed, and a complete and up-to-date listing of water quality standards is necessary. This process is time consuming and tends not to be done for every sample. This program speeds the process by allowing the input of up to 115 chemical parameters from one sample. WQEP compares concentrations with EPA primary and secondary drinking water MCLs or MCLGs, EPA warmwater and coldwater acute and chronic aquatic life criteria, irrigation criteria, livestock criteria, EPA human health criteria, and several other categories of criteria. The spreadsheet allows the input of State or local water standards of interest. Water quality checks include: anion/cation balance, TDS_m/TDS_c (where m = measured and c = calculated), EC_m/EC_c, EC_m/ion sums, TDS_c/EC ratio, TDS_m/EC, EC vs. alkalinity, two hardness values, and EC vs. sum of cations. WQEP computes the dissolved transport index of 23 parameters, computes ratios of 26 species for trend analysis, calculates non-carbonate alkalinity to adjust the bicarbonate concentration, and calculates 35 interpretive formulas (pE, SAR, S.I., un-ionized ammonia, ionized sulfide HS-, pK_x values, etc.). Fingerprinting is conducted by automatic generation of Stiff diagrams and ion histograms. Mass loading calculations, mass balance calculations, conversions of concentrations, ionic strength, and the activity coefficient and chemical activity of 33 parameters are calculated. This program allows a speedy and thorough evaluation of water quality data from metal mines, coal mining, and natural surface water systems and has been tested against hand calculations.
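
    Of the checks listed above, the anion/cation balance is the simplest to illustrate; the sketch below assumes concentrations already expressed in meq/L (the mg/L-to-meq/L conversion is omitted) and uses an invented example.

        def charge_balance_error(cations_meq, anions_meq):
            """Percent charge-balance error; |CBE| above roughly 5 % flags a suspect analysis."""
            s_cat, s_an = sum(cations_meq), sum(anions_meq)
            return 100.0 * (s_cat - s_an) / (s_cat + s_an)

        # Example: Ca, Mg, Na+K vs HCO3, SO4, Cl (meq/L)
        print(round(charge_balance_error([2.5, 1.1, 0.9], [3.1, 0.8, 0.7]), 2))   # -1.1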

  9. FliPer: checking the reliability of global seismic parameters from automatic pipelines

    NASA Astrophysics Data System (ADS)

    Bugnet, L.; García, R. A.; Davies, G. R.; Mathur, S.; Corsaro, E.

    2017-12-01

    Our understanding of stars through asteroseismic data analysis is limited by our ability to take advantage of the huge number of observed stars provided by space missions such as CoRoT, Kepler, K2, and soon TESS and PLATO. Global seismic pipelines provide global stellar parameters such as mass and radius using the mean seismic parameters, as well as the effective temperature. These pipelines are commonly run automatically on thousands of stars observed by K2 for 3 months (and soon TESS for at least ~1 month). However, pipelines are not immune from misidentifying noise peaks and stellar oscillations. Therefore, new validation techniques are required to assess the quality of these results. We present a new metric called FliPer (Flicker in Power), which takes into account the average variability at all measured time scales. The proper calibration of FliPer enables us to obtain good estimations of global stellar parameters such as surface gravity that are robust against the influence of noise peaks and hence are an excellent way to find faults in asteroseismic pipelines.

  10. Telecommunications Systems Career Ladder, AFSC 307XO.

    DTIC Science & Technology

    1981-01-01

    ... standard test tone levels; perform impulse noise tests; make in-service or out-of-service quality checks on composite signal transmission levels; ... in-service or out-of-service quality control (QC) reports; maintain trouble and restoration record forms (DD Form 1443); direct circuit or system checks ... include: perform fault isolation on analog circuits; make in-service or out-of-service quality checks on voice frequency carrier telegraph (VFCT) terminals.

  11. Automatic Digital Switching Specialist Career Ladder: United States Air Force Job Inventory. AFSCs 29530, 29570, and 29590.

    ERIC Educational Resources Information Center

    Air Force Personnel and Training Research Center, Lackland AFB, TX.

    The U. S. Air Force job inventory for the automatic digital switching specialist career ladder is divided into 12 categories, each of which is broken down into a duty-task list. Space is provided for Air Force personnel filling out the inventory to check whether each task is at present part of their duties. The 12 categories are: organizing and…

  12. 26 CFR 1.6081-5 - Extensions of time in the case of certain partnerships, corporations and U.S. citizens and...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... year and check the appropriate box on Form 4868, “Application for Automatic Extension of Time To File a U.S. Individual Income Tax Return,” or Form 7004, “Application for Automatic Extension of Time to ...”

  13. Quality monitored distributed voting system

    DOEpatents

    Skogmo, D.

    1997-03-18

    A quality monitoring system can detect certain system faults and fraud attempts in a distributed voting system. The system uses decoy voters to cast predetermined check ballots. Absent check ballots can indicate system faults. Altered check ballots can indicate attempts at counterfeiting votes. The system can also cast check ballots at predetermined times to provide another check on the distributed voting system. 6 figs.

  14. The automatic back-check mechanism of mask tooling database and automatic transmission of mask tooling data

    NASA Astrophysics Data System (ADS)

    Xu, Zhe; Peng, M. G.; Tu, Lin Hsin; Lee, Cedric; Lin, J. K.; Jan, Jian Feng; Yin, Alb; Wang, Pei

    2006-10-01

    Nowadays, most foundries pay more and more attention to reducing the CD width. Although lithography technologies have developed drastically, mask data accuracy is a bigger challenge than before. Besides, the mask (reticle) price also rises drastically, so data accuracy needs more special treatment. We have developed a system called eFDMS to guarantee mask data accuracy. eFDMS performs the automatic back-check of the mask tooling database and the automatic transmission of mask tooling data. We integrate our own eFDMS system with the standard mask tooling system K2 so that the processes upstream and downstream of the mask tooling main body (K2) can proceed smoothly and correctly as anticipated. The competition in the IC marketplace is gradually shifting from high-tech processes toward lower prices. Controlling and reducing product cost plays an increasingly significant role in foundries. Before the competition intensifies, we should prepare for the cost challenge ahead of time.

  15. ActiveSeismoPick3D - automatic first arrival determination for large active seismic arrays

    NASA Astrophysics Data System (ADS)

    Paffrath, Marcel; Küperkoch, Ludger; Wehling-Benatelli, Sebastian; Friederich, Wolfgang

    2016-04-01

    We developed a tool for the automatic determination of first arrivals in active seismic data based on an approach that utilises higher-order statistics (HOS) and the Akaike information criterion (AIC), commonly used in seismology but not in active seismics. Automatic picking is highly desirable in active seismics, as the amount of data provided by large seismic arrays rapidly exceeds what an analyst can evaluate in a reasonable amount of time. To bring the functionality of automatic phase picking into the context of active data, the software package ActiveSeismoPick3D was developed in Python. It uses a modified algorithm for the determination of first arrivals which searches for the HOS maximum in unfiltered data. Additionally, it offers tools for manual quality control and postprocessing, e.g. various visualisation and repicking functionalities. For flexibility, the tool also includes methods for the preparation of geometry information of large seismic arrays and improved interfaces to the Fast Marching Tomography Package (FMTOMO), which can be used for the prediction of travel times and inversion for subsurface properties. Output files are generated in the VTK format, allowing the 3D visualization of e.g. the inversion results. As a test case, a data set consisting of 9216 traces from 64 shots was gathered, recorded at 144 receivers deployed in a regular 2D array of 100 x 100 m. ActiveSeismoPick3D automatically checks the determined first arrivals using a dynamic signal-to-noise-ratio threshold. From the data a 3D model of the subsurface was generated using the export functionality of the package and FMTOMO.
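
    A hedged sketch of an AIC-based onset pick of the kind referred to above is shown below; ActiveSeismoPick3D additionally combines this with higher-order statistics, which the sketch omits.

        import numpy as np

        def aic_pick(trace):
            """Return the sample index minimising the Akaike information criterion."""
            x = np.asarray(trace, float)
            n = len(x)
            aic = np.full(n, np.inf)
            for k in range(1, n - 1):
                v1, v2 = np.var(x[:k]), np.var(x[k:])
                if v1 > 0 and v2 > 0:
                    aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
            return int(np.argmin(aic))

        # pick = aic_pick(window_of_samples_around_the_expected_onset)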

  16. Community air monitoring and the Village Green Project ...

    EPA Pesticide Factsheets

    Cost and logistics are practical issues that have historically constrained the number of locations where long-term, active air pollution measurement is possible. In addition, traditional air monitoring approaches are generally conducted by technical experts with limited engagement with community members. EPA’s Village Green Project (VGP) is a prototype technology designed to add value to a community environment – VGP is a park bench equipped with air and meteorological instruments that measure ozone, fine particles, wind, temperature, and humidity at a one-minute time resolution, with the open-source Arduino microprocessor operating as the system controller. The data are streamed wirelessly to a database, passed through automatic diagnostic quality checks, and then made publicly available on an engaging website. The station was designed to minimize power use; it consumes an estimated 15W and operates entirely on solar power, is engineered to run for several days with minimal solar radiation, and is capable of automatically shutting down components of the system to conserve power and restarting when power availability increases. Situated outside a public library in Durham, North Carolina, VGP has also been a gathering location for air quality experts to engage with community members. During the time span of June, 2013 through January, 2014, the station collected about 3500 hours of ozone and PM2.5 data, with over 90% up-time operating only on solar po

  17. Data Quality Verification at STScI - Automated Assessment and Your Data

    NASA Astrophysics Data System (ADS)

    Dempsey, R.; Swade, D.; Scott, J.; Hamilton, F.; Holm, A.

    1996-12-01

    As satellite based observatories improve their ability to deliver wider varieties and more complex types of scientific data, so too does the process of analyzing and reducing these data. It becomes correspondingly imperative that Guest Observers or Archival Researchers have access to an accurate, consistent, and easily understandable summary of the quality of their data. Previously, at the STScI, an astronomer would display and examine the quality and scientific usefulness of every single observation obtained with HST. Recently, this process has undergone a major reorganization at the Institute. A major part of the new process is that the majority of data are assessed automatically with little or no human intervention. As part of routine processing in the OSS--PODPS Unified System (OPUS), the Observatory Monitoring System (OMS) observation logs, the science processing trailer file (also known as the TRL file), and the science data headers are inspected by an automated tool, AUTO_DQ. AUTO_DQ then determines if any anomalous events occurred during the observation or through processing and calibration of the data that affect the procedural quality of the data. The results are placed directly into the Procedural Data Quality (PDQ) file as a string of predefined data quality keywords and comments. These in turn are used by the Contact Scientist (CS) to check the scientific usefulness of the observations. In this manner, the telemetry stream is checked for known problems such as losses of lock, re-centerings, or degraded guiding, for example, while missing data or calibration errors are also easily flagged. If the problem is serious, the data are then queued for manual inspection by an astronomer. The success of every target acquisition is verified manually. If serious failures are confirmed, the PI and the scheduling staff are notified so that options concerning rescheduling the observations can be explored.

  18. QCScreen: a software tool for data quality control in LC-HRMS based metabolomics.

    PubMed

    Simader, Alexandra Maria; Kluger, Bernhard; Neumann, Nora Katharina Nicole; Bueschl, Christoph; Lemmens, Marc; Lirk, Gerald; Krska, Rudolf; Schuhmacher, Rainer

    2015-10-24

    Metabolomics experiments often comprise large numbers of biological samples resulting in huge amounts of data. This data needs to be inspected for plausibility before data evaluation to detect putative sources of error e.g. retention time or mass accuracy shifts. Especially in liquid chromatography-high resolution mass spectrometry (LC-HRMS) based metabolomics research, proper quality control checks (e.g. for precision, signal drifts or offsets) are crucial prerequisites to achieve reliable and comparable results within and across experimental measurement sequences. Software tools can support this process. The software tool QCScreen was developed to offer a quick and easy data quality check of LC-HRMS derived data. It allows a flexible investigation and comparison of basic quality-related parameters within user-defined target features and the possibility to automatically evaluate multiple sample types within or across different measurement sequences in a short time. It offers a user-friendly interface that allows an easy selection of processing steps and parameter settings. The generated results include a coloured overview plot of data quality across all analysed samples and targets and, in addition, detailed illustrations of the stability and precision of the chromatographic separation, the mass accuracy and the detector sensitivity. The use of QCScreen is demonstrated with experimental data from metabolomics experiments using selected standard compounds in pure solvent. The application of the software identified problematic features, samples and analytical parameters and suggested which data files or compounds required closer manual inspection. QCScreen is an open source software tool which provides a useful basis for assessing the suitability of LC-HRMS data prior to time consuming, detailed data processing and subsequent statistical analysis. It accepts the generic mzXML format and thus can be used with many different LC-HRMS platforms to process both multiple quality control sample types as well as experimental samples in one or more measurement sequences.

  19. Automatic monitoring of the alignment and wear of vibration welding equipment

    DOEpatents

    Spicer, John Patrick; Cai, Wayne W.; Chakraborty, Debejyo; Mink, Keith

    2017-05-23

    A vibration welding system includes vibration welding equipment having a welding horn and anvil, a host machine, a check station, and a welding robot. At least one displacement sensor is positioned with respect to one of the welding equipment and the check station. The robot moves the horn and anvil via an arm to the check station, when a threshold condition is met, i.e., a predetermined amount of time has elapsed or a predetermined number of welds have been completed. The robot moves the horn and anvil to the check station, activates the at least one displacement sensor, at the check station, and determines a status condition of the welding equipment by processing the received signals. The status condition may be one of the alignment of the vibration welding equipment and the wear or degradation of the vibration welding equipment.

  20. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to check out the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a checkout request is issued for each plug-in listed in the feature. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. The Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any Eclipse-based repository with a similar structure. It also can apply build parameters and preferences automatically at the end of the checkout.
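
    PEPC itself runs inside Eclipse; the sketch below only illustrates the underlying pattern in plain Python: parse a feature.xml, then check out each listed plug-in with a bounded thread pool. The repository layout, base URL and use of svn are assumptions made for the example.

        import subprocess
        import xml.etree.ElementTree as ET
        from concurrent.futures import ThreadPoolExecutor

        def plugin_ids(feature_xml):
            """Return the plug-in identifiers listed in an Eclipse feature.xml."""
            root = ET.parse(feature_xml).getroot()
            return [p.get("id") for p in root.findall("plugin")]

        def checkout(plugin_id, base_url):
            # Hypothetical repository layout: one directory per plug-in.
            subprocess.run(["svn", "checkout", f"{base_url}/{plugin_id}"], check=True)

        def parallel_checkout(feature_xml, base_url, workers=8):
            with ThreadPoolExecutor(max_workers=workers) as pool:
                list(pool.map(lambda pid: checkout(pid, base_url),
                              plugin_ids(feature_xml)))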

  1. Implementing Model-Check for Employee and Management Satisfaction

    NASA Technical Reports Server (NTRS)

    Jones, Corey; LaPha, Steven

    2013-01-01

    This presentation will discuss methods by which ModelCheck can be implemented not only to improve model quality, but also to satisfy both employees and management through different sets of quality checks. This approach allows a standard set of modeling practices to be upheld throughout a company, with minimal interaction required by the end user. The presenter will demonstrate how to create multiple ModelCheck standards, preventing users from evading the system, and how it can improve the quality of drawings and models.

  2. Rainfall, Streamflow, and Water-Quality Data During Stormwater Monitoring, Halawa Stream Drainage Basin, Oahu, Hawaii, July 1, 2006 to June 30, 2007

    USGS Publications Warehouse

    Young, Stacie T.M.; Jamison, Marcael T.J.

    2007-01-01

    Storm runoff water-quality samples were collected as part of the State of Hawaii Department of Transportation Stormwater Monitoring Program. This program is designed to assess the effects of highway runoff and urban runoff on Halawa Stream. For this program, rainfall data were collected at two stations, continuous streamflow data at three stations, and water-quality data at five stations, which include the two continuous streamflow stations. This report summarizes rainfall, streamflow, and water-quality data collected between July 1, 2006 and June 30, 2007. A total of 13 samples was collected over two storms during July 1, 2006 to June 30, 2007. The goal was to collect grab samples nearly simultaneously at all five stations and flow-weighted time-composite samples at the three stations equipped with automatic samplers. Samples were analyzed for total suspended solids, total dissolved solids, nutrients, chemical oxygen demand, and selected trace metals (cadmium, chromium, copper, lead, nickel, and zinc). Additionally, grab samples were analyzed for oil and grease, total petroleum hydrocarbons, fecal coliform, and biological oxygen demand. Quality-assurance/quality-control samples were also collected during storms and during routine maintenance to verify analytical procedures and check the effectiveness of equipment-cleaning procedures.

  3. Combining contour detection algorithms for the automatic extraction of the preparation line from a dental 3D measurement

    NASA Astrophysics Data System (ADS)

    Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut

    2005-04-01

    Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.
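
    The following sketch illustrates the shape of such a three-stage decision (range check, decision tree, support vector machine) for picking the correct preparation line among candidate contours; the feature names, toy training data and thresholds are hypothetical, and in practice the classifiers would be trained on expert-labelled clinical data.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical features per candidate: [contour length (mm), mean edge strength].
        X_train = np.array([[22.0, 0.85], [41.0, 0.90], [15.0, 0.20], [70.0, 0.30]])
        y_train = np.array([1, 1, 0, 0])                   # 1 = correct preparation line
        tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
        svm = SVC(gamma="scale").fit(X_train, y_train)

        def select_preparation_line(candidates, length_range=(10.0, 60.0)):
            """candidates: list of (features, contour); return the first accepted contour."""
            for features, contour in candidates:
                if not (length_range[0] <= features[0] <= length_range[1]):
                    continue                               # stage 1: plausibility range
                x = np.asarray(features, float).reshape(1, -1)
                if tree.predict(x)[0] != 1:                # stage 2: decision tree
                    continue
                if svm.predict(x)[0] == 1:                 # stage 3: SVM confirmation
                    return contour
            return None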

  4. First experiences with the LHC BLM sanity checks

    NASA Astrophysics Data System (ADS)

    Emery, J.; Dehning, B.; Effinger, E.; Nordt, A.; Sapinski, M. G.; Zamantzas, C.

    2010-12-01

    Reliability concerns have driven the design of the Large Hadron Collider (LHC) Beam Loss Monitoring (BLM) system from the early stage of the studies up to the present commissioning and the latest development of diagnostic tools. To protect the system against non-conformities, new ways of automatic checking have been developed and implemented. These checks are regularly and systematically executed by the LHC operation team to ensure that, after each test, the system status is "as good as new". The sanity checks are part of this strategy. They test the electrical part of the detectors (ionisation chamber or secondary emission detector), their cable connections to the front-end electronics, further connections to the back-end electronics and their ability to request a beam abort. During the installation and in the early commissioning phase, these checks have also shown their ability to find non-conformities caused by unexpected failure scenarios. In everyday operation, a non-conformity discovered by this check inhibits any further injections into the LHC until the check confirms the absence of non-conformities.

  5. SU-F-T-423: Automating Treatment Planning for Cervical Cancer in Low- and Middle- Income Countries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kisling, K; Zhang, L; Yang, J

    Purpose: To develop and test two independent algorithms that automatically create the photon treatment fields for a four-field box beam arrangement, a common treatment technique for cervical cancer in low- and middle-income countries. Methods: Two algorithms were developed and integrated into Eclipse using its Advanced Programming Interface. 3D Method: We automatically segment bony anatomy on CT using an in-house multi-atlas contouring tool and project the structures into the beam’s-eye-view. We identify anatomical landmarks on the projections to define the field apertures. 2D Method: We generate DRRs for all four beams. An atlas of DRRs for six standard patients with corresponding field apertures is deformably registered to the test patient DRRs. The set of deformed atlas apertures is fitted to an expected shape to define the final apertures. Both algorithms were tested on 39 patient CTs, and the resulting treatment fields were scored by a radiation oncologist. We also investigated the feasibility of using one algorithm as an independent check of the other algorithm. Results: 96% of the 3D-Method-generated fields and 79% of the 2D-Method-generated fields were scored acceptable for treatment (“Per Protocol” or “Acceptable Variation”). The 3D Method generated more fields scored “Per Protocol” than the 2D Method (62% versus 17%). The 4% of the 3D-Method-generated fields that were scored “Unacceptable Deviation” were all due to an improper L5 vertebra contour resulting in an unacceptable superior jaw position. When these same patients were planned with the 2D Method, the superior jaw was acceptable, suggesting that the 2D Method can be used to independently check the 3D Method. Conclusion: Our results show that our 3D Method is feasible for automatically generating cervical treatment fields. Furthermore, the 2D Method can serve as an automatic, independent check of the automatically-generated treatment fields. These algorithms will be implemented for fully automated cervical treatment planning.

  6. Monitoring caustic injuries from emergency department databases using automatic keyword recognition software.

    PubMed

    Vignally, P; Fondi, G; Taggi, F; Pitidis, A

    2011-03-31

    In Italy the European Union Injury Database reports the involvement of chemical products in 0.9% of home and leisure accidents. The Emergency Department registry on domestic accidents in Italy and the Poison Control Centres record that 90% of cases of exposure to toxic substances occur in the home. It is not rare for the effects of chemical agents to be observed in hospitals, with a high potential risk of damage - the rate of this cause of hospital admission is double the domestic injury average. The aim of this study was to monitor the effects of injuries caused by caustic agents in Italy using automatic free-text recognition in Emergency Department medical databases. We created a Stata software program to automatically identify caustic or corrosive injury cases using an agent-specific list of keywords. We focused attention on the procedure's sensitivity and specificity. Ten hospitals in six regions of Italy participated in the study. The program identified 112 cases of injury by caustic or corrosive agents. Checking the cases by quality controls (based on manual reading of ED reports), we assessed 99 cases as true positive, i.e. 88.4% of the patients were automatically recognized by the software as being affected by caustic substances (99% CI: 80.6%- 96.2%), that is to say 0.59% (99% CI: 0.45%-0.76%) of the whole sample of home injuries, a value almost three times as high as that expected (p < 0.0001) from European codified information. False positives were 11.6% of the recognized cases (99% CI: 5.1%- 21.5%). Our automatic procedure for caustic agent identification proved to have excellent product recognition capacity with an acceptable level of excess sensitivity. Contrary to our a priori hypothesis, the automatic recognition system provided a level of identification of agents possessing caustic effects that was significantly much greater than was predictable on the basis of the values from current codifications reported in the European Database.
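
    A hedged sketch of agent-specific keyword recognition in free-text reports is given below; the study implemented this in Stata, and the keyword list here is only an illustrative example, not the one used in the paper.

        import re

        KEYWORDS = ["caustic", "corrosive", "bleach", "lye", "drain cleaner", "ammonia"]
        PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

        def is_caustic_case(report_text):
            """True if the free-text report mentions any caustic/corrosive keyword."""
            return PATTERN.search(report_text) is not None

        print(is_caustic_case("Patient ingested drain cleaner; burns to oral mucosa."))  # True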

  7. Rainfall, Discharge, and Water-Quality Data During Stormwater Monitoring, July 1, 2007, to June 30, 2008; Halawa Stream Drainage Basin and the H-1 Storm Drain, Oahu, Hawaii

    USGS Publications Warehouse

    Presley, Todd K.; Jamison, Marcael T.J.; Young, Stacie T.M.

    2008-01-01

    Storm runoff water-quality samples were collected as part of the State of Hawaii Department of Transportation Stormwater Monitoring Program. The program is designed to assess the effects of highway runoff and urban runoff on Halawa Stream and to assess the effects from the H-1 storm drain on Manoa Stream. For this program, rainfall data were collected at three stations, continuous discharge data at four stations, and water-quality data at six stations, which include the four continuous discharge stations. This report summarizes rainfall, discharge, and water-quality data collected between July 1, 2007, and June 30, 2008. A total of 16 environmental samples were collected over two storms during July 1, 2007, to June 30, 2008, within the Halawa Stream drainage area. Samples were analyzed for total suspended solids, total dissolved solids, nutrients, chemical oxygen demand, and selected trace metals (cadmium, chromium, copper, lead, and zinc). Additionally, grab samples were analyzed for oil and grease, total petroleum hydrocarbons, fecal coliform, and biological oxygen demand. Some samples were analyzed for only a partial list of these analytes because an insufficient volume of sample was collected by the automatic samplers. Three additional quality-assurance/quality-control samples were collected concurrently with the storm samples. A total of 16 environmental samples were collected over four storms during July 1, 2007, to June 30, 2008 at the H-1 Storm Drain. All samples at this site were collected using an automatic sampler. Samples generally were analyzed for total suspended solids, nutrients, chemical oxygen demand, oil and grease, total petroleum hydrocarbons, and selected trace metals (cadmium, chromium, copper, lead, nickel, and zinc), although some samples were analyzed for only a partial list of these analytes. During the storm of January 29, 2008, 10 discrete samples were collected. Varying constituent concentrations were detected for the samples collected at different times during this storm event. Two quality-assurance/quality-control samples were collected concurrently with the storm samples. Three additional quality-assurance/quality-control samples were collected during routine sampler maintenance to check the effectiveness of equipment-cleaning procedures.

  8. Automatic detection of sweep-meshable volumes

    DOEpatents

    Tautges, Timothy J.; White, David R. [Pittsburgh, PA]

    2006-05-23

    A method of and software for automatically determining whether a mesh can be generated by sweeping for a representation of a geometric solid comprising: classifying surface mesh schemes for surfaces of the representation locally using surface vertex types; grouping mappable and submappable surfaces of the representation into chains; computing volume edge types for the representation; recursively traversing surfaces of the representation and grouping the surfaces into source, target, and linking surface lists; and checking traversal direction when traversing onto linking surfaces.

  9. Our Commitment to Reliable Health and Medical Information

    MedlinePlus

    ... 000 visitors world-wide per day. HONcode Toolbar: search engine and checker of the certification status Automatically checks ... HONcode status when browsing health web sites. The search engine indexes only HONcode-certified sites. HONcodeHunt currently includes ...

  10. 32 CFR 701.123 - PA fees.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... duplication for free. (1) DON activities shall waive fees automatically if the direct cost for reproduction of... made on a case-to-case basis. (c) PA fee deposits. Checks or money orders shall be made payable to the...

  11. 32 CFR 701.123 - PA fees.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... duplication for free. (1) DON activities shall waive fees automatically if the direct cost for reproduction of... made on a case-to-case basis. (c) PA fee deposits. Checks or money orders shall be made payable to the...

  12. 32 CFR 701.123 - PA fees.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... duplication for free. (1) DON activities shall waive fees automatically if the direct cost for reproduction of... made on a case-to-case basis. (c) PA fee deposits. Checks or money orders shall be made payable to the...

  13. 32 CFR 701.123 - PA fees.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... duplication for free. (1) DON activities shall waive fees automatically if the direct cost for reproduction of... made on a case-to-case basis. (c) PA fee deposits. Checks or money orders shall be made payable to the...

  14. 32 CFR 701.123 - PA fees.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... duplication for free. (1) DON activities shall waive fees automatically if the direct cost for reproduction of... made on a case-to-case basis. (c) PA fee deposits. Checks or money orders shall be made payable to the...

  15. Assigning unique identification numbers to new user accounts and groups in a computing environment with multiple registries

    DOEpatents

    DeRobertis, Christopher V.; Lu, Yantian T.

    2010-02-23

    A method, system, and program storage device for creating a new user account or user group with a unique identification number in a computing environment having multiple user registries is provided. In response to receiving a command to create a new user account or user group, an operating system of a clustered computing environment automatically checks multiple registries configured for the operating system to determine whether a candidate identification number for the new user account or user group has been assigned already to one or more existing user accounts or groups, respectively. The operating system automatically assigns the candidate identification number to the new user account or user group created in a target user registry if the checking indicates that the candidate identification number has not been assigned already to any of the existing user accounts or user groups, respectively.
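    The registry check described above can be pictured with the following minimal sketch; the registry structures, starting ID, and candidate-selection loop are assumptions made for illustration, not the patented implementation.

        # Hypothetical registries mapping user names to numeric IDs.
        registries = {
            "files": {"alice": 1001, "bob": 1002},
            "ldap":  {"carol": 1003},
            "nis":   {"dave": 1005},
        }

        def id_in_use(candidate, registries):
            """True if any configured registry has already assigned the candidate ID."""
            return any(candidate in reg.values() for reg in registries.values())

        def create_account(name, target_registry, registries, start=1000):
            """Assign the first candidate ID that is unused across *all* registries."""
            candidate = start
            while id_in_use(candidate, registries):
                candidate += 1
            registries[target_registry][name] = candidate
            return candidate

        print(create_account("erin", "files", registries))  # -> 1000 (first free ID)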

  16. An effective automatic procedure for testing parameter identifiability of HIV/AIDS models.

    PubMed

    Saccomani, Maria Pia

    2011-08-01

    Realistic HIV models tend to be rather complex and many recent models proposed in the literature could not yet be analyzed by traditional identifiability testing techniques. In this paper, we check a priori global identifiability of some of these nonlinear HIV models taken from the recent literature, by using a differential algebra algorithm based on previous work of the author. The algorithm is implemented in a software tool, called DAISY (Differential Algebra for Identifiability of SYstems), which has been recently released (DAISY is freely available on the web site http://www.dei.unipd.it/~pia/ ). The software can be used to automatically check global identifiability of (linear and) nonlinear models described by polynomial or rational differential equations, thus providing a general and reliable tool to test global identifiability of several HIV models proposed in the literature. It can be used by researchers with a minimum of mathematical background.

  17. Research on Centralized Voltage and Effective Inequality Identification Based on Circuit Analysis Method

    NASA Astrophysics Data System (ADS)

    Su, Yi; Wang, Feifeng; Lu, Yufeng; Huang, Huimin; Xia, Xiaofei

    2017-09-01

    Starting from an affine function representation of the grid and the optimal power flow (OPF) problem, this paper discusses the equivalent treatment of certain inequality-constrained variables in the optimization. We then propose an injection-current model and define a constraint sensitivity index based on the affine characteristics. The index can be used to automatically identify the central-point voltage and the effective (binding) inequality constraints of the system, indicating where reactive power compensation should be applied at the corresponding generator node to control the voltage and ensure the voltage quality of the system. When checking the effective inequalities, we introduce a cross-solving method for the power flow, which provides an alternative approach to the power flow solution. Results for the IEEE 5-node example are used to illustrate the validity and practicality of the proposed method.

  18. Authoring and verification of clinical guidelines: a model driven approach.

    PubMed

    Pérez, Beatriz; Porres, Ivan

    2010-08-01

    The goal of this research is to provide a framework to enable authoring and verification of clinical guidelines. The framework is part of a larger research project aimed at improving the representation, quality and application of clinical guidelines in daily clinical practice. The verification process of a guideline is based on (1) model checking techniques to verify guidelines against semantic errors and inconsistencies in their definition, (2) combined with Model Driven Development (MDD) techniques, which enable us to automatically process manually created guideline specifications and temporal-logic statements to be checked and verified regarding these specifications, making the verification process faster and more cost-effective. Particularly, we use UML statecharts to represent the dynamics of guidelines and, based on these manually defined guideline specifications, we use an MDD-based tool chain to automatically process them to generate the input model of a model checker. The model checker takes the resulting model together with the specific guideline requirements, and verifies whether the guideline fulfils such properties. The overall framework has been implemented as an Eclipse plug-in named GBDSSGenerator which, particularly, starting from the UML statechart representing a guideline, allows the verification of the guideline against specific requirements. Additionally, we have established a pattern-based approach for defining commonly occurring types of requirements in guidelines. We have successfully validated our overall approach by verifying properties in different clinical guidelines resulting in the detection of some inconsistencies in their definition. The proposed framework allows (1) the authoring and (2) the verification of clinical guidelines against specific requirements defined based on a set of property specification patterns, enabling non-experts to easily write formal specifications and thus easing the verification process. Copyright 2010 Elsevier Inc. All rights reserved.

  19. Hearing-aid tester

    NASA Technical Reports Server (NTRS)

    Kessinger, R.; Polhemus, J. T.; Waring, J. G.

    1977-01-01

    Hearing aids are automatically checked by a circuit that applies a half-second test signal every thirty minutes. If the hearing-aid output is distorted or too small, or if the battery is too low, a warning lamp is activated. The test circuit is incorporated directly into the hearing-aid package.

  20. Secure Distributed Human Computation

    NASA Astrophysics Data System (ADS)

    Gentry, Craig; Ramzan, Zulfikar; Stubblebine, Stuart

    In Peha’s Financial Cryptography 2004 invited talk, he described the Cyphermint PayCash system (see www.cyphermint.com), which allows people without bank accounts or credit cards (a sizeable segment of the U.S. population) to automatically and instantly cash checks, pay bills, or make Internet transactions through publicly-accessible kiosks. Since PayCash offers automated financial transactions and since the system uses (unprotected) kiosks, security is critical. The kiosk must decide whether a person cashing a check is really the person to whom the check was made out, so it takes a digital picture of the person cashing the check and transmits this picture electronically to a central office, where a human worker compares the kiosk’s picture to one that was taken when the person registered with Cyphermint. If both pictures are of the same person, then the human worker authorizes the transaction.

  1. KnowLife: a versatile approach for constructing a large knowledge graph for biomedical sciences.

    PubMed

    Ernst, Patrick; Siu, Amy; Weikum, Gerhard

    2015-05-14

    Biomedical knowledge bases (KB's) have become important assets in life sciences. Prior work on KB construction has three major limitations. First, most biomedical KBs are manually built and curated, and cannot keep up with the rate at which new findings are published. Second, for automatic information extraction (IE), the text genre of choice has been scientific publications, neglecting sources like health portals and online communities. Third, most prior work on IE has focused on the molecular level or chemogenomics only, like protein-protein interactions or gene-drug relationships, or solely address highly specific topics such as drug effects. We address these three limitations by a versatile and scalable approach to automatic KB construction. Using a small number of seed facts for distant supervision of pattern-based extraction, we harvest a huge number of facts in an automated manner without requiring any explicit training. We extend previous techniques for pattern-based IE with confidence statistics, and we combine this recall-oriented stage with logical reasoning for consistency constraint checking to achieve high precision. To our knowledge, this is the first method that uses consistency checking for biomedical relations. Our approach can be easily extended to incorporate additional relations and constraints. We ran extensive experiments not only for scientific publications, but also for encyclopedic health portals and online communities, creating different KB's based on different configurations. We assess the size and quality of each KB, in terms of number of facts and precision. The best configured KB, KnowLife, contains more than 500,000 facts at a precision of 93% for 13 relations covering genes, organs, diseases, symptoms, treatments, as well as environmental and lifestyle risk factors. KnowLife is a large knowledge base for health and life sciences, automatically constructed from different Web sources. As a unique feature, KnowLife is harvested from different text genres such as scientific publications, health portals, and online communities. Thus, it has the potential to serve as one-stop portal for a wide range of relations and use cases. To showcase the breadth and usefulness, we make the KnowLife KB accessible through the health portal (http://knowlife.mpi-inf.mpg.de).

  2. The good, the bad and the dubious: VHELIBS, a validation helper for ligands and binding sites

    PubMed Central

    2013-01-01

    Background Many Protein Data Bank (PDB) users assume that the deposited structural models are of high quality but forget that these models are derived from the interpretation of experimental data. The accuracy of atom coordinates is not homogeneous between models or throughout the same model. To avoid basing a research project on a flawed model, we present a tool for assessing the quality of ligands and binding sites in crystallographic models from the PDB. Results The Validation HElper for LIgands and Binding Sites (VHELIBS) is software that aims to ease the validation of binding site and ligand coordinates for non-crystallographers (i.e., users with little or no crystallography knowledge). Using a convenient graphical user interface, it allows one to check how ligand and binding site coordinates fit to the electron density map. VHELIBS can use models from either the PDB or the PDB_REDO databank of re-refined and re-built crystallographic models. The user can specify threshold values for a series of properties related to the fit of coordinates to electron density (Real Space R, Real Space Correlation Coefficient and average occupancy are used by default). VHELIBS will automatically classify residues and ligands as Good, Dubious or Bad based on the specified limits. The user is also able to visually check the quality of the fit of residues and ligands to the electron density map and reclassify them if needed. Conclusions VHELIBS allows inexperienced users to examine the binding site and the ligand coordinates in relation to the experimental data. This is an important step to evaluate models for their fitness for drug discovery purposes such as structure-based pharmacophore development and protein-ligand docking experiments. PMID:23895374
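    A minimal sketch of the threshold-based classification VHELIBS describes might look like the following; the cutoff values are illustrative defaults chosen here for the example, not necessarily those shipped with the tool.

        def classify_fit(rsr, rscc, occupancy,
                         good=(0.4, 0.9, 1.0), dubious=(0.6, 0.8, 0.5)):
            """Classify a residue/ligand as Good, Dubious or Bad from its fit to the
            electron density: Real Space R (lower is better), Real Space Correlation
            Coefficient (higher is better) and average occupancy (higher is better).
            Threshold tuples are (max RSR, min RSCC, min occupancy); values here are
            illustrative only."""
            if rsr <= good[0] and rscc >= good[1] and occupancy >= good[2]:
                return "Good"
            if rsr <= dubious[0] and rscc >= dubious[1] and occupancy >= dubious[2]:
                return "Dubious"
            return "Bad"

        print(classify_fit(rsr=0.25, rscc=0.95, occupancy=1.0))  # Good
        print(classify_fit(rsr=0.55, rscc=0.85, occupancy=0.8))  # Dubious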

  3. Data services providing by the Ukrainian NODC (MHI NASU)

    NASA Astrophysics Data System (ADS)

    Eremeev, V.; Godin, E.; Khaliulin, A.; Ingerov, A.; Zhuk, E.

    2009-04-01

    At the modern stage of World Ocean study, information support for investigations based on advanced computer technologies becomes particularly important. This abstract presents several data services developed at the Ukrainian NODC on the basis of the Marine Environmental and Information Technologies Department of MHI NASU. The Data Quality Control Service. Using experience from international collaboration in the field of data collection and quality checking, we have developed quality control (QC) software providing both preliminary (automatic) and expert (manual) data quality check procedures. The current version of the QC software works for the Mediterranean and Black seas and includes climatic arrays for hydrological and a few hydrochemical parameters based on such products as MEDAR/MEDATLAS II, Physical Oceanography of the Black Sea and the Climatic Atlas of Oxygen and Hydrogen Sulfide in the Black Sea. The data quality check procedure includes metadata control and hydrological and hydrochemical data control. Metadata control provides checking of duplicate cruises and profiles, date and chronology, ship velocity, station location, sea depth and observation depth. The data QC procedure includes climatic checks (or range checks for parameters with a small number of observations), a density inversion check for hydrological data and a search for spikes. Using climatic fields and profiles prepared by regional oceanography experts leads to more reliable results of the data quality check procedure. The Data Access Services. The Ukrainian NODC provides two products for data access: on-line software and a data access module for the MHI NASU local net. This software allows selecting data by rectangular area, date, month and cruise. The result of a query is metadata, presented in a table together with a visual presentation of stations on a map. Both metadata and data can be viewed by selecting a station in the metadata table or on the map. Data can also be exported in ODV format. The product is available at http://www.ocean.nodc.org.ua/DataAccess.php. The local net version provides access to the oceanological database of the MHI NASU. The current version allows selecting data by spatial and temporal limits, depth, parameter values and quality flags, works for the Mediterranean and Black seas, and provides visualization of metadata and data, statistics of data selection, and data export into several formats. The Operational Data Management Services. Collaborators at the MHI Experimental Branch developed a system for obtaining information on water pressure and temperature, as well as atmospheric pressure. Sea level observations are also conducted, and the obtained data are transferred online. An interface for operational data access was developed; it allows selecting parameters (sea level, water temperature, atmospheric pressure, wind and water pressure) and a time interval to view parameter graphs. The product is available at http://www.ocean.nodc.org.ua/Katsively.php. The Climatic Products. The current version of the Climatic Atlas includes maps of parameters such as temperature, salinity, density, heat storage, dynamic heights, upper boundary of hydrogen sulfide and lower boundary of oxygen for the Black Sea basin. Maps for temperature, salinity and density were calculated at 19 standard depths and averaged monthly for depths of 0-300 m and annually for greater depths. The climatic maps of the upper boundary of hydrogen sulfide and the lower boundary of oxygen were averaged by decade, from the 1920s to the 1990s, and by season. Two versions of a climatic atlas viewer, on-line and desktop, were developed for presentation of the climatic maps. They provide similar functions for selecting and viewing maps by parameter, month and depth and for saving maps in various formats. The on-line version of the atlas is available at http://www.ocean.nodc.org.ua/Main_Atlas.php.
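    The automatic part of such a QC chain can be sketched as below with two of the checks mentioned above (a range check against climatic bounds and a simple spike check); the bounds, spike criterion, flag scheme and profile values are invented for illustration.

        def qc_profile(depths, temps, valid_range=(-2.0, 35.0), spike_threshold=5.0):
            """Flag each temperature value: 1 = good, 4 = bad (toy flag scheme).

            Checks, in the spirit of preliminary (automatic) QC:
              - range check against climatic/physical bounds,
              - spike check: value far from the mean of its two neighbours.
            """
            flags = [1] * len(temps)
            for i, t in enumerate(temps):
                if not (valid_range[0] <= t <= valid_range[1]):
                    flags[i] = 4
                elif 0 < i < len(temps) - 1:
                    if abs(t - (temps[i - 1] + temps[i + 1]) / 2.0) > spike_threshold:
                        flags[i] = 4
            return flags

        print(qc_profile([0, 10, 20, 30], [18.0, 17.8, 27.0, 17.5]))  # [1, 1, 4, 1]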

  4. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR FORM QA/QC CHECKS (UA-C-2.0)

    EPA Science Inventory

    The purpose of this SOP is to outline the process of Field Quality Assurance and Quality Control checks. This procedure was followed to ensure consistent data retrieval during the Arizona NHEXAS project and the "Border" study. Keywords: custody; QA/QC; field checks.

    The Nation...

  5. Quality control system preparation for photogrammetric and laser scanning missions of Spanish national plan of aerial orthophotogpaphy (PNOA). (Polish Title: Opracowanie systemu kontroli jakości realizacji nalotów fotogrametrycznych i skaningowych dla hiszpańskiego narodowego planu ortofotomapy lotniczej (PNOA))

    NASA Astrophysics Data System (ADS)

    Rzonca, A.

    2013-12-01

    The paper presents the state of the art of quality control of photogrammetric and laser scanning data captured by airborne sensors. The described subject is very important for photogrammetric and LiDAR project execution, because the data quality decides a priori the quality of the final product. On the other hand, a precise and effective quality control process allows the missions to be executed without a wide margin of safety, especially in mountain-area projects. As an introduction, the author presents the theoretical background of quality control, based on his own experience, instructions and technical documentation. He describes several organizational variants. Basically, there are two main approaches: quality control of the captured data, and control of discrepancies between the flight plan and the results of its execution. Both can rely on control tests and data analysis. A test is an automatic algorithm that checks the data and generates a control report; analysis is a less complicated process based on a manual check of documentation, data and metadata. An example of a quality control system for a large-area project is presented: the National Plan of Aerial Orthophotography (Plan Nacional de Ortofotografía Aérea, PNOA), realized periodically for the whole territory of Spain. The internal control system delivers its results soon after the flight and informs the company's flight team. This allows errors to be corrected shortly after the flight and, if necessary, stops the transfer of data to another team or company for further processing. The described data quality control system comprises geometric and radiometric control of photogrammetric data and geometric control of LiDAR data. It checks all specified parameters and generates reports, which are very helpful in case of errors or low-quality data. The paper draws on the author's experience in the field of data quality control and presents conclusions and suggestions on organizational and technical aspects, with a short definition of the necessary control software.

  6. EARLINET Single Calculus Chain - technical - Part 1: Pre-processing of raw lidar data

    NASA Astrophysics Data System (ADS)

    D'Amico, Giuseppe; Amodeo, Aldo; Mattis, Ina; Freudenthaler, Volker; Pappalardo, Gelsomina

    2016-02-01

    In this paper we describe an automatic tool for the pre-processing of aerosol lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of ELPP, particular attention has been payed to make the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of ELPP is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of ELPP. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. ELPP has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
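    The individual corrections listed above are standard signal-handling steps; a minimal sketch of two of them (non-paralyzable dead-time correction of photon-counting rates and background subtraction from far-range bins) is given below under those common textbook assumptions, not as ELPP's actual implementation; the dead time, bin count and signal values are invented.

        def deadtime_correct(count_rate_mhz, tau_ns=4.0):
            """Non-paralyzable dead-time correction: N_true = N / (1 - N * tau)."""
            tau_us = tau_ns * 1e-3                    # ns -> microseconds
            n = count_rate_mhz                        # MHz = counts per microsecond
            return n / (1.0 - n * tau_us)

        def subtract_background(signal, n_far_bins=100):
            """Estimate the background as the mean of the farthest range bins and
            subtract it from the whole profile."""
            bg = sum(signal[-n_far_bins:]) / float(min(n_far_bins, len(signal)))
            return [s - bg for s in signal]

        print(deadtime_correct(10.0, tau_ns=4.0))                       # ~10.42 MHz
        print(subtract_background([50.0, 20.0, 5.1, 5.0, 4.9], n_far_bins=3))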

  7. Towards the Real-Time Evaluation of Collaborative Activities: Integration of an Automatic Rater of Collaboration Quality in the Classroom from the Teacher's Perspective

    ERIC Educational Resources Information Center

    Chounta, Irene-Angelica; Avouris, Nikolaos

    2016-01-01

    This paper presents the integration of a real time evaluation method of collaboration quality in a monitoring application that supports teachers in class orchestration. The method is implemented as an automatic rater of collaboration quality and studied in a real time scenario of use. We argue that automatic and semi-automatic methods which…

  8. Thinking inside the (lock)box: using banking technology to improve the revenue cycle.

    PubMed

    D'Eramo, Michael; Umbreit, Lynda

    2005-08-01

    An integrated, image-based lockbox solution has allowed Columbus, Ohio-based MaternOhio to automate payment posting, reconcilement, and billing; store check and remittance images electronically; and automatically update its in-house patient accounting system and medical records.

  9. Automated Verification of Specifications with Typestates and Access Permissions

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.; Catano, Nestor

    2011-01-01

    We propose an approach to formally verify Plural specifications based on access permissions and typestates, by model-checking automatically generated abstract state-machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, provides model checking support for the Plural tool, which currently supports only program verification via data flow analysis (DFA).

  10. SIMPATIQCO: a server-based software suite which facilitates monitoring the time course of LC-MS performance metrics on Orbitrap instruments.

    PubMed

    Pichler, Peter; Mazanek, Michael; Dusberger, Frederico; Weilnböck, Lisa; Huber, Christian G; Stingl, Christoph; Luider, Theo M; Straube, Werner L; Köcher, Thomas; Mechtler, Karl

    2012-11-02

    While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC-MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge.
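    The robust-statistics idea (learning an acceptable range for each QC metric from previously uploaded runs so that outlier runs stand out) can be sketched as follows; the median-plus-MAD rule, the multiplier and the example history are common robust choices assumed here for illustration, not necessarily the exact statistic SIMPATIQCO uses.

        import statistics

        def learn_range(history, k=3.0):
            """Learn an acceptable range for a QC metric from past runs using robust
            statistics: median +/- k * MAD (median absolute deviation)."""
            med = statistics.median(history)
            mad = statistics.median(abs(x - med) for x in history)
            return med - k * mad, med + k * mad

        def flag_run(value, history):
            lo, hi = learn_range(history)
            return "ok" if lo <= value <= hi else "out of range"

        elution_widths = [14.2, 13.8, 14.5, 14.1, 13.9, 14.3, 14.0]  # invented history (s)
        print(flag_run(14.4, elution_widths))   # ok
        print(flag_run(18.0, elution_widths))   # out of range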

  11. SIMPATIQCO: A Server-Based Software Suite Which Facilitates Monitoring the Time Course of LC–MS Performance Metrics on Orbitrap Instruments

    PubMed Central

    2012-01-01

    While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC–MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge. PMID:23088386

  12. Semantic focusing allows fully automated single-layer slide scanning of cervical cytology slides.

    PubMed

    Lahrmann, Bernd; Valous, Nektarios A; Eisenmann, Urs; Wentzensen, Nicolas; Grabe, Niels

    2013-01-01

    Liquid-based cytology (LBC) in conjunction with Whole-Slide Imaging (WSI) enables the objective and sensitive and quantitative evaluation of biomarkers in cytology. However, the complex three-dimensional distribution of cells on LBC slides requires manual focusing, long scanning-times, and multi-layer scanning. Here, we present a solution that overcomes these limitations in two steps: first, we make sure that focus points are only set on cells. Secondly, we check the total slide focus quality. From a first analysis we detected that superficial dust can be separated from the cell layer (thin layer of cells on the glass slide) itself. Then we analyzed 2,295 individual focus points from 51 LBC slides stained for p16 and Ki67. Using the number of edges in a focus point image, specific color values and size-inclusion filters, focus points detecting cells could be distinguished from focus points on artifacts (accuracy 98.6%). Sharpness as total focus quality of a virtual LBC slide is computed from 5 sharpness features. We trained a multi-parameter SVM classifier on 1,600 images. On an independent validation set of 3,232 cell images we achieved an accuracy of 94.8% for classifying images as focused. Our results show that single-layer scanning of LBC slides is possible and how it can be achieved. We assembled focus point analysis and sharpness classification into a fully automatic, iterative workflow, free of user intervention, which performs repetitive slide scanning as necessary. On 400 LBC slides we achieved a scanning-time of 13.9±10.1 min with 29.1±15.5 focus points. In summary, the integration of semantic focus information into whole-slide imaging allows automatic high-quality imaging of LBC slides and subsequent biomarker analysis.
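    A toy version of the focus-point filter described above might combine the same three cues (edge count, colour, object size); the feature names and thresholds below are assumptions for illustration only, not the published classifier.

        def is_cell_focus_point(edge_count, mean_hue, object_area_px,
                                min_edges=150, hue_range=(0.6, 0.9),
                                area_range=(200, 5000)):
            """Accept a focus point only if it shows enough edge structure, a stain-like
            colour and an object size within the expected cell-size window (all values
            illustrative)."""
            return (edge_count >= min_edges
                    and hue_range[0] <= mean_hue <= hue_range[1]
                    and area_range[0] <= object_area_px <= area_range[1])

        print(is_cell_focus_point(edge_count=320, mean_hue=0.72, object_area_px=900))  # True
        print(is_cell_focus_point(edge_count=40,  mean_hue=0.15, object_area_px=60))   # False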

  13. Automated Sequence Processor: Something Old, Something New

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara; Schrock, Mitchell; Fisher, Forest; Himes, Terry

    2012-01-01

    High productivity is required for operations teams to meet schedules, and risk must be minimized. Scripting is used to automate processes, and scripts perform essential operations functions. The Automated Sequence Processor (ASP) was a grass-roots task built to automate the command uplink process; a system engineering task for ASP revitalization was organized. ASP is a set of approximately 200 scripts written in Perl, C Shell, AWK and other scripting languages. ASP processes, checks and packages non-interactive commands automatically. Non-interactive commands are guaranteed to be safe and have been checked by hardware or software simulators. ASP checks that commands are non-interactive, processes the commands through a command simulator and then packages them if there are no errors. ASP must be active 24 hours a day, 7 days a week.

  14. A distinguishing method of printed and handwritten legal amount on Chinese bank check

    NASA Astrophysics Data System (ADS)

    Zhu, Ningbo; Lou, Zhen; Yang, Jingyu

    2003-09-01

    When carrying out optical Chinese character recognition, it is necessary to distinguish printed from handwritten characters at an early phase, because the methods for recognizing these two types of characters differ greatly. In this paper, we propose a method for removing seal imprints, together with criteria for judging whether they should be removed. An approach to clearing up scattered noise fragments after image segmentation is also presented. Four sets of classification features that discriminate between printed and handwritten characters are adopted. The proposed approach was applied to an automatic check processing system and tested on about 9,031 checks. The recognition rate is more than 99.5%.

  15. Communication Satellite Payload Special Check out Equipment (SCOE) for Satellite Testing

    NASA Astrophysics Data System (ADS)

    Subhani, Noman

    2016-07-01

    This paper presents Payload Special Check out Equipment (SCOE) for the test and measurement of communication satellite Payload at subsystem and system level. The main emphasis of this paper is to demonstrate the principle test equipment, instruments and the payload test matrix for an automatic test control. Electrical Ground Support Equipment (EGSE)/ Special Check out Equipment (SCOE) requirements, functions and architecture for C-band and Ku-band payloads are presented in details along with their interface with satellite during different phases of satellite testing. It provides test setup, in a single rack cabinet that can easily be moved from payload assembly and integration environment to thermal vacuum chamber all the way to launch site (for pre-launch test and verification).

  16. Procedural error monitoring and smart checklists

    NASA Technical Reports Server (NTRS)

    Palmer, Everett

    1990-01-01

    Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self and automatic detection of random and unanticipated errors. For self detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make apparent vertical flight planning errors. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a data base of world terrain elevations. Information is given in viewgraph form.
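    The simple reasonableness checks mentioned above lend themselves to a compact sketch; the limits, positions and the terrain lookup below are illustrative assumptions, not an avionics specification.

        def unreasonable_route_edit(old_length_nm, new_length_nm, course_change_deg,
                                    max_stretch=1.5, max_course_change=120.0):
            """Flag a flight-plan modification whose route length or course change is
            implausibly large (illustrative limits)."""
            return (new_length_nm > max_stretch * old_length_nm
                    or abs(course_change_deg) > max_course_change)

        def altitude_below_terrain(planned_alt_ft, lat, lon, terrain_elevation_ft,
                                   clearance_ft=1000.0):
            """Compare the planned altitude with a terrain-elevation lookup for the
            given position (terrain_elevation_ft stands in for a world terrain database)."""
            return planned_alt_ft < terrain_elevation_ft(lat, lon) + clearance_ft

        flat_terrain = lambda lat, lon: 2500.0                       # toy terrain database
        print(unreasonable_route_edit(300.0, 520.0, 35.0))           # True (route stretched)
        print(altitude_below_terrain(3000.0, 47.6, -122.3, flat_terrain))  # True (too low)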

  17. Rainfall, Streamflow, and Water-Quality Data During Stormwater Monitoring, Halawa Stream Drainage Basin, Oahu, Hawaii, July 1, 2003 to June 30, 2004

    USGS Publications Warehouse

    Young, Stacie T.M.; Ball, Marcael T.J.

    2004-01-01

    Storm runoff water-quality samples were collected as part of the State of Hawaii Department of Transportation Stormwater Monitoring Program. This program is designed to assess the effects of highway runoff and urban runoff on Halawa Stream. For this program, rainfall data were collected at two sites, continuous streamflow data at three sites, and water-quality data at five sites, which include the three streamflow sites. This report summarizes rainfall, streamflow, and water-quality data collected between July 1, 2003 and June 30, 2004. A total of 30 samples was collected over four storms during July 1, 2003 to June 30, 2004. In general, an attempt was made to collect grab samples nearly simultaneously at all five sites, and flow-weighted time-composite samples were collected at the three sites equipped with automatic samplers. However, all four storms were partially sampled because either not all stations were sampled or only grab samples were collected. Samples were analyzed for total suspended solids, total dissolved solids, nutrients, chemical oxygen demand, and selected trace metals (cadmium, copper, lead, and zinc). Grab samples were additionally analyzed for oil and grease, total petroleum hydrocarbons, fecal coliform, and biological oxygen demand. Quality-assurance/quality-control samples, collected during storms and during routine maintenance, were also collected to verify analytical procedures and check the effectiveness of equipment-cleaning procedures.

  18. An efficient visualization method for analyzing biometric data

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; McGonagle, Mike; Yates, J. Harlan; Henning, Ronda; Hackett, Jay

    2013-05-01

    We introduce a novel application for biometric data analysis. This technology can be used as part of a unique and systematic approach designed to augment existing processing chains. Our system provides image quality control and analysis capabilities. We show how analysis and efficient visualization are used as part of an automated process. The goal of this system is to provide a unified platform for the analysis of biometric images that reduce manual effort and increase the likelihood of a match being brought to an examiner's attention from either a manual or lights-out application. We discuss the functionality of FeatureSCOPE™ which provides an efficient tool for feature analysis and quality control of biometric extracted features. Biometric databases must be checked for accuracy for a large volume of data attributes. Our solution accelerates review of features by a factor of up to 100 times. Review of qualitative results and cost reduction is shown by using efficient parallel visual review for quality control. Our process automatically sorts and filters features for examination, and packs these into a condensed view. An analyst can then rapidly page through screens of features and flag and annotate outliers as necessary.

  19. Sampling theory and automated simulations for vertical sections, applied to human brain.

    PubMed

    Cruz-Orive, L M; Gelšvartas, J; Roberts, N

    2014-02-01

    In recent years, there have been substantial developments in both magnetic resonance imaging techniques and automatic image analysis software. The purpose of this paper is to develop stereological image sampling theory (i.e. unbiased sampling rules) that can be used by image analysts for estimating geometric quantities such as surface area and volume, and to illustrate its implementation. The methods will ideally be applied automatically on segmented, properly sampled 2D images - although convenient manual application is always an option - and they are of wide applicability in many disciplines. In particular, the vertical sections design to estimate surface area is described in detail and applied to estimate the area of the pial surface and of the boundary between cortex and underlying white matter (i.e. subcortical surface area). For completeness, cortical volume and mean cortical thickness are also estimated. The aforementioned surfaces were triangulated in 3D with the aid of FreeSurfer software, which provided accurate surface area measures that served as gold standards. Furthermore, a software was developed to produce digitized trace curves of the triangulated target surfaces automatically from virtual sections. From such traces, a new method (called the 'lambda method') is presented to estimate surface area automatically. In addition, with the new software, intersections could be counted automatically between the relevant surface traces and a cycloid test grid for the classical design. This capability, together with the aforementioned gold standard, enabled us to thoroughly check the performance and the variability of the different estimators by Monte Carlo simulations for studying the human brain. In particular, new methods are offered to split the total error variance into the orientations, sectioning and cycloid components. The latter prediction was hitherto unavailable--one is proposed here and checked by way of simulations on a given set of digitized vertical sections with automatically superimposed cycloid grids of three different sizes. Concrete and detailed recommendations are given to implement the methods. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
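    For reference, the relation underlying the classical cycloid-counting design mentioned above (as commonly stated in the stereology literature, assuming vertical sections with cycloid test curves whose minor axis is parallel to the vertical axis; this is the classical estimator, not the paper's new lambda method) can be written as:

        S_V = 2\, I_L = 2\,\frac{\sum_i I_i}{\sum_i L_i}, \qquad S = S_V \cdot V_{\mathrm{ref}}

    where I_i is the number of intersections counted between the surface traces and the cycloid grid on vertical section i, L_i is the total cycloid test-curve length applied at tissue scale, S_V is the surface density and V_ref the reference volume.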

  20. 21 CFR 211.68 - Automatic, mechanical, and electronic equipment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... SERVICES (CONTINUED) DRUGS: GENERAL CURRENT GOOD MANUFACTURING PRACTICE FOR FINISHED PHARMACEUTICALS... satisfactorily, may be used in the manufacture, processing, packing, and holding of a drug product. If such... designed to assure proper performance. Written records of those calibration checks and inspections shall be...

  1. Automated software system for checking the structure and format of ACM SIG documents

    NASA Astrophysics Data System (ADS)

    Mirza, Arsalan Rahman; Sah, Melike

    2017-04-01

    Microsoft (MS) Office Word is one of the most commonly used software tools for creating documents. MS Word 2007 and above uses XML to represent the structure of MS Word documents. Metadata about the documents are automatically created using Office Open XML (OOXML) syntax. We develop a new framework, called ADFCS (Automated Document Format Checking System), that takes advantage of the OOXML metadata in order to extract semantic information from MS Office Word documents. In particular, we develop a new ontology for Association for Computing Machinery (ACM) Special Interest Group (SIG) documents, representing the structure and format of these documents using OWL (Web Ontology Language). Then, the metadata is extracted automatically in RDF (Resource Description Framework) according to this ontology using the developed software. Finally, we generate extensive rules in order to infer whether the documents are formatted according to ACM SIG standards. This paper introduces the ACM SIG ontology, the metadata extraction process, the inference engine, the ADFCS online user interface, the system evaluation and user study evaluations.
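    Reading OOXML structure of the kind ADFCS checks can be illustrated with the minimal sketch below; it is not the ADFCS implementation, only a sketch of extracting paragraph styles from the standard word/document.xml part of a .docx file (the file name and the style rule in the usage comment are hypothetical).

        import zipfile
        import xml.etree.ElementTree as ET

        W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

        def paragraph_styles(docx_path):
            """List (style, text) pairs of paragraphs in a .docx file by reading the
            OOXML part word/document.xml (illustrative of the kind of metadata a
            format checker can extract)."""
            with zipfile.ZipFile(docx_path) as zf:
                root = ET.fromstring(zf.read("word/document.xml"))
            out = []
            for p in root.iter(W + "p"):
                style_el = p.find(f"{W}pPr/{W}pStyle")
                style = style_el.get(W + "val") if style_el is not None else "Normal"
                text = "".join(t.text or "" for t in p.iter(W + "t"))
                out.append((style, text))
            return out

        # Hypothetical usage: flag a document whose first paragraph is not styled as a title.
        # styles = paragraph_styles("paper.docx")
        # print("title missing" if styles and styles[0][0] != "Title" else "ok")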

  2. Development of an analysis tool for cloud base height and visibility

    NASA Astrophysics Data System (ADS)

    Umdasch, Sarah; Reinhold, Steinacker; Manfred, Dorninger; Markus, Kerschbaum; Wolfgang, Pöttschacher

    2014-05-01

    The meteorological variables cloud base height (CBH) and horizontal atmospheric visibility (VIS) at surface level are of vital importance for safety and effectiveness in aviation. Around 20% of all civil aviation accidents in the USA from 2003 to 2007 were due to weather related causes, around 18% of which were owing to decreased visibility or ceiling (main CBH). The aim of this study is to develop a system generating quality-controlled gridded analyses of the two parameters based on the integration of various kinds of observational data. Upon completion, the tool is planned to provide guidance for nowcasting during take-off and landing as well as for flights operated under visual flight rules. Primary input data consists of manual as well as instrumental observation of CBH and VIS. In Austria, restructuring of part of the standard meteorological stations from human observation to automatic measurement of VIS and CBH is currently in progress. As ancillary data, satellite derived products can add 2-dimensional information, e.g. Cloud Type by NWC SAF (Nowcasting Satellite Application Facilities) MSG (Meteosat Second Generation). Other useful available data are meteorological surface measurements (in particular of temperature, humidity, wind and precipitation), radiosonde, radar and high resolution topography data. A one-year data set is used to study the spatial and weather-dependent representativeness of the CBH and VIS measurements. The VERA (Vienna Enhanced Resolution Analysis) system of the Institute of Meteorology and Geophysics of the University of Vienna provides the framework for the analysis development. Its integrated "Fingerprint" technique allows the insertion of empirical prior knowledge and ancillary information in the form of spatial patterns. Prior to the analysis, a quality control of input data is performed. For CBH and VIS, quality control can consist of internal consistency checks between different data sources. The possibility of two-dimensional consistency checks has to be explored. First results in the development of quality control features and fingerprints will be shown.

  3. Automatic retinal interest evaluation system (ARIES).

    PubMed

    Yin, Fengshou; Wong, Damon Wing Kee; Yow, Ai Ping; Lee, Beng Hai; Quan, Ying; Zhang, Zhuo; Gopalakrishnan, Kavitha; Li, Ruoying; Liu, Jiang

    2014-01-01

    In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases such as glaucoma, age-related macular degeneration and diabetic retinopathy. However, in practice, retinal image quality is a big concern as automatic systems without consideration of degraded image quality will likely generate unreliable results. In this paper, an automatic retinal image quality assessment system (ARIES) is introduced to assess both image quality of the whole image and focal regions of interest. ARIES achieves 99.54% accuracy in distinguishing fundus images from other types of images through a retinal image identification step in a dataset of 35342 images. The system employs high level image quality measures (HIQM) to perform image quality assessment, and achieves areas under curve (AUCs) of 0.958 and 0.987 for whole image and optic disk region respectively in a testing dataset of 370 images. ARIES acts as a form of automatic quality control which ensures good quality images are used for processing, and can also be used to alert operators of poor quality images at the time of acquisition.

  4. Generation and use of observational data patterns in the evaluation of data quality for AmeriFlux and FLUXNET

    NASA Astrophysics Data System (ADS)

    Pastorello, G.; Agarwal, D.; Poindexter, C.; Papale, D.; Trotta, C.; Ribeca, A.; Canfora, E.; Faybishenko, B.; Gunter, D.; Chu, H.

    2015-12-01

    The flux measurement sites that are part of AmeriFlux are operated and maintained in a fairly independent fashion, both in terms of scientific goals and operational practices. This is also the case for most sites from other networks in FLUXNET. This independence leads to a degree of heterogeneity in the data sets collected at the sites, which is also reflected in data quality levels. The generation of derived data products and data synthesis efforts, two of the main goals of these networks, are directly affected by the heterogeneity in data quality. In a collaborative effort between AmeriFlux and ICOS, a series of quality checks are being conducted for the data sets before any network-level data processing and product generation take place. From these checks, a set of common data issues were identified, and are being cataloged and classified into data quality patterns. These patterns are now being used as a basis for implementing automation for certain data quality checks, speeding up the process of applying the checks and evaluating the data. Currently, most data checks are performed individually in each data set, requiring visual inspection and inputs from a data curator. This manual process makes it difficult to scale the quality checks, creating a bottleneck for the data processing. One goal of the automated checks is to free up the time of data curators so they can focus on new or less common issues. As new issues are identified, they can also be cataloged and classified, extending the coverage of existing patterns or potentially generating new patterns, helping both to improve existing automated checks and to create new ones. This approach is helping make data quality evaluation faster, more systematic, and reproducible. Furthermore, these patterns are also helping with documenting common causes and solutions for data problems. This can help tower teams with diagnosing problems in data collection and processing, and also in correcting historical data sets. In this presentation, using AmeriFlux flux and micrometeorological data, we discuss our approach to creating observational data patterns, and how we are using them to implement new automated checks. We also detail examples of these observational data patterns, illustrating how they are being used.

  5. A conceptual study of automatic and semi-automatic quality assurance techniques for round image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.

  6. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR FORM QA AND QC CHECKS (UA-C-2.0)

    EPA Science Inventory

    The purpose of this SOP is to outline the process of field quality assurance and quality control checks. This procedure was followed to ensure consistent data retrieval during the Arizona NHEXAS project and the Border study. Keywords: custody; QA/QC; field checks.

    The U.S.-Mex...

  7. The charging security study of electric vehicle charging spot based on automatic testing platform

    NASA Astrophysics Data System (ADS)

    Li, Yulan; Yang, Zhangli; Zhu, Bin; Ran, Shengyi

    2018-03-01

    With the increasing number of charging spots, testing of charging security and interoperability becomes more and more urgent and important. In this paper, an interface simulator for AC charging tests is designed, and an automatic testing platform for electric vehicle charging spots is set up and used to test and analyze abnormal states during the charging process. On the platform, the charging security and interoperability of AC charging spots and IC-CPD can be checked efficiently, and the test report can be generated automatically with no manual reading errors. The test results show that the main reason a charging spot fails is that the power supply cannot be cut off within the prescribed time when a charging anomaly occurs.

  8. 49 CFR 195.450 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... remote control valve as follows: (1) Check valve means a valve that permits fluid to flow freely in one direction and contains a mechanism to automatically prevent flow in the other direction. (2) Remote control.... The RCV is usually operated by the supervisory control and data acquisition (SCADA) system. The...

  9. 49 CFR 195.450 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... remote control valve as follows: (1) Check valve means a valve that permits fluid to flow freely in one direction and contains a mechanism to automatically prevent flow in the other direction. (2) Remote control.... The RCV is usually operated by the supervisory control and data acquisition (SCADA) system. The...

  10. 49 CFR 195.450 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... remote control valve as follows: (1) Check valve means a valve that permits fluid to flow freely in one direction and contains a mechanism to automatically prevent flow in the other direction. (2) Remote control.... The RCV is usually operated by the supervisory control and data acquisition (SCADA) system. The...

  11. 49 CFR 195.450 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... remote control valve as follows: (1) Check valve means a valve that permits fluid to flow freely in one direction and contains a mechanism to automatically prevent flow in the other direction. (2) Remote control.... The RCV is usually operated by the supervisory control and data acquisition (SCADA) system. The...

  12. 49 CFR 195.450 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... remote control valve as follows: (1) Check valve means a valve that permits fluid to flow freely in one direction and contains a mechanism to automatically prevent flow in the other direction. (2) Remote control.... The RCV is usually operated by the supervisory control and data acquisition (SCADA) system. The...

  13. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    PubMed Central

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non-human animal behaviour science. Further improvements and validation are needed, and future applications and limitations are discussed. PMID:27415814

  14. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    PubMed

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non-human animal behaviour science. Further improvements and validation are needed, and future applications and limitations are discussed.

  15. Data-driven management using quantitative metric and automatic auditing program (QMAP) improves consistency of radiation oncology processes.

    PubMed

    Yu, Naichang; Xia, Ping; Mastroianni, Anthony; Kolar, Matthew D; Chao, Samuel T; Greskovich, John F; Suh, John H

    Process consistency in planning and delivery of radiation therapy is essential to maintain patient safety and treatment quality and efficiency. Ensuring the timely completion of each critical clinical task is one aspect of process consistency. The purpose of this work is to report our experience in implementing a quantitative metric and automatic auditing program (QMAP) with a goal of improving the timely completion of critical clinical tasks. Based on our clinical electronic medical records system, we developed a software program to automatically capture the completion timestamp of each critical clinical task while providing frequent alerts of potential delinquency. These alerts were directed to designated triage teams within a time window that would offer an opportunity to mitigate the potential for late completion. Since July 2011, 18 metrics were introduced in our clinical workflow. We compared the delinquency rates for 4 selected metrics before implementation with the delinquency rates in 2016. A one-tailed Student t test was used for statistical analysis. RESULTS: With an average of 150 daily patients on treatment at our main campus, the late treatment plan completion rate and late weekly physics check rate were reduced from 18.2% and 8.9% in 2011 to 4.2% and 0.1% in 2016, respectively (P < .01). The late weekly on-treatment physician visit rate was reduced from 7.2% in 2012 to <1.6% in 2016. The yearly late cone beam computed tomography review rate was reduced from 1.6% in 2011 to <0.1% in 2016. QMAP is effective in reducing late completions of critical tasks, which can positively impact treatment quality and patient safety by reducing the potential for errors resulting from distractions, interruptions, and rush in completion of critical tasks. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
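
    The heart of QMAP is a comparison of each task's completion timestamp against a per-metric deadline, with an alert when the deadline is at risk or missed. The sketch below illustrates that idea only; the task names, deadlines, and record layout are hypothetical and are not taken from the paper.

      from datetime import datetime, timedelta

      # Hypothetical deadlines (in days) for two critical tasks; the real QMAP
      # metrics and time windows are defined by the clinic and are not shown here.
      DEADLINES = {"plan_completion": 3, "weekly_physics_check": 7}

      def find_delinquent(tasks, now=None):
          """Return tasks whose completion timestamp is missing or past its deadline.

          Each task is a dict with keys: name, due_start (datetime),
          completed (datetime or None).
          """
          now = now or datetime.now()
          delinquent = []
          for t in tasks:
              deadline = t["due_start"] + timedelta(days=DEADLINES[t["name"]])
              done = t["completed"]
              if done is None and now > deadline:
                  delinquent.append((t["name"], "not completed, past deadline"))
              elif done is not None and done > deadline:
                  delinquent.append((t["name"], "completed late"))
          return delinquent

      # Example: one overdue plan, one physics check completed on time.
      tasks = [
          {"name": "plan_completion", "due_start": datetime(2016, 5, 2), "completed": None},
          {"name": "weekly_physics_check", "due_start": datetime(2016, 5, 2),
           "completed": datetime(2016, 5, 6)},
      ]
      print(find_delinquent(tasks, now=datetime(2016, 5, 9)))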

  16. Double checking medicines: defence against error or contributory factor?

    PubMed

    Armitage, Gerry

    2008-08-01

    The double checking of medicines in health care is a contestable procedure. It occupies an obvious position in health care practice and is understood to be an effective defence against medication error but the process is variable and the outcomes have not been exposed to testing. This paper presents an appraisal of the process using data from part of a larger study on the contributory factors in medication errors and their reporting. Previous research studies are reviewed; data are analysed from a review of 991 drug error reports and a subsequent series of 40 in-depth interviews with health professionals in an acute hospital in northern England. The incident reports showed that errors occurred despite double checking but that action taken did not appear to investigate the checking process. Most interview participants (34) talked extensively about double checking but believed the process to be inconsistent. Four key categories were apparent: deference to authority, reduction of responsibility, automatic processing and lack of time. Solutions to the problems were also offered, which are discussed with several recommendations. Double checking medicines should be a selective and systematic procedure informed by key principles and encompassing certain behaviours. Psychological research may be instructive in reducing checking errors but the aviation industry may also have a part to play in increasing error wisdom and reducing risk.

  17. Facilitating quality control for spectra assignments of small organic molecules: nmrshiftdb2--a free in-house NMR database with integrated LIMS for academic service laboratories.

    PubMed

    Kuhn, Stefan; Schlörer, Nils E

    2015-08-01

    With its laboratory information management system, nmrshiftdb2 supports the integration of electronic lab administration and management into academic NMR facilities. It also offers the setup of a local database, while granting full access to nmrshiftdb2's World Wide Web database. For lab users, this freely available system allows the submission of orders for measurement, transfers recorded data automatically or manually, enables download of spectra via a web interface, and provides integrated access to the prediction, search, and assignment tools of the NMR database. For staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics function for accounting purposes, and a 'QuickCheck' function for assignment control, to facilitate quality control of assignments submitted to the (local) database. The laboratory information management system and database use a web interface as front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.

  18. Slicing AADL Specifications for Model Checking

    NASA Technical Reports Server (NTRS)

    Odenbrett, Maximilian; Nguyen, Viet Yen; Noll, Thomas

    2010-01-01

    To combat the state-space explosion problem in model checking larger systems, abstraction techniques can be employed. Here, methods that operate on the system specification before constructing its state space are preferable to those that try to minimize the resulting transition system as they generally reduce peak memory requirements. We sketch a slicing algorithm for system specifications written in (a variant of) the Architecture Analysis and Design Language (AADL). Given a specification and a property to be verified, it automatically removes those parts of the specification that are irrelevant for model checking the property, thus reducing the size of the corresponding transition system. The applicability and effectiveness of our approach is demonstrated by analyzing the state-space reduction for an example, employing a translator from AADL to Promela, the input language of the SPIN model checker.

  19. A Discussion of Issues in Integrity Constraint Monitoring

    NASA Technical Reports Server (NTRS)

    Fernandez, Francisco G.; Gates, Ann Q.; Cooke, Daniel E.

    1998-01-01

    In the development of large-scale software systems, analysts, designers, and programmers identify properties of data objects in the system. The ability to check those assertions during runtime is desirable as a means of verifying the integrity of the program. Typically, programmers ensure the satisfaction of such properties through the use of some form of manually embedded assertion check. The disadvantage to this approach is that these assertions become entangled within the program code. The goal of the research is to develop an integrity constraint monitoring mechanism whereby a repository of software system properties (called integrity constraints) is automatically inserted into the program by the mechanism to check for incorrect program behaviors. Such a mechanism would overcome many of the deficiencies of manually embedded assertion checks. This paper gives an overview of the preliminary work performed toward this goal. The manual instrumentation of constraint checking on a series of test programs is discussed. This review is then used as the basis for a discussion of issues to be considered in developing an automated integrity constraint monitor.
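
    The contrast between manually embedded assertions and constraints attached from outside the function body can be made concrete with a small sketch. The decorator below is purely illustrative of that idea; it is not the monitoring mechanism proposed by the authors, and the constraint and data source are invented.

      import functools

      def constrain(post):
          """Attach an integrity constraint (a predicate on the return value)
          to a function, instead of entangling assert statements in its body."""
          def decorator(fn):
              @functools.wraps(fn)
              def wrapper(*args, **kwargs):
                  result = fn(*args, **kwargs)
                  if not post(result):
                      raise AssertionError(
                          f"integrity constraint violated in {fn.__name__}: {result!r}")
                  return result
              return wrapper
          return decorator

      # Hypothetical constraint: all returned temperatures lie in a physical range.
      @constrain(lambda temps: all(-90.0 <= t <= 60.0 for t in temps))
      def read_temperatures():
          return [21.5, 19.8, 22.1]  # stand-in data source

      print(read_temperatures())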

  20. 40 CFR 51.363 - Quality assurance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... test, the evaporative system tests, and emission control component checks (as applicable); (vi...) A check of the Constant Volume Sampler flow calibration; (5) A check for the optimization of the... selection, and power absorption; (9) A check of the system's ability to accurately detect background...

  1. Temporal Specification and Verification of Real-Time Systems.

    DTIC Science & Technology

    1991-08-30

    ... of concrete real-time systems can be modeled adequately. Specification: We present two conservative extensions of temporal logic that allow for the...logic. We present both model-checking algorithms for the automatic verification of finite-state real-time systems and proof methods for the deductive verification of real-time systems.

  2. Teaching Statistics with Minitab II.

    ERIC Educational Resources Information Center

    Ryan, T. A., Jr.; And Others

    Minitab is a statistical computing system which uses simple language, produces clear output, and keeps track of bookkeeping automatically. Error checking with English diagnostics and inclusion of several default options help to facilitate use of the system by students. Minitab II is an improved and expanded version of the original Minitab which…

  3. Fault detection monitor circuit provides ''self-heal capability'' in electronic modules - A concept

    NASA Technical Reports Server (NTRS)

    Kennedy, J. J.

    1970-01-01

    Self-checking technique detects defective solid state modules used in electronic test and checkout instrumentation. A ten bit register provides failure monitor and indication for 1023 comparator circuits, and the automatic fault-isolation capability permits the electronic subsystems to be repaired by replacing the defective module.

  4. Data quality assessment for comparative effectiveness research in distributed data networks

    PubMed Central

    Brown, Jeffrey; Kahn, Michael; Toh, Sengwee

    2015-01-01

    Background: Electronic health information routinely collected during healthcare delivery and reimbursement can help address the need for evidence about the real-world effectiveness, safety, and quality of medical care. Often, distributed networks that combine information from multiple sources are needed to generate this real-world evidence. Objective: We provide a set of field-tested best practices and a set of recommendations for data quality checking for comparative effectiveness research (CER) in distributed data networks. Methods: We explore the requirements for data quality checking and describe data quality approaches undertaken by several existing multi-site networks. Results: There are no established standards regarding how to evaluate the quality of electronic health data for CER within distributed networks. Data checks of increasing complexity are often employed, ranging from consistency with syntactic rules to evaluation of semantics and consistency within and across sites. Temporal trends within and across sites are widely used, as are checks of each data refresh or update. Rates of specific events and exposures by age group, sex, and month are also common. Discussion: Secondary use of electronic health data for CER holds promise but is complex, especially in distributed data networks that incorporate periodic data refreshes. The viability of a learning health system is dependent on a robust understanding of the quality, validity, and optimal secondary uses of routinely collected electronic health data within distributed health data networks. Robust data quality checking can strengthen confidence in findings based on distributed data networks. PMID:23793049
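
    One of the checks described above, rates of specific events by month, can be sketched with a simple z-score against a site's own history. The threshold and the toy data are assumptions for illustration, not a recommendation from the paper.

      from collections import Counter
      from statistics import mean, stdev

      def monthly_rate_check(event_months, z_threshold=3.0):
          """Flag months whose event counts deviate strongly from the site's history.

          event_months is a list of 'YYYY-MM' strings, one entry per recorded event.
          A z-score against the per-month mean stands in for the richer temporal
          trend checks used by real networks.
          """
          counts = Counter(event_months)
          values = list(counts.values())
          if len(values) < 3:
              return []
          mu, sd = mean(values), stdev(values)
          if sd == 0:
              return []
          return [(m, c) for m, c in sorted(counts.items())
                  if abs(c - mu) / sd > z_threshold]

      # Eleven unremarkable months followed by a suspicious spike in December.
      events = [f"2014-{m:02d}" for m in range(1, 12) for _ in range(100)]
      events += ["2014-12"] * 430
      print(monthly_rate_check(events))  # [('2014-12', 430)]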

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopan, O; Yang, F; Ford, E

    Purpose: The physics plan check verifies various aspects of a treatment plan after dosimetrists have finished creating the plan. Some errors in the plan which are caught by the physics check could be caught earlier in the departmental workflow. The purpose of this project was to evaluate a plan checking script that can be run within the treatment planning system (TPS) by the dosimetrists prior to plan approval and export to the record and verify system. Methods: A script was created in the Pinnacle TPS to automatically check 15 aspects of a plan for clinical practice conformity. The script outputs a list of checks which the plan has passed and a list of checks which the plan has failed so that appropriate adjustments can be made. For this study, the script was run on a total of 108 plans: IMRT (46/108), VMAT (35/108) and SBRT (27/108). Results: Of the plans checked by the script, 77/108 (71%) failed at least one of the fifteen checks. IMRT plans resulted in more failed checks (91%) than VMAT (51%) or SBRT (63%), due to the high failure rate of an IMRT-specific check, which verifies that no IMRT segment has < 5 MU. The dose grid size and couch removal checks caught errors in 10% and 14% of all plans – errors that ultimately may have resulted in harm to the patient. Conclusion: Approximately three-fourths of the plans examined contained errors that could be caught by dosimetrists running an automated script embedded in the TPS. The results of this study will improve the departmental workflow by cutting down on the number of plans that, due to these types of errors, necessitate re-planning and re-approval, increase dosimetrist and physician workload and, in urgent cases, inconvenience patients by causing treatment delays.
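
    The abstract names a few of the 15 rules, for example that no IMRT segment should deliver fewer than 5 MU, plus checks of the dose grid and couch removal. A rule-based check of that kind could look roughly like the sketch below; the plan dictionary, field names, and thresholds are invented for illustration and do not reflect the Pinnacle scripting interface.

      def check_plan(plan):
          """Run a few rule-based checks on a simplified plan dictionary and
          return (passed, failed) lists of check names."""
          checks = {
              "min_imrt_segment_mu": all(mu >= 5.0 for mu in plan.get("segment_mu", [])),
              "dose_grid_size": plan.get("dose_grid_mm", 99.0) <= 4.0,
              "couch_removed": plan.get("couch_removed", False),
          }
          passed = [name for name, ok in checks.items() if ok]
          failed = [name for name, ok in checks.items() if not ok]
          return passed, failed

      plan = {"segment_mu": [12.0, 4.2, 8.7], "dose_grid_mm": 3.0, "couch_removed": True}
      print(check_plan(plan))  # the 4.2 MU segment fails the IMRT segment check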

  6. An E-health solution for automatic sleep classification according to Rechtschaffen and Kales: validation study of the Somnolyzer 24 x 7 utilizing the Siesta database.

    PubMed

    Anderer, Peter; Gruber, Georg; Parapatics, Silvia; Woertz, Michael; Miazhynskaia, Tatiana; Klosch, Gerhard; Saletu, Bernd; Zeitlhofer, Josef; Barbanoj, Manuel J; Danker-Hopfe, Heidi; Himanen, Sari-Leena; Kemp, Bob; Penzel, Thomas; Grozinger, Michael; Kunz, Dieter; Rappelsberger, Peter; Schlogl, Alois; Dorffner, Georg

    2005-01-01

    To date, the only standard for the classification of sleep-EEG recordings that has found worldwide acceptance are the rules published in 1968 by Rechtschaffen and Kales. Even though several attempts have been made to automate the classification process, so far no method has been published that has proven its validity in a study including a sufficiently large number of controls and patients of all adult age ranges. The present paper describes the development and optimization of an automatic classification system that is based on one central EEG channel, two EOG channels and one chin EMG channel. It adheres to the decision rules for visual scoring as closely as possible and includes a structured quality control procedure by a human expert. The final system (Somnolyzer 24 x 7) consists of a raw data quality check, a feature extraction algorithm (density and intensity of sleep/wake-related patterns such as sleep spindles, delta waves, SEMs and REMs), a feature matrix plausibility check, a classifier designed as an expert system, a rule-based smoothing procedure for the start and the end of stages REM, and finally a statistical comparison to age- and sex-matched normal healthy controls (Siesta Spot Report). The expert system considers different prior probabilities of stage changes depending on the preceding sleep stage, the occurrence of a movement arousal and the position of the epoch within the NREM/REM sleep cycles. Moreover, results obtained with and without using the chin EMG signal are combined. The Siesta polysomnographic database (590 recordings in both normal healthy subjects aged 20-95 years and patients suffering from organic or nonorganic sleep disorders) was split into two halves, which were randomly assigned to a training and a validation set, respectively. The final validation revealed an overall epoch-by-epoch agreement of 80% (Cohen's kappa: 0.72) between the Somnolyzer 24 x 7 and the human expert scoring, as compared with an inter-rater reliability of 77% (Cohen's kappa: 0.68) between two human experts scoring the same dataset. Two Somnolyzer 24 x 7 analyses (including a structured quality control by two human experts) revealed an inter-rater reliability close to 1 (Cohen's kappa: 0.991), which confirmed that the variability induced by the quality control procedure, whereby approximately 1% of the epochs (in 9.5% of the recordings) are changed, can definitely be neglected. Thus, the validation study proved the high reliability and validity of the Somnolyzer 24 x 7 and demonstrated its applicability in clinical routine and sleep studies.
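
    The agreement figures above are epoch-by-epoch percentages and Cohen's kappa. For readers unfamiliar with the statistic, the short sketch below computes kappa for two scorings of the same epochs; the toy stagings are invented and have nothing to do with the Siesta database.

      from collections import Counter

      def cohens_kappa(scorer_a, scorer_b):
          """Cohen's kappa for two equal-length sequences of categorical labels."""
          n = len(scorer_a)
          observed = sum(a == b for a, b in zip(scorer_a, scorer_b)) / n
          freq_a, freq_b = Counter(scorer_a), Counter(scorer_b)
          expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
          return (observed - expected) / (1 - expected)

      # Toy epoch-by-epoch stagings (W = wake, R = REM, N2/N3 = NREM stages).
      auto  = ["W", "N2", "N2", "N3", "R", "R", "N2", "W"]
      human = ["W", "N2", "N3", "N3", "R", "N2", "N2", "W"]
      print(round(cohens_kappa(auto, human), 2))  # 0.66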

  7. Stereotactic radiation treatment planning and follow-up studies involving fused multimodality imaging.

    PubMed

    Hamm, Klaus D; Surber, Gunnar; Schmücking, Michael; Wurm, Reinhard E; Aschenbach, Rene; Kleinert, Gabriele; Niesen, A; Baum, Richard P

    2004-11-01

    Innovative new software solutions may enable image fusion to produce the desired data superposition for precise target definition and follow-up studies in radiosurgery/stereotactic radiotherapy in patients with intracranial lesions. The aim is to integrate the anatomical and functional information completely into the radiation treatment planning and to achieve an exact comparison for follow-up examinations. Special conditions and advantages of BrainLAB's fully automatic image fusion system are evaluated and described for this purpose. In 458 patients, the radiation treatment planning and some follow-up studies were performed using an automatic image fusion technique involving the use of different imaging modalities. Each fusion was visually checked and corrected as necessary. The computerized tomography (CT) scans for radiation treatment planning (slice thickness 1.25 mm), as well as stereotactic angiography for arteriovenous malformations, were acquired using head fixation with stereotactic arc or, in the case of stereotactic radiotherapy, with a relocatable stereotactic mask. Different magnetic resonance (MR) imaging sequences (T1, T2, and fluid-attenuated inversion-recovery images) and positron emission tomography (PET) scans were obtained without head fixation. Fusion results and the effects on radiation treatment planning and follow-up studies were analyzed. The precision level of the results of the automatic fusion depended primarily on the image quality, especially the slice thickness and the field homogeneity when using MR images, as well as on patient movement during data acquisition. Fully automated image fusion of different MR, CT, and PET studies was performed for each patient. Only in a few cases was it necessary to correct the fusion manually after visual evaluation. These corrections were minor and did not materially affect treatment planning. High-quality fusion of thin slices of a region of interest with a complete head data set could be performed easily. The target volume for radiation treatment planning could be accurately delineated using multimodal information provided by CT, MR, angiography, and PET studies. The fusion of follow-up image data sets yielded results that could be successfully compared and quantitatively evaluated. Depending on the quality of the originally acquired image, automated image fusion can be a very valuable tool, allowing for fast (approximately 1-2 minute) and precise fusion of all relevant data sets. Fused multimodality imaging improves the target volume definition for radiation treatment planning. High-quality follow-up image data sets should be acquired for image fusion to provide exactly comparable slices and volumetric results that will contribute to quality control.

  8. Rainfall, Streamflow, and Water-Quality Data During Stormwater Monitoring, Halawa Stream Drainage Basin, Oahu, Hawaii, July 1, 2004 to June 30, 2005

    USGS Publications Warehouse

    Young, Stacie T.M.; Ball, Marcael T.J.

    2005-01-01

    Storm runoff water-quality samples were collected as part of the State of Hawaii Department of Transportation Stormwater Monitoring Program. This program is designed to assess the effects of highway runoff and urban runoff on Halawa Stream. For this program, rainfall data were collected at two stations, continuous streamflow data at two stations, and water-quality data at five stations, which include the two continuous streamflow stations. This report summarizes rainfall, streamflow, and water-quality data collected between July 1, 2004 and June 30, 2005. A total of 15 samples was collected over three storms during July 1, 2004 to June 30, 2005. In general, an attempt was made to collect grab samples nearly simultaneously at all five stations and flow-weighted time-composite samples at the three stations equipped with automatic samplers. However, all three storms were partially sampled because either not all stations were sampled or not all composite samples were collected. Samples were analyzed for total suspended solids, total dissolved solids, nutrients, chemical oxygen demand, and selected trace metals (cadmium, chromium, copper, lead, nickel, and zinc). Chromium and nickel were added to the analysis starting October 1, 2004. Grab samples were additionally analyzed for oil and grease, total petroleum hydrocarbons, fecal coliform, and biological oxygen demand. Quality-assurance/quality-control samples were also collected during storms and during routine maintenance to verify analytical procedures and check the effectiveness of equipment-cleaning procedures.

  9. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.

  10. Take the Reins on Model Quality with ModelCHECK and Gatekeeper

    NASA Technical Reports Server (NTRS)

    Jones, Corey

    2012-01-01

    Model quality and consistency has been an issue for us due to the diverse experience level and imaginative modeling techniques of our users. Fortunately, setting up ModelCHECK and Gatekeeper to enforce our best practices has helped greatly, but it wasn't easy. There were many challenges associated with setting up ModelCHECK and Gatekeeper including: limited documentation, restrictions within ModelCHECK, and resistance from end users. However, we consider ours a success story. In this presentation we will describe how we overcame these obstacles and present some of the details of how we configured them to work for us.

  11. STS-34 onboard view of iodine comparator assembly used to check water quality

    NASA Technical Reports Server (NTRS)

    1989-01-01

    STS-34 closeup view taken onboard Atlantis, Orbiter Vehicle (OV) 104, is of the iodine comparator assembly. Potable water quality is checked by comparing the water color to the color chart on the surrounding board.

  12. NET-VISA, a Bayesian method next-generation automatic association software. Latest developments and operational assessment.

    NASA Astrophysics Data System (ADS)

    Le Bras, Ronan; Kushida, Noriyuki; Mialle, Pierrick; Tomuta, Elena; Arora, Nimar

    2017-04-01

    The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been developing a Bayesian method and software to perform the key step of automatic association of seismological, hydroacoustic, and infrasound (SHI) parametric data. In our preliminary testing in the CTBTO, NET-VISA shows much better performance than its currently operating automatic association module, with the rate of automatic events matching the analyst-reviewed events increased by 10%, signifying that the percentage of missed events is lowered by 40%. Initial tests involving analysts also showed that the new software will complete the automatic bulletins of the CTBTO by adding previously missed events. Because products of the CTBTO are widely distributed to its member States as well as throughout the seismological community, the introduction of a new technology must be carried out carefully, and the first step of operational integration is to use NET-VISA results within the interactive analysts' software so that the analysts can check the robustness of the Bayesian approach. We report on the latest results, both on the progress of automatic processing and on the initial introduction of NET-VISA results into the analyst review process.

  13. 77 FR 4070 - Self-Regulatory Organizations; C2 Options Exchange, Incorporated; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-26

    ... all the option series have the same expiration.\\7\\ The functionality is designed to detect scenarios... designed to prevent incoming orders from automatically executing at potentially erroneous prices. These price check parameter features are designed to help maintain a fair and orderly market. The Exchange is...

  14. The Optimization of Automatically Generated Compilers.

    DTIC Science & Technology

    1987-01-01

    than their procedural counterparts, and are also easier to analyze for storage optimizations; (2) AGs can be algorithmically checked to be non-circular... Providing algorithms to move the storage for many attributes from the structure tree into global stacks and variables; creating AEs which build and...

  15. SWAT Check: A screening tool to assist users in the identification of potential model application problems

    USDA-ARS?s Scientific Manuscript database

    The Soil and Water Assessment Tool (SWAT) is a basin scale hydrologic model developed by the US Department of Agriculture-Agricultural Research Service. SWAT's broad applicability, user friendly model interfaces, and automatic calibration software have led to a rapid increase in the number of new u...

  16. Optical Detection Of Cryogenic Leaks

    NASA Technical Reports Server (NTRS)

    Wyett, Lynn M.

    1988-01-01

    Conceptual system identifies leakage without requiring shutdown for testing. Proposed device detects and indicates leaks of cryogenic liquids automatically. Detector makes it unnecessary to shut equipment down so it can be checked for leakage by soap-bubble or helium-detection methods. Not necessary to mix special gases or other materials with cryogenic liquid flowing through equipment.

  17. Timing for Athletics at the Olympic Games

    ERIC Educational Resources Information Center

    Marshall, Steve

    2012-01-01

    Video photography of races is now routinely used in international running events to provide automatic recording of the position within each race achieved by the athletes, as well as the time taken in order to check whether records have been broken. This article describes how several cameras provide the evidence that is collated using computers and…

  18. Handbook for Bombardiers

    DTIC Science & Technology

    1944-02-08

    action of the levelling knobs. Cage gyro. h. Test the automatic release through the bomb racks. i. Check telescope motor. (1) Turn telescope motor switch... [OCR residue: data table, Figure 26, and page headers omitted] SECTION II: HIGH ALTITUDE BOMBING OF MANEUVERING...

  19. Controlling state explosion during automatic verification of delay-insensitive and delay-constrained VLSI systems using the POM verifier

    NASA Technical Reports Server (NTRS)

    Probst, D.; Jensen, L.

    1991-01-01

    Delay-insensitive VLSI systems have a certain appeal on the ground due to difficulties with clocks; they are even more attractive in space. We answer the question, is it possible to control state explosion arising from various sources during automatic verification (model checking) of delay-insensitive systems? State explosion due to concurrency is handled by introducing a partial-order representation for systems, and defining system correctness as a simple relation between two partial orders on the same set of system events (a graph problem). State explosion due to nondeterminism (chiefly arbitration) is handled when the system to be verified has a clean, finite recurrence structure. Backwards branching is a further optimization. The heart of this approach is the ability, during model checking, to discover a compact finite presentation of the verified system without prior composition of system components. The fully-implemented POM verification system has polynomial space and time performance on traditional asynchronous-circuit benchmarks that are exponential in space and time for other verification systems. We also sketch the generalization of this approach to handle delay-constrained VLSI systems.

  20. Intelligent Data Visualization for Cross-Checking Spacecraft System Diagnosis

    NASA Technical Reports Server (NTRS)

    Ong, James C.; Remolina, Emilio; Breeden, David; Stroozas, Brett A.; Mohammed, John L.

    2012-01-01

    Any reasoning system is fallible, so crew members and flight controllers must be able to cross-check automated diagnoses of spacecraft or habitat problems by considering alternate diagnoses and analyzing related evidence. Cross-checking improves diagnostic accuracy because people can apply information processing heuristics, pattern recognition techniques, and reasoning methods that the automated diagnostic system may not possess. Over time, cross-checking also enables crew members to become comfortable with how the diagnostic reasoning system performs, so the system can earn the crew's trust. We developed intelligent data visualization software that helps users cross-check automated diagnoses of system faults more effectively. The user interface displays scrollable arrays of timelines and time-series graphs, which are tightly integrated with an interactive, color-coded system schematic to show important spatial-temporal data patterns. Signal processing and rule-based diagnostic reasoning automatically identify alternate hypotheses and data patterns that support or rebut the original and alternate diagnoses. A color-coded matrix display summarizes the supporting or rebutting evidence for each diagnosis, and a drill-down capability enables crew members to quickly view graphs and timelines of the underlying data. This system demonstrates that modest amounts of diagnostic reasoning, combined with interactive, information-dense data visualizations, can accelerate system diagnosis and cross-checking.

  1. On-line multiple component analysis for efficient quantitative bioprocess development.

    PubMed

    Dietzsch, Christian; Spadiut, Oliver; Herwig, Christoph

    2013-02-20

    On-line monitoring devices for the precise determination of a multitude of components are a prerequisite for fast bioprocess quantification. On-line measured values have to be checked for quality and consistency, in order to extract quantitative information from these data. In the present study we characterized a novel on-line sampling and analysis device comprising an automatic photometric robot. We connected this on-line device to a bioreactor and concomitantly measured six components (i.e. glucose, glycerol, ethanol, acetate, phosphate and ammonium) during different batch cultivations of Pichia pastoris. The on-line measured data did not show significant deviations from samples taken off-line and were consequently used for incremental rate and yield calculations. In this respect we highlighted the importance of data quality and discussed the phenomenon of error propagation. On-line calculated rates and yields depicted the physiological responses of the P. pastoris cells in unlimited and limited cultures. A more detailed analysis of the physiological state was possible by considering the off-line determined biomass dry weight and the calculation of specific rates. Here we present a novel device for on-line monitoring of bioprocesses, which ensures high data quality in real-time and therefore represents a valuable tool for Process Analytical Technology (PAT). Copyright © 2012 Elsevier B.V. All rights reserved.
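
    Incremental rates between consecutive on-line samples, together with first-order error propagation, can be written down in a few lines. The sketch assumes independent measurement errors on the two concentration readings and negligible error on time; the numbers are illustrative and not from the study.

      from math import sqrt

      def incremental_rate(c1, c2, t1, t2, sigma_c):
          """Volumetric rate between two on-line samples (g/L over h), with a
          first-order propagated uncertainty assuming the same independent
          error sigma_c on both concentration readings."""
          rate = (c2 - c1) / (t2 - t1)
          sigma_rate = sqrt(2.0) * sigma_c / (t2 - t1)
          return rate, sigma_rate

      # Glucose dropping from 10.0 to 8.4 g/L over 0.5 h, +/- 0.1 g/L per reading.
      rate, err = incremental_rate(10.0, 8.4, 0.0, 0.5, 0.1)
      print(f"q = {rate:.2f} +/- {err:.2f} g/(L*h)")  # q = -3.20 +/- 0.28 g/(L*h)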

  2. 3D printing X-Ray Quality Control Phantoms. A Low Contrast Paradigm

    NASA Astrophysics Data System (ADS)

    Kapetanakis, I.; Fountos, G.; Michail, C.; Valais, I.; Kalyvas, N.

    2017-11-01

    Current 3D printing technology products may be usable in various biomedical applications. Such an application is the creation of X-ray quality control phantoms. In this work a self-assembled 3D printer (geeetech i3) was used for the design of a simple low contrast phantom. The printing material was Polylactic Acid (PLA) (100% printing density). The low-contrast scheme was achieved by creating air holes with different diameters and thicknesses, ranging from 1 mm to 9 mm. The phantom was irradiated at a Philips Diagnost 93 fluoroscopic installation at 40-70 kV in the semi-automatic mode. The images were recorded with an Agfa cr30-x CR system and assessed with ImageJ software. The best contrast value observed was approximately 33%. In the low-contrast detectability check, the 1 mm diameter hole was always visible for thicknesses of 4 mm or larger. A reason for not being able to distinguish the 1 mm hole at smaller thicknesses might be the presence of printing patterns on the final image, which increased the structure noise. In conclusion, the construction of a contrast resolution phantom with a 3D printer is feasible. The quality of the final product depends upon the printer accuracy and the material characteristics.
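
    The best contrast of roughly 33% can be reproduced from ROI mean pixel values with a simple definition such as (background - detail) / background; the paper does not state which contrast formula was used, so both the formula and the pixel values below are assumptions.

      def percent_contrast(roi_hole, roi_background):
          """Percent contrast between a detail ROI and the surrounding background,
          using the (B - H) / B definition on mean pixel values."""
          mean = lambda xs: sum(xs) / len(xs)
          h, b = mean(roi_hole), mean(roi_background)
          return 100.0 * abs(b - h) / b

      hole = [1340, 1322, 1355, 1338]        # pixel values inside an air hole
      background = [2005, 1990, 2012, 1998]  # surrounding PLA
      print(f"{percent_contrast(hole, background):.1f}%")  # 33.1%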

  3. Quality Analysis on 3D Building Models Reconstructed from UAV Imagery

    NASA Astrophysics Data System (ADS)

    Jarzabek-Rychard, M.; Karpina, M.

    2016-06-01

    Recent developments in UAV technology and structure-from-motion techniques have made UAVs standard platforms for 3D data collection. Because of their flexibility and their ability to reach inaccessible urban areas, drones appear to be an optimal solution for urban applications. Building reconstruction from data collected with UAVs has important potential to reduce labour costs for fast updates of already reconstructed 3D cities. However, especially for updating existing scenes derived from different sensors (e.g. airborne laser scanning), a proper quality assessment is necessary. The objective of this paper is thus to evaluate the potential of UAV imagery as an information source for automatic 3D building modeling at LOD2. The investigation is threefold: (1) comparing the generated SfM point cloud to ALS data; (2) computing internal consistency measures of the reconstruction process; (3) analysing the deviation of Check Points identified on building roofs and measured with a tacheometer. To gain deeper insight into the modeling performance, various quality indicators are computed and analysed. The assessment against the ground truth shows that the building models acquired with UAV photogrammetry have an accuracy of better than 18 cm for the planimetric position and about 15 cm for the height component.
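
    The third part of the investigation, deviations of check points measured with a tacheometer, amounts to a planimetric and height RMSE over paired coordinates. The sketch below shows that computation on made-up points; it is not the authors' evaluation code.

      from math import sqrt, hypot

      def check_point_rmse(model_pts, survey_pts):
          """Planimetric (XY) and height (Z) RMSE between reconstructed and
          surveyed check points, given as (x, y, z) tuples in metres."""
          n = len(model_pts)
          xy = sqrt(sum(hypot(mx - sx, my - sy) ** 2
                        for (mx, my, _), (sx, sy, _) in zip(model_pts, survey_pts)) / n)
          z = sqrt(sum((mz - sz) ** 2
                       for (_, _, mz), (_, _, sz) in zip(model_pts, survey_pts)) / n)
          return xy, z

      model  = [(10.12, 4.95, 121.10), (22.31, 8.07, 119.84)]
      survey = [(10.00, 5.05, 121.00), (22.20, 8.00, 120.00)]
      print(check_point_rmse(model, survey))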

  4. Using chemical organization theory for model checking

    PubMed Central

    Kaleta, Christoph; Richter, Stephan; Dittrich, Peter

    2009-01-01

    Motivation: The increasing number and complexity of biomodels makes automatic procedures for checking the models' properties and quality necessary. Approaches like elementary mode analysis, flux balance analysis, deficiency analysis and chemical organization theory (OT) require only the stoichiometric structure of the reaction network for derivation of valuable information. In formalisms like Systems Biology Markup Language (SBML), however, information about the stoichiometric coefficients required for an analysis of chemical organizations can be hidden in kinetic laws. Results: First, we introduce an algorithm that uncovers stoichiometric information that might be hidden in the kinetic laws of a reaction network. This allows us to apply OT to SBML models using modifiers. Second, using the new algorithm, we performed a large-scale analysis of the 185 models contained in the manually curated BioModels Database. We found that for 41 models (22%) the set of organizations changes when modifiers are considered correctly. We discuss one of these models in detail (BIOMD149, a combined model of the ERK- and Wnt-signaling pathways), whose set of organizations drastically changes when modifiers are considered. Third, we found inconsistencies in 5 models (3%) and identified their characteristics. Compared with flux-based methods, OT is able to identify those species and reactions more accurately [in 26 cases (14%)] that can be present in a long-term simulation of the model. We conclude that our approach is a valuable tool that helps to improve the consistency of biomodels and their repositories. Availability: All data and a JAVA applet to check SBML-models is available from http://www.minet.uni-jena.de/csb/prj/ot/tools Contact: dittrich@minet.uni-jena.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19468053

  5. [Cleaning and disinfection in nursing homes. Data on quality of structure, process and outcome in nursing homes in Frankfurt am Main, Germany, 2011].

    PubMed

    Heudorf, U; Gasteyer, S; Samoiski, Y; Voigt, K

    2012-08-01

    Due to the Infectious Disease Prevention Act, public health services in Germany are obliged to check infection prevention in hospitals and other medical facilities as well as in nursing homes. In Frankfurt/Main, Germany, standardized control visits have been performed for many years. In 2011, the focus was on cleaning and disinfection of surfaces. All 41 nursing homes were checked according to a standardized checklist covering quality of structure (i.e. staffing, hygiene concept), quality of process (observation of the cleaning processes in the homes) and quality of outcome, which was monitored by checking fluorescent marks that had been applied some days earlier and should have been removed by cleaning before the final check. In more than two thirds of the homes, cleaning personnel were salaried; in one third, external personnel were hired. Of the homes, 85% provided service clothing and all of them offered protective clothing. All homes had established hygiene and cleaning concepts; however, concepts for the handling of Norovirus were missing in 15% of the homes and concepts for the handling of Clostridium difficile in 30%. Regarding process quality, only half of the processes observed, i.e. cleaning of hand contact surfaces such as handrails, washing areas and bins, were correct. Only 44% of the cleaning controls were correct, with enormous differences between the homes (0-100%). The correlation between quality of process and quality of outcome was significant. There was good quality of structure in the homes, but regarding quality of process and outcome there was great need for improvement. This was especially due to faults in communication and coordination between cleaning personnel and nursing personnel. Quality of outcome was associated neither with the number of places for residents nor with staffing. Thus, not only quality of structure but also quality of process and outcome should be checked by the public health services.

  6. Edited Synoptic Cloud Reports from Ships and Land Stations Over the Globe, 1982-1991 (NDP-026B)

    DOE Data Explorer

    Hahn, Carole J. [University of Arizona; Warren, Stephen G. [University of Washington; London, Julius [University of Colorado

    1996-01-01

    Surface synoptic weather reports for the entire globe for the 10-year period from December 1981 through November 1991 have been processed, edited, and rewritten to provide a data set designed for use in cloud analyses. The information in these reports relating to clouds, including the present weather information, was extracted and put through a series of quality control checks. Reports not meeting certain quality control standards were rejected, as were reports from buoys and automatic weather stations. Correctable inconsistencies within reports were edited for consistency, so that the "edited cloud report" can be used for cloud analysis without further quality checking. Cases of "sky obscured" were interpreted by reference to the present weather code as to whether they indicated fog, rain or snow and were given appropriate cloud type designations. Nimbostratus clouds, which are not specifically coded for in the standard synoptic code, were also given a special designation. Changes made to an original report are indicated in the edited report so that the original report can be reconstructed if desired. While low cloud amount is normally given directly in the synoptic report, the edited cloud report also includes the amounts, either directly reported or inferred, of middle and high clouds, both the non-overlapped amounts and the "actual" amounts (which may be overlapped). Since illumination from the moon is important for the adequate detection of clouds at night, both the relative lunar illuminance and the solar altitude are given, as well as a parameter that indicates whether our recommended illuminance criterion was satisfied. This data set contains 124 million reports from land stations and 15 million reports from ships. Each report is 56 characters in length. The archive consists of 240 files, one file for each month of data for land and ocean separately. With this data set a user can develop a climatology for any particular cloud type or group of types, for any geographical region and any spatial and temporal resolution desired.

  7. High-resolution urban observation network for user-specific meteorological information service in the Seoul Metropolitan Area, South Korea

    NASA Astrophysics Data System (ADS)

    Park, Moon-Soo; Park, Sung-Hwa; Chae, Jung-Hoon; Choi, Min-Hyeok; Song, Yunyoung; Kang, Minsoo; Roh, Joon-Woo

    2017-04-01

    To improve our knowledge of urban meteorology, including those processes applicable to high-resolution meteorological models in the Seoul Metropolitan Area (SMA), the Weather Information Service Engine (WISE) Urban Meteorological Observation System (UMS-Seoul) has been designed and installed. The UMS-Seoul incorporates 14 surface energy balance (EB) systems, 7 surface-based three-dimensional (3-D) meteorological observation systems and applied meteorological (AP) observation systems, and the existing surface-based meteorological observation network. The EB system consists of a radiation balance system, sonic anemometers, infrared CO2/H2O gas analyzers, and many sensors measuring the wind speed and direction, temperature and humidity, precipitation, and air pressure. The EB-produced radiation, meteorological, and turbulence data will be used to quantify the surface EB according to land use and to improve the boundary-layer and surface processes in meteorological models. The 3-D system, composed of a wind lidar, microwave radiometer, aerosol lidar, or ceilometer, produces the cloud height, vertical profiles of backscatter by aerosols, wind speed and direction, temperature, humidity, and liquid water content. It will be used for high-resolution reanalysis data based on observations and for the improvement of the boundary-layer, radiation, and microphysics processes in meteorological models. The AP system includes road weather information, mosquito activity, water quality, and agrometeorological observation instruments. The standardized metadata for networks and stations are documented and renewed periodically to provide a detailed observation environment. The UMS-Seoul data are designed to support real-time acquisition and display and are automatically quality checked within 10 min of observation. After the quality check, data can be distributed to relevant potential users such as researchers and policy makers. Finally, two case studies demonstrate that the observed data have great potential to deepen the understanding of boundary-layer structures, improve the performance of high-resolution meteorological models, and provide useful information customized to user demands in the SMA.

  8. SU-D-BRD-01: An Automated Physics Weekly Chart Checking System Supporting ARIA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, X; Yang, D

    Purpose: A software tool was developed in this study to perform automatic weekly physics chart checks on the patient data in ARIA. The tool accesses the electronic patient data directly from the ARIA server, checks the accuracy of treatment deliveries, and generates reports which summarize the delivery history and highlight the errors. Methods: The tool has four modules. 1) The database interface is designed to directly access treatment delivery data from the ARIA database before reorganizing the data into the patient chart tree (PCT). 2) The PCT is a core data structure designed to store and organize the data in logical hierarchies, and to be passed among functions. 3) The treatment data check module analyzes the organized data in the PCT and stores the checking results into the PCT. 4) The report generation module generates reports containing the treatment delivery summary, chart checking results, and plots of daily treatment setup parameters (couch table positions, shifts of image guidance). The errors that are found by the tool are highlighted with colors. Results: The weekly check tool has been implemented in MATLAB and clinically tested at two major cancer centers. Javascript, cascading style sheets (CSS) and dynamic HTML were employed to create the user-interactive reports. It takes 0.06 seconds to search the delivery records of one beam with the PCT and compare the delivery records with the beam plan. The reports, saved as HTML files on a shared network folder, can be accessed with a web browser on computers and mobile devices. Conclusion: The presented weekly check tool is useful for checking the electronic patient treatment data in the Varian ARIA system. It could be more efficient and reliable than the manual check by physicists. The work was partially supported by a research grant from Varian Medical System.
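
    The per-beam check reduces to comparing each delivery record against the planned beam parameters within tolerances. The sketch below illustrates that comparison on a toy record; the field names, tolerances, and data layout are assumptions and do not reflect the ARIA schema or the PCT structure.

      def check_delivery(planned, delivered, mu_tol=0.5, couch_tol_cm=1.0):
          """Compare one fraction's delivery record to the planned beam and
          return a list of highlighted discrepancies."""
          issues = []
          if abs(planned["mu"] - delivered["mu"]) > mu_tol:
              issues.append(f"MU mismatch: planned {planned['mu']}, delivered {delivered['mu']}")
          for axis in ("couch_vrt", "couch_lng", "couch_lat"):
              if abs(planned[axis] - delivered[axis]) > couch_tol_cm:
                  issues.append(f"{axis} off by {delivered[axis] - planned[axis]:+.1f} cm")
          return issues

      planned   = {"mu": 180.0, "couch_vrt": 9.5, "couch_lng": 102.3, "couch_lat": 0.8}
      delivered = {"mu": 180.2, "couch_vrt": 9.6, "couch_lng": 104.0, "couch_lat": 0.7}
      print(check_delivery(planned, delivered))  # flags the 1.7 cm longitudinal shift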

  9. SU-C-BRA-03: An Automated and Quick Contour Error Detection for Auto Segmentation in Online Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Ates, O; Li, X

    Purpose: To develop a tool that can quickly and automatically assess the quality of contours generated by auto segmentation during online adaptive replanning. Methods: Due to the strict time requirement of online replanning and the lack of ‘ground truth’ contours in daily images, our method starts with assessing image registration accuracy, focusing on the surface of the organ in question. Several metrics tightly related to registration accuracy, including Jacobian maps, contour shell deformation, and voxel-based root mean square (RMS) analysis, were computed. To identify correct contours, additional metrics and an adaptive decision tree are introduced. As a proof of principle, tests were performed with CT sets: planned and daily CTs acquired using a CT-on-rails during routine CT-guided RT delivery for 20 prostate cancer patients. The contours generated on daily CTs using an auto-segmentation tool (ADMIRE, Elekta, MIM), based on deformable image registration of the planning CT and daily CT, were tested. Results: The deformed contours of the 20 patients, with a total of 60 structures, were manually checked as baselines; overall, 49% of the contours were incorrect. To evaluate the quality of local deformation, the Jacobian determinant (1.047±0.045) on the contours was analyzed. In an analysis of the deformed rectum contour shells, a higher error-contour detection rate (0.41) was obtained, compared with 0.32 for the manual check. All automated detections took less than 5 seconds. Conclusion: The proposed method can effectively detect contour errors at both micro and macro scales by evaluating multiple deformable registration metrics in a parallel computing process. Future work will focus on improving practicability and optimizing the calculation algorithms and metric selection.
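
    One of the registration metrics, the Jacobian determinant on the contour, measures local expansion or compression of the deformation field. The numpy sketch below computes it by finite differences on a synthetic displacement field; it only illustrates the quantity, not the ADMIRE registration or the authors' pipeline.

      import numpy as np

      def jacobian_determinant(disp):
          """Voxel-wise Jacobian determinant of a 3D displacement field with
          shape (3, nx, ny, nz), assuming unit voxel spacing. Values far from
          1 indicate strong local expansion or compression."""
          grads = [np.gradient(disp[i]) for i in range(3)]  # d u_i / d x_j
          jac = np.empty(disp.shape[1:] + (3, 3))
          for i in range(3):
              for j in range(3):
                  jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
          return np.linalg.det(jac)

      # Synthetic, mildly expanding displacement field on a small grid.
      nx = ny = nz = 8
      x, y, z = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
      disp = np.stack([0.02 * x, 0.01 * y, np.zeros((nx, ny, nz))])
      print(jacobian_determinant(disp).mean())  # ~1.03 for this field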

  10. A rigorous approach to self-checking programming

    NASA Technical Reports Server (NTRS)

    Hua, Kien A.; Abraham, Jacob A.

    1986-01-01

    Self-checking programming is shown to be an effective concurrent error detection technique. The reliability of a self-checking program, however, relies on the quality of its assertion statements. A self-checking program written without formal guidelines could provide poor coverage of the errors. A constructive technique for self-checking programming is presented. A Structured Program Design Language (SPDL) suitable for self-checking software development is defined. A set of formal rules was also developed that allows SPDL designs to be transformed into self-checking designs in a systematic manner.
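
    The idea of a self-checking program, executable assertions derived from the design and evaluated concurrently with the computation, can be illustrated in any language. The Python toy below embeds a precondition and a postcondition in a binary search; it illustrates the concept only and is not SPDL or the authors' transformation rules.

      def binary_search(sorted_xs, target):
          """Binary search with embedded self-checks: the precondition and the
          postcondition are asserted at run time so violations are detected
          as the program executes."""
          assert all(a <= b for a, b in zip(sorted_xs, sorted_xs[1:])), "input must be sorted"
          lo, hi = 0, len(sorted_xs) - 1
          while lo <= hi:
              mid = (lo + hi) // 2
              if sorted_xs[mid] == target:
                  return mid
              if sorted_xs[mid] < target:
                  lo = mid + 1
              else:
                  hi = mid - 1
          assert target not in sorted_xs  # postcondition on an unsuccessful search
          return -1

      print(binary_search([2, 3, 5, 8, 13], 8))  # 3
      print(binary_search([2, 3, 5, 8, 13], 7))  # -1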

  11. 31 CFR 240.6 - Provisional credit; first examination; declination; final payment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... alteration without examining the original check or a better quality image of the check and Treasury is on... after the check is presented to a Federal Reserve Processing Center for payment, Treasury will be deemed...

  12. 31 CFR 240.6 - Provisional credit; first examination; declination; final payment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... alteration without examining the original check or a better quality image of the check and Treasury is on... after the check is presented to a Federal Reserve Processing Center for payment, Treasury will be deemed...

  13. 31 CFR 240.6 - Provisional credit; first examination; declination; final payment.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... alteration without examining the original check or a better quality image of the check and Treasury is on... after the check is presented to a Federal Reserve Processing Center for payment, Treasury will be deemed...

  14. 31 CFR 240.6 - Provisional credit; first examination; declination; final payment.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... alteration without examining the original check or a better quality image of the check and Treasury is on... after the check is presented to a Federal Reserve Processing Center for payment, Treasury will be deemed...

  15. 31 CFR 240.6 - Provisional credit; first examination; declination; final payment.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... alteration without examining the original check or a better quality image of the check and Treasury is on... after the check is presented to a Federal Reserve Processing Center for payment, Treasury will be deemed...

  16. Quality control quantification (QCQ): a tool to measure the value of quality control checks in radiation oncology.

    PubMed

    Ford, Eric C; Terezakis, Stephanie; Souranis, Annette; Harris, Kendra; Gay, Hiram; Mutic, Sasa

    2012-11-01

    To quantify the error-detection effectiveness of commonly used quality control (QC) measures. We analyzed incidents from 2007-2010 logged into a voluntary in-house, electronic incident learning systems at 2 academic radiation oncology clinics. None of the incidents resulted in patient harm. Each incident was graded for potential severity using the French Nuclear Safety Authority scoring scale; high potential severity incidents (score >3) were considered, along with a subset of 30 randomly chosen low severity incidents. Each report was evaluated to identify which of 15 common QC checks could have detected it. The effectiveness was calculated, defined as the percentage of incidents that each QC measure could detect, both for individual QC checks and for combinations of checks. In total, 4407 incidents were reported, 292 of which had high-potential severity. High- and low-severity incidents were detectable by 4.0 ± 2.3 (mean ± SD) and 2.6 ± 1.4 QC checks, respectively (P<.001). All individual checks were less than 50% sensitive with the exception of pretreatment plan review by a physicist (63%). An effectiveness of 97% was achieved with 7 checks used in combination and was not further improved with more checks. The combination of checks with the highest effectiveness includes physics plan review, physician plan review, Electronic Portal Imaging Device-based in vivo portal dosimetry, radiation therapist timeout, weekly physics chart check, the use of checklists, port films, and source-to-skin distance checks. Some commonly used QC checks such as pretreatment intensity modulated radiation therapy QA do not substantially add to the ability to detect errors in these data. The effectiveness of QC measures in radiation oncology depends sensitively on which checks are used and in which combinations. A small percentage of errors cannot be detected by any of the standard formal QC checks currently in broad use, suggesting that further improvements are needed. These data require confirmation with a broader incident-reporting database. Copyright © 2012 Elsevier Inc. All rights reserved.
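
    Effectiveness here is the percentage of incidents that a check, or a combination of checks, could have detected. The sketch below computes it from a toy incident-to-checks mapping; the incident data and check names are invented and only echo the kinds of checks listed in the abstract.

      def effectiveness(incidents, checks):
          """Percentage of incidents detectable by at least one of the given checks.

          incidents maps an incident id to the set of checks able to catch it."""
          caught = sum(1 for detectable in incidents.values() if detectable & set(checks))
          return 100.0 * caught / len(incidents)

      incidents = {
          1: {"physics_plan_review", "weekly_chart_check"},
          2: {"physician_plan_review"},
          3: {"in_vivo_dosimetry", "physics_plan_review"},
          4: set(),  # not detectable by any of the standard checks
      }
      print(effectiveness(incidents, ["physics_plan_review"]))                           # 50.0
      print(effectiveness(incidents, ["physics_plan_review", "physician_plan_review"]))  # 75.0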

  17. Quality of data collected for severity of illness scores in the Dutch National Intensive Care Evaluation (NICE) registry.

    PubMed

    Arts, Daniëlle; de Keizer, Nicolette; Scheffer, Gert-Jan; de Jonge, Evert

    2002-05-01

    To analyse the quality of data used to measure severity of illness in the Dutch National Intensive Care Evaluation (NICE) registry, after implementation of quality improving procedures. Data were re-abstracted from the paper records of patients or the Patient Data Management System and compared to the data contained in the registry. The re-abstracted data were considered to be the gold standard. ICUs of nine Dutch hospitals that had been collecting data for the NICE registry for at least 1 year. The mean percentages of inaccurate and incomplete data, per hospital, over all variables, were 6.1%+/-4.4 (SD) and 2.7%+/-4.4 (SD), respectively. The mean difference in severity of illness scores between registry data and re-abstracted data was 0.2 points for APACHE II and 0.4 points for SAPS II. The mean difference in predicted mortality according to APACHE II and SAPS II between registry data and re-abstracted data was 0.4% and 0.02%, respectively. The current data quality of the NICE registry is good and justifies evaluative research. These positive results might be explained by the implementation of several quality assurance procedures in the NICE registry, such as training and automatic data checks. Electronic supplementary material to this paper can be obtained by using the Springer LINK server located at http://dx.doi.org/10.1007/s00134-002-1272-z
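
    With the re-abstracted record treated as the gold standard, per-record inaccuracy and incompleteness are simple proportions over the compared variables. The sketch below shows that comparison for one toy record; the field names are illustrative APACHE-style variables, not the NICE data dictionary.

      def data_quality(registry, gold):
          """Percent inaccurate and percent incomplete for one record, comparing
          the registry entry against re-abstracted (gold standard) values."""
          fields = list(gold)
          missing = [f for f in fields if registry.get(f) is None]
          wrong = [f for f in fields
                   if registry.get(f) is not None and registry[f] != gold[f]]
          n = len(fields)
          return 100.0 * len(wrong) / n, 100.0 * len(missing) / n

      registry = {"age": 67, "heart_rate": 112, "creatinine": None, "gcs": 14}
      gold     = {"age": 67, "heart_rate": 121, "creatinine": 1.8, "gcs": 14}
      print(data_quality(registry, gold))  # (25.0, 25.0): one wrong value, one missing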

  18. Problem of data quality and the limitations of the infrastructure approach

    NASA Astrophysics Data System (ADS)

    Behlen, Fred M.; Sayre, Richard E.; Rackus, Edward; Ye, Dingzhong

    1998-07-01

    The 'Infrastructure Approach' is a PACS implementation methodology wherein the archive, network and information systems interfaces are acquired first, and workstations are installed later. The approach allows building a history of archived image data, so that most prior examinations are available in digital form when workstations are deployed. A limitation of the Infrastructure Approach is that the deferred use of digital image data defeats many data quality management functions that are provided automatically by human mechanisms when data is immediately used for the completion of clinical tasks. If the digital data is used solely for archiving while reports are interpreted from film, the radiologist serves only as a check against lost films, and another person must be designated as responsible for the quality of the digital data. Data from the Radiology Information System and the PACS were analyzed to assess the nature and frequency of system and data quality errors. The error level was found to be acceptable if supported by auditing and error resolution procedures requiring additional staff time, and in any case was better than the loss rate of a hardcopy film archive. It is concluded that the problem of data quality compromises but does not negate the value of the Infrastructure Approach. The Infrastructure Approach is best employed only to a limited extent: any phased PACS implementation should have a substantial complement of workstations dedicated to softcopy interpretation for at least some applications, with full deployment following not long thereafter.

  19. pdb-care (PDB carbohydrate residue check): a program to support annotation of complex carbohydrate structures in PDB files.

    PubMed

    Lütteke, Thomas; von der Lieth, Claus-W

    2004-06-04

    Carbohydrates are involved in a variety of fundamental biological processes and pathological situations. They therefore have a large pharmaceutical and diagnostic potential. Knowledge of the 3D structure of glycans is a prerequisite for a complete understanding of their biological functions. The largest source of biomolecular 3D structures is the Protein Data Bank. However, about 30% of all 1663 PDB entries (version September 2003) containing carbohydrates contain errors in the glycan description. Unfortunately, no software is currently available which aligns the 3D information with the reported assignments. It is the aim of this work to fill this gap. The pdb-care program http://www.glycosciences.de/tools/pdb-care/ is able to identify and assign carbohydrate structures using only atom types and their 3D atom coordinates given in PDB-files. Using a translation table in which systematic names and the respective PDB residue codes are listed, both assignments are compared and inconsistencies are reported. Additionally, the reliability of reported and calculated connectivities for molecules listed within the HETATOM records is checked and unusual values are reported. Frequent use of pdb-care will help to improve the quality of carbohydrate data contained in the PDB. Automatic assignment of carbohydrate structures contained in PDB entries will enable the cross-linking of glycobiology resources with genomic and proteomic data collections.
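
    A toy sketch of the core consistency check described above: a residue assignment derived from the 3D geometry is compared against the residue code reported in the file via a translation table. The table entries and the geometry classifier below are placeholders for illustration; they are not the pdb-care code.

        # Sketch: compare reported PDB residue codes with codes expected for the
        # carbohydrate actually found in the coordinates. The translation table and
        # the geometry-based classifier are illustrative placeholders only.
        TRANSLATION = {            # systematic carbohydrate name -> expected PDB code
            "b-D-glucopyranose":   "BGC",
            "a-D-glucopyranose":   "GLC",
            "b-D-galactopyranose": "GAL",
        }

        def classify_from_coordinates(residue_atoms):
            """Placeholder: a real tool derives the monosaccharide type and anomeric
            configuration from atom types and 3D coordinates."""
            return residue_atoms["assigned_by_geometry"]

        def check_residue(residue_atoms, reported_code):
            name = classify_from_coordinates(residue_atoms)
            expected = TRANSLATION.get(name)
            if expected is None:
                return f"{reported_code}: unknown carbohydrate '{name}'"
            if expected != reported_code:
                return f"inconsistency: geometry says {name} ({expected}), file says {reported_code}"
            return f"{reported_code}: consistent"

        print(check_residue({"assigned_by_geometry": "b-D-glucopyranose"}, "GLC"))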

  20. Health search engine with e-document analysis for reliable search results.

    PubMed

    Gaudinat, Arnaud; Ruch, Patrick; Joubert, Michel; Uziel, Philippe; Strauss, Anne; Thonnet, Michèle; Baud, Robert; Spahni, Stéphane; Weber, Patrick; Bonal, Juan; Boyer, Celia; Fieschi, Marius; Geissbuhler, Antoine

    2006-01-01

    After a review of the existing practical solutions available to citizens for retrieving eHealth documents, the paper describes an original specialized search engine, WRAPIN. WRAPIN uses advanced cross-lingual information retrieval technologies to check information quality by synthesizing medical concepts, conclusions and references contained in the health literature, in order to identify accurate, relevant sources. Thanks to the MeSH terminology [1] (Medical Subject Headings from the U.S. National Library of Medicine) and advanced approaches such as conclusion extraction from structured documents and reformulation of the query, WRAPIN offers the user privileged access to multilingual documents without language or medical prerequisites. The results of an evaluation conducted on the WRAPIN prototype show that users perceived the results of the WRAPIN search engine as informative in 65% of cases (59% for a general-purpose search engine) and as reliable and trustworthy in 72% (41% for the other engine). It leaves room for improvement, however, such as increased database coverage, better explanation of the original functionalities, and adaptability to the audience. Thanks to the evaluation outcomes, WRAPIN is now in operation on the HON web site (http://www.healthonnet.org), free of charge. Intended for citizens, it is a good alternative to general-purpose search engines when the user looks up trustworthy health and medical information or wants an automatic check of the doubtful content of a Web page.

  1. EARLINET Single Calculus Chain - technical - Part 1: Pre-processing of raw lidar data

    NASA Astrophysics Data System (ADS)

    D'Amico, G.; Amodeo, A.; Mattis, I.; Freudenthaler, V.; Pappalardo, G.

    2015-10-01

    In this paper we describe an automatic tool for the pre-processing of lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. The ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, the ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. The ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of the ELPP module, particular attention has been paid to make the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of the ELPP module is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of the ELPP module. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. The ELPP module has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
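
    A simplified sketch of two of the pre-processing steps named above (dead-time correction and background subtraction) plus a Monte Carlo uncertainty estimate. The non-paralyzable dead-time model, the noise model, and all numerical values are assumptions for illustration, not ELPP's actual implementation.

        # Sketch of ELPP-style pre-processing: dead-time correction (non-paralyzable
        # model) and background subtraction, with a simple Monte Carlo uncertainty.
        import numpy as np

        def preprocess(counts_mhz, dead_time_ns=4.0, n_background_bins=500):
            """Correct a photon-counting profile (MHz per range bin) and subtract background."""
            tau = dead_time_ns * 1e-3                              # dead time in units of 1/MHz
            corrected = counts_mhz / (1.0 - counts_mhz * tau)      # non-paralyzable correction
            background = corrected[-n_background_bins:].mean()     # far-range bins: no signal
            return corrected - background

        def monte_carlo_uncertainty(counts_mhz, n_trials=200, rel_noise=0.02, **kw):
            """Propagate an assumed relative noise level through the pre-processing."""
            rng = np.random.default_rng(0)
            trials = [preprocess(counts_mhz * (1 + rel_noise * rng.standard_normal(counts_mhz.size)), **kw)
                      for _ in range(n_trials)]
            return np.std(trials, axis=0)

        raw = np.exp(-np.arange(2000) / 400.0) * 50 + 0.3   # synthetic decaying signal + background
        signal = preprocess(raw)
        sigma = monte_carlo_uncertainty(raw)
        print(signal[:3], sigma[:3])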

  2. The relation of mechanical properties of wood and nosebar pressure in the production of veneer

    Treesearch

    Charles W. McMillin

    1958-01-01

    Observations of checking frequency, depth of check penetration, veneer thickness, and surface quality were made at 20 machining conditions. An inverse relationship between depth of check and frequency of checking was established. The effect of cutting temperature was demonstrated, and strength in compression perpendicular to the grain, tension perpendicular to the...

  3. 77 FR 4073 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-26

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-66207; File No. SR-CBOE-2012-004] Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and Immediate Effectiveness of Proposed Rule Change Related to Automatic Execution and Complex Order Price Check Parameter Features January 20, 2012. Pursuant to Sectio...

  4. Relationship of Source Selection Methods to Contract Outcomes: An Analysis of Air Force Source Selection

    DTIC Science & Technology

    2015-12-01

    however, solutions to these issues. A weighted mean can be used in place of the grand mean1 and the STATA software automatically handles the assignment of...covariance matrices between groups (i.e., sphericity) using the multivariate test of means provided in STATA 12.1. This test checks whether or not

  5. A Tire Air Maintenance Technology

    ERIC Educational Resources Information Center

    Pierce, Alan

    2012-01-01

    Improperly inflated car tires can reduce gas mileage and car performance, speed up tire wear, and even cause a tire to blow out. The AAA auto club recommends that someone check the air pressure of one's car's tires at least once a month. Wouldn't it be nice, though, if someone came up with a tire pressure-monitoring system that automatically kept…

  6. Adding Statistical Machine Translation Adaptation to Computer-Assisted Translation

    DTIC Science & Technology

    2013-09-01

    are automatically searched and used to suggest possible translations; (2) spell-checkers; (3) glossaries; (4) dictionaries ; (5) alignment and...matching against TMs to propose translations; spell-checking, glossary, and dictionary look-up; support for multiple file formats; regular expressions...on Telecommunications. Tehran, 2012, 822–826. Bertoldi, N.; Federico, M. Domain Adaptation for Statistical Machine Translation with Monolingual

  7. Spacecraft operations automation: Automatic alarm notification and web telemetry display

    NASA Astrophysics Data System (ADS)

    Short, Owen G.; Leonard, Robert E.; Bucher, Allen W.; Allen, Bryan

    1999-11-01

    In these times of Faster, Better, Cheaper (FBC) spacecraft, Spacecraft Operations Automation is an area that is targeted by many Operations Teams. To meet the challenges of the FBC environment, the Mars Global Surveyor (MGS) Operations Team designed and quickly implemented two new low-cost technologies: one which monitors spacecraft telemetry, checks the status of the telemetry, and contacts technical experts by pager when any telemetry datapoints exceed alarm limits, and a second which allows quick and convenient remote access to data displays. The first new technology is Automatic Alarm Notification (AAN). AAN monitors spacecraft telemetry and will notify engineers automatically if any telemetry is received which creates an alarm condition. The second new technology is Web Telemetry Display (WTD). WTD captures telemetry displays generated by the flight telemetry system and makes them available to the project web server. This allows engineers to check the health and status of the spacecraft from any computer capable of connecting to the global internet, without needing normally-required specialized hardware and software. Both of these technologies have greatly reduced operations costs by alleviating the need to have operations engineers monitor spacecraft performance on a 24 hour per day, 7 day per week basis from a central Mission Support Area. This paper gives details on the design and implementation of AAN and WTD, discusses their limitations, and lists the ongoing benefits which have accrued to MGS Flight Operations since their implementation in late 1996.
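
    A minimal sketch of an AAN-style limit check: telemetry values are compared against alarm limits and a page is issued when a value goes out of bounds. The channel names, limits, and paging hook are hypothetical, not the MGS implementation.

        # Sketch: scan incoming telemetry against alarm limits and notify the
        # on-call engineer when a value is out of bounds (paging is simulated).
        ALARM_LIMITS = {
            "battery_voltage": (24.0, 32.0),
            "tank_pressure":   (180.0, 260.0),
            "cpu_temperature": (-10.0, 45.0),
        }

        def send_page(message):
            # Placeholder for the actual paging interface used by operations.
            print("PAGE:", message)

        def check_telemetry(sample):
            """sample: dict of channel -> latest value."""
            for channel, value in sample.items():
                limits = ALARM_LIMITS.get(channel)
                if limits is None:
                    continue
                low, high = limits
                if not (low <= value <= high):
                    send_page(f"{channel}={value} outside [{low}, {high}]")

        check_telemetry({"battery_voltage": 23.1, "tank_pressure": 210.0, "cpu_temperature": 30.0})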

  8. Using knowledge for indexing health web resources in a quality-controlled gateway.

    PubMed

    Joubert, Michel; Darmoni, Stefan J; Avillach, Paul; Dahamna, Badisse; Fieschi, Marius

    2008-01-01

    The aim of this study is to provide indexers with MeSH terms to be considered as major ones in a list of terms automatically extracted from a document. We propose a method combining symbolic knowledge - the UMLS Metathesaurus and Semantic Network - and statistical knowledge drawn from co-occurrences of terms in the CISMeF database (a French-language quality-controlled health gateway) using data mining measures. The method was tested on a CISMeF corpus of 293 resources. The processed records contained a proportion of 0.37+/-0.26 major terms. The method produced lists of terms in which the proportion of terms initially marked as major was 0.54+/-0.31. The method we propose reduces the number of terms that do not seem useful for the content description of resources, such as "check tags", but retains the most descriptive ones. Discarding these terms is accounted for by: 1) the removal, using semantic knowledge, of associations of concepts bearing no real medical significance, and 2) the removal, using statistical knowledge, of non-statistically-significant associations of terms. This method can effectively assist indexers in their daily work and will soon be applied in the CISMeF system.

  9. MO-B-BRB-02: Maintain the Quality of Treatment Planning for Time-Constraint Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, J.

    The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, Image-guided-radiation-therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic Body radiation therapy (SBRT) and Radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demand a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS. Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process. Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning. Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contour, (d) plan approval, (e) treatment. Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.

  10. Automatic orbital GTAW welding: Highest quality welds for tomorrow's high-performance systems

    NASA Technical Reports Server (NTRS)

    Henon, B. K.

    1985-01-01

    Automatic orbital gas tungsten arc welding (GTAW) or TIG welding is certain to play an increasingly prominent role in tomorrow's technology. The welds are of the highest quality, and the repeatability of automatic welding is vastly superior to that of manual welding. Since less heat is applied to the weld during automatic welding than during manual welding, there is less change in the metallurgical properties of the parent material. The possibility of accurate control and the cleanliness of the automatic GTAW welding process make it highly suitable for the welding of the more exotic and expensive materials which are now widely used in the aerospace and hydrospace industries. Titanium, stainless steel, Inconel, and Incoloy, as well as aluminum, can all be welded to the highest quality specifications automatically. Automatic orbital GTAW equipment is available for the fusion butt welding of tube to tube, as well as of tube to autobuttweld fittings. The same equipment can also be used for the fusion butt welding of up to 6 inch pipe with a wall thickness of up to 0.154 inches.

  11. A novel automatic full-scale inspecting system for banknote printing plates

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Feng, Li; Lu, Jibing; Qin, Qingwang; Liu, Liquan; Liu, Huina

    2018-01-01

    Quality assurance of banknote printing plates is an important issue for the corporation which produces them. Every plate must be checked carefully and entirely before it is sent to the banknote printing factory. Previously the work was done by dedicated workers, usually with the help of powder and magnifiers, and often lasted 3 to 4 hours for a 5*7 plate measuring about 650*500 square millimeters. Now we have developed an automatic inspecting system to replace this human work. The system mainly includes a stable platform, an electrical subsystem and an inspecting subsystem. A microscope held by the crossbeam can move around in the x-y-z space over the platform. A digital camera combined with the microscope captures gray digital images of the plate. The size of each digital image is 2672*4008, and each pixel corresponds to an area of about 2.9*2.9 square microns on the plate. The plate is inspected unit by unit, and corresponding images are captured at the same relative position. Thousands of images are captured for one plate (for example, 4200 (120*5*7) for a 5*7 plate). The inspecting model images are generated from images of qualified plates, and then used to inspect indeterminate plates. The system takes about 64 minutes to inspect a plate, and identifies obvious defects.
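
    A toy sketch of the model-image comparison step described above: an inspection image is compared with the model image built from qualified plates, and pixels whose deviation exceeds a tolerance are flagged as defect candidates. Thresholds, sizes, and the simple whole-image area test are illustrative assumptions, not the system's actual parameters.

        # Sketch: flag defect candidates by differencing an inspection image against
        # a model image generated from qualified plates.
        import numpy as np

        def find_defects(inspection, model, tolerance=25, min_area=20):
            """Return a boolean defect mask for an 8-bit grayscale image pair."""
            diff = np.abs(inspection.astype(np.int16) - model.astype(np.int16))
            mask = diff > tolerance
            # Ignore tiny deviations (a real system would analyse connected components)
            return mask if mask.sum() >= min_area else np.zeros_like(mask)

        model = np.full((200, 300), 128, dtype=np.uint8)
        inspection = model.copy()
        inspection[50:60, 100:120] = 40          # simulate a scratch on the plate
        defects = find_defects(inspection, model)
        print("defect pixels:", int(defects.sum()))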

  12. Non-invasive quality evaluation of confluent cells by image-based orientation heterogeneity analysis.

    PubMed

    Sasaki, Kei; Sasaki, Hiroto; Takahashi, Atsuki; Kang, Siu; Yuasa, Tetsuya; Kato, Ryuji

    2016-02-01

    In recent years, cell and tissue therapies in regenerative medicine have advanced rapidly towards commercialization. However, conventional invasive cell quality assessment is incompatible with direct evaluation of the cells produced for such therapies, especially in the case of regenerative medicine products. Our group has demonstrated the potential of quantitative assessment of cell quality, using information obtained from cell images, for non-invasive real-time evaluation of regenerative medicine products. However, images of cells in the confluent state are often difficult to evaluate, because accurate recognition of cells is technically difficult and the morphological features of confluent cells are non-characteristic. To overcome these challenges, we developed a new image-processing algorithm, heterogeneity of orientation (H-Orient) processing, to describe the heterogeneous density of cells in the confluent state. In this algorithm, we introduced a Hessian calculation that converts pixel intensity data to orientation data and a statistical profiling calculation that evaluates the heterogeneity of orientations within an image, generating novel parameters that yield a quantitative profile of an image. Using such parameters, we tested the algorithm's performance in discriminating different qualities of cellular images with three types of clinically important cell quality check (QC) models: remaining lifespan check (QC1), manipulation error check (QC2), and differentiation potential check (QC3). Our results show that our orientation analysis algorithm could predict with high accuracy the outcomes of all types of cellular quality checks (>84% average accuracy with cross-validation). Copyright © 2015 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
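
    A heavily simplified sketch of the two stages named above: a Hessian computed by finite differences gives a local orientation per pixel, and the circular spread of those orientations serves as a heterogeneity score. This is an illustration under those assumptions, not the published H-Orient code.

        # Sketch: Hessian-based local orientation and an orientation-heterogeneity score.
        import numpy as np

        def local_orientation(img):
            """Orientation (radians) of the principal Hessian axis at each pixel."""
            img = img.astype(float)
            gy, gx = np.gradient(img)
            hxx = np.gradient(gx, axis=1)
            hxy = np.gradient(gx, axis=0)
            hyy = np.gradient(gy, axis=0)
            # Principal-axis angle of the symmetric 2x2 matrix [[hxx, hxy], [hxy, hyy]]
            return 0.5 * np.arctan2(2.0 * hxy, hxx - hyy)

        def heterogeneity(orientations):
            """1 - mean resultant length of doubled angles: 0 = aligned, 1 = isotropic."""
            doubled = 2.0 * orientations
            r = np.hypot(np.cos(doubled).mean(), np.sin(doubled).mean())
            return 1.0 - r

        img = np.random.default_rng(1).random((128, 128))   # stand-in for a phase-contrast image
        theta = local_orientation(img)
        print("heterogeneity:", round(heterogeneity(theta), 3))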

  13. Does flexibility in perceptual organization compete with automatic grouping?

    PubMed

    van Assche, Mitsouko; Gos, Pierre; Giersch, Anne

    2012-02-06

    Segregated objects can be sought simultaneously, i.e., mentally "re-grouped." Although the mechanisms underlying such "re-grouping" clearly differ from automatic grouping, it is unclear whether or not the end products of "re-grouping" and automatic grouping are the same. If they are, they would have similar impact on visual organization but would be in conflict. We compared the consequences of grouping and re-grouping on the performance cost induced by stimuli presented across hemifields. Two identical and contiguous target figures had to be identified within a display of circles and squares alternating around a fixation point. Eye tracking was used to check central fixation. The target pair could be located in the same or separate hemifields. A large cost of presenting targets across hemifields was observed. Grouping by connectedness yielded two types of target pair, connected and unconnected. Subjects prioritized unconnected pairs efficiently when prompted to do so, suggesting "re-grouping." However, unlike automatic grouping, this did not affect the cost of across-hemifield presentation. The suggestion is that re-grouping yields different outputs to automatic grouping, such that a fresh representation resulting from re-grouping complements the one resulting from automatic grouping but does not replace it. This is one step toward understanding how our mental exploration of the world ties in and coexists with ongoing perception.

  14. Financial Record Checking in Surveys: Do Prompts Improve Data Quality?

    ERIC Educational Resources Information Center

    Murphy, Joe; Rosen, Jeffrey; Richards, Ashley; Riley, Sarah; Peytchev, Andy; Lindblad, Mark

    2016-01-01

    Self-reports of financial information in surveys, such as wealth, income, and assets, are particularly prone to inaccuracy. We sought to improve the quality of financial information captured in a survey conducted by phone and in person by encouraging respondents to check records when reporting on income and assets. We investigated whether…

  15. 40 CFR Appendix B to Part 75 - Quality Assurance and Quality Control Procedures

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Systems 1.2.1Calibration Error Test and Linearity Check Procedures Keep a written record of the procedures used for daily calibration error tests and linearity checks (e.g., how gases are to be injected..., and when calibration adjustments should be made). Identify any calibration error test and linearity...

  16. 40 CFR Appendix B to Part 75 - Quality Assurance and Quality Control Procedures

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Systems 1.2.1Calibration Error Test and Linearity Check Procedures Keep a written record of the procedures used for daily calibration error tests and linearity checks (e.g., how gases are to be injected..., and when calibration adjustments should be made). Identify any calibration error test and linearity...

  17. Verification of Java Programs using Symbolic Execution and Invariant Generation

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina; Visser, Willem

    2004-01-01

    Software verification is recognized as an important and difficult problem. We present a novel framework, based on symbolic execution, for the automated verification of software. The framework uses annotations in the form of method specifications and loop invariants. We present a novel iterative technique that uses invariant strengthening and approximation for discovering these loop invariants automatically. The technique handles different types of data (e.g. boolean and numeric constraints, dynamically allocated structures and arrays) and it allows for checking universally quantified formulas. Our framework is built on top of the Java PathFinder model checking toolset and it was used for the verification of several non-trivial Java programs.

  18. Perspectives of Cross-Correlation in Seismic Monitoring at the International Data Centre

    NASA Astrophysics Data System (ADS)

    Bobrov, Dmitry; Kitov, Ivan; Zerbo, Lassina

    2014-03-01

    We demonstrate that several techniques based on waveform cross-correlation are able to significantly reduce the detection threshold of seismic sources worldwide and to improve the reliability of arrivals by a more accurate estimation of their defining parameters. A master event and the events it can find using waveform cross-correlation at array stations of the International Monitoring System (IMS) have to be close. For the purposes of the International Data Centre (IDC), one can use the spatial closeness of the master and slave events in order to construct a new automatic processing pipeline: all qualified arrivals detected using cross-correlation are associated with events matching the current IDC event definition criteria (EDC) in a local association procedure. Considering the repeating character of global seismicity, more than 90 % of events in the reviewed event bulletin (REB) can be built in this automatic processing. Due to the reduced detection threshold, waveform cross-correlation may increase the number of valid REB events by a factor of 1.5-2.0. Therefore, the new pipeline may produce a more comprehensive bulletin than the current pipeline—the goal of seismic monitoring. The analysts' experience with the cross correlation event list (XSEL) shows that the workload of interactive processing might be reduced by a factor of two or even more. Since cross-correlation produces a comprehensive list of detections for a given master event, no additional arrivals from primary stations are expected to be associated with the XSEL events. The number of false alarms, relative to the number of events rejected from the standard event list 3 (SEL3) in the current interactive processing, can also be reduced by the use of several powerful filters. The principal filter is the difference between the arrival times of the master and newly built events at three or more primary stations, which should lie in a narrow range of a few seconds. In this study, one event at a distance of about 2,000 km from the main shock was formed by three stations, with the stations and both events on the same great circle. Such spurious events are rejected by checking consistency between detections at stations at different back azimuths from the source region. Two additional effective pre-filters are f-k analysis and F prob based on correlation traces instead of original waveforms. Overall, waveform cross-correlation is able to improve the REB completeness, to reduce the workload related to IDC interactive analysis, and to provide a precise tool for quality checks of both arrivals and events. Some major improvements in automatic and interactive processing achieved by cross-correlation are illustrated using an aftershock sequence from a large continental earthquake. Exploring this sequence, we describe schematically the next steps for the development of a processing pipeline parallel to the existing IDC one in order to improve the quality of the REB together with the reduction of the magnitude threshold.
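
    A small sketch of master-event detection by waveform cross-correlation: a master template is slid over a continuous trace, the correlation is normalised, and samples above a threshold are declared detection candidates. The synthetic data, window handling, and threshold are illustrative assumptions, not the IDC pipeline.

        # Sketch: detect buried repeats of a master waveform with normalized
        # cross-correlation on a synthetic noisy trace.
        import numpy as np

        def normalized_xcorr(trace, template):
            """Normalized cross-correlation of a template with every window of trace."""
            n = template.size
            t = (template - template.mean()) / (template.std() * n)
            out = np.empty(trace.size - n + 1)
            for i in range(out.size):
                win = trace[i:i + n]
                out[i] = np.sum(t * (win - win.mean())) / (win.std() + 1e-12)
            return out

        rng = np.random.default_rng(2)
        master = np.sin(np.linspace(0, 8 * np.pi, 200)) * np.hanning(200)   # master waveform
        trace = rng.normal(0, 0.5, 5000)
        trace[3100:3300] += 1.5 * master                  # a repeating event hidden in noise
        cc = normalized_xcorr(trace, master)
        detections = np.flatnonzero(cc > 0.6)
        print("candidate onsets near sample:", detections[:5])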

  19. Automatic Conflict Detection on Contracts

    NASA Astrophysics Data System (ADS)

    Fenech, Stephen; Pace, Gordon J.; Schneider, Gerardo

    Many software applications are based on collaborating, yet competing, agents or virtual organisations exchanging services. Contracts, expressing obligations, permissions and prohibitions of the different actors, can be used to protect the interests of the organisations engaged in such service exchange. However, the potentially dynamic composition of services with different contracts, and the combination of service contracts with local contracts can give rise to unexpected conflicts, exposing the need for automatic techniques for contract analysis. In this paper we look at automatic analysis techniques for contracts written in the contract language CL. We present a trace semantics of CL suitable for conflict analysis, and a decision procedure for detecting conflicts (together with its proof of soundness, completeness and termination). We also discuss its implementation and look into the applications of the contract analysis approach we present. These techniques are applied to a small case study of an airline check-in desk.

  20. Diagnosis - Using automatic test equipment and artificial intelligence expert systems

    NASA Astrophysics Data System (ADS)

    Ramsey, J. E., Jr.

    Three expert systems (ATEOPS, ATEFEXPERS, and ATEFATLAS), which were created to direct automatic test equipment (ATE), are reviewed. The purpose of the project was to develop an expert system to troubleshoot the converter-programmer power supply card for the F-15 aircraft and have that expert system direct the automatic test equipment. Each expert system uses a different knowledge base or inference engine, basing the testing on the circuit schematic, test requirements document, or ATLAS code. Implementing generalized modules allows the expert systems to be used for any different unit under test. Using converted ATLAS to LISP code allows the expert system to direct any ATE using ATLAS. The constraint propagated frame system allows for the expansion of control by creating the ATLAS code, checking the code for good software engineering techniques, directing the ATE, and changing the test sequence as needed (planning).

  1. The Classification and Evaluation of Computer-Aided Software Engineering Tools

    DTIC Science & Technology

    1990-09-01

    International Business Machines Corporation Customizer is a Registered Trademark of Index Technology Corporation Data Analyst is a Registered Trademark of...years, a rapid series of new approaches have been adopted including: information engineering, entity- relationship modeling, automatic code generation...support true information sharing among tools and automated consistency checking. Moreover, the repository must record and manage the relationships and

  2. Use of automatic extraction of LANDSAT data defining areas of ilmenite in the forest of the state of Pernambuco. [Brazil

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Mattoso, S. D. Q.; Paradella, W. R.; Meneses, P. R.

    1979-01-01

    The author has identified the following significant results. Classification results point out 600 alarm areas of high potentiality of titanium occurrence. Almost 80 of these 600 alarm areas were checked by field work, and in 56 of them, titanium occurrences were confirmed and four new ore deposits were found.

  3. Analysis of Source Selection Methods and Performance Outcomes: Lowest Price Technically Acceptable vs. Tradeoff in Air Force Acquisitions

    DTIC Science & Technology

    2015-12-01

    issues. A weighted mean can be used in place of the grand mean3 and the STATA software automatically handles the assignment of the sums of squares. Thus...between groups (i.e., sphericity) using the multivariate test of means provided in STATA 12.1. This test checks whether or not population variances and

  4. 49 CFR 571.122 - Standard No. 122; Motorcycle brake systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... transmission of signals in the motorcycle's ABS system. (b) To permit function checking, the warning lamp shall... CFR 571.101). S5.2Durability. S5.2.1Compensation for wear. Wear of the brakes shall be compensated for by means of a system of automatic or manual adjustment. S5.2.2Notice of wear. The friction material...

  5. 49 CFR 571.122 - Standard No. 122; Motorcycle brake systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... transmission of signals in the motorcycle's ABS system. (b) To permit function checking, the warning lamp shall... CFR 571.101). S5.2Durability. S5.2.1Compensation for wear. Wear of the brakes shall be compensated for by means of a system of automatic or manual adjustment. S5.2.2Notice of wear. The friction material...

  6. Action-based verification of RTCP-nets with CADP

    NASA Astrophysics Data System (ADS)

    Biernacki, Jerzy; Biernacka, Agnieszka; Szpyrka, Marcin

    2015-12-01

    The paper presents an algorithm for translating coverability graphs of RTCP-nets (real-time coloured Petri nets) into the Aldebaran format. The approach enables automatic verification of RTCP-nets using the model checking techniques provided by the CADP toolbox. An actual fire alarm control panel system has been modelled and several of its crucial properties have been verified to demonstrate the usability of the approach.

  7. Effects of Spell Checkers on English as a Second Language Students' Incidental Spelling Learning: A Cognitive Load Perspective

    ERIC Educational Resources Information Center

    Lin, Po-Han; Liu, Tzu-Chien; Paas, Fred

    2017-01-01

    Computer-based spell checkers help to correct misspells instantly. Almost all the word processing devices are now equipped with a spell-check function that either automatically corrects errors or provides a list of intended words. However, it is not clear on how the reliance on this convenient technological solution affects spelling learning.…

  8. A new technique for solving puzzles.

    PubMed

    Makridis, Michael; Papamarkos, Nikos

    2010-06-01

    This paper proposes a new technique for solving jigsaw puzzles. The novelty of the proposed technique is that it provides an automatic jigsaw puzzle solution without any initial restriction about the shape of pieces, the number of neighbor pieces, etc. The proposed technique uses both curve- and color-matching similarity features. A recurrent procedure is applied, which compares and merges puzzle pieces in pairs, until the original puzzle image is reformed. Geometrical and color features are extracted on the characteristic points (CPs) of the puzzle pieces. CPs, which can be considered as high curvature points, are detected by a rotationally invariant corner detection algorithm. The features which are associated with color are provided by applying a color reduction technique using the Kohonen self-organized feature map. Finally, a postprocessing stage checks and corrects the relative position between puzzle pieces to improve the quality of the resulting image. Experimental results prove the efficiency of the proposed technique, which can be further extended to deal with even more complex jigsaw puzzle problems.

  9. Taxonomic classification of soils using digital information from LANDSAT data. Huayllamarca and eucaliptus areas. M.S. Thesis - Bolivia Univ.

    NASA Technical Reports Server (NTRS)

    Quiroga, S. Q.

    1977-01-01

    The applicability of LANDSAT digital information to soil mapping is described. A compilation of all cartographic information and bibliography of the study area is made. LANDSAT MSS images on a scale of 1:250,000 are interpreted and a physiographic map with legend is prepared. The study area is inspected and a selection of the sample areas is made. A digital map of the different soil units is produced and the computer mapping units are checked against the soil units encountered in the field. The soil boundaries obtained by automatic mapping were not substantially changed by field work. The accuracy of the automatic mapping is rather high.

  10. An analysis of multi-type relational interactions in FMA using graph motifs with disjointness constraints.

    PubMed

    Zhang, Guo-Qiang; Luo, Lingyun; Ogbuji, Chime; Joslyn, Cliff; Mejino, Jose; Sahoo, Satya S

    2012-01-01

    The interaction of multiple types of relationships among anatomical classes in the Foundational Model of Anatomy (FMA) can provide inferred information valuable for quality assurance. This paper introduces a method called Motif Checking (MOCH) to study the effects of such multi-relation type interactions for detecting logical inconsistencies as well as other anomalies represented by the motifs. MOCH represents patterns of multi-type interaction as small labeled (with multiple types of edges) sub-graph motifs, whose nodes represent class variables, and labeled edges represent relational types. By representing FMA as an RDF graph and motifs as SPARQL queries, fragments of FMA are automatically obtained as auditing candidates. Leveraging the scalability and reconfigurability of Semantic Web Technology, we performed exhaustive analyses of a variety of labeled sub-graph motifs. The quality assurance feature of MOCH comes from the distinct use of a subset of the edges of the graph motifs as constraints for disjointness, thereby bringing a rule-based flavor to the approach as well. With possible disjointness implied by antonyms, we performed manual inspection of the resulting FMA fragments and tracked down sources of abnormal inferred conclusions (logical inconsistencies), which are amenable to programmatic revision of the FMA. Our results demonstrate that MOCH provides a unique source of valuable information for quality assurance. Since our approach is general, it is applicable to any ontological system with an OWL representation.

  11. An Analysis of Multi-type Relational Interactions in FMA Using Graph Motifs with Disjointness Constraints

    PubMed Central

    Zhang, Guo-Qiang; Luo, Lingyun; Ogbuji, Chime; Joslyn, Cliff; Mejino, Jose; Sahoo, Satya S

    2012-01-01

    The interaction of multiple types of relationships among anatomical classes in the Foundational Model of Anatomy (FMA) can provide inferred information valuable for quality assurance. This paper introduces a method called Motif Checking (MOCH) to study the effects of such multi-relation type interactions for detecting logical inconsistencies as well as other anomalies represented by the motifs. MOCH represents patterns of multi-type interaction as small labeled (with multiple types of edges) sub-graph motifs, whose nodes represent class variables, and labeled edges represent relational types. By representing FMA as an RDF graph and motifs as SPARQL queries, fragments of FMA are automatically obtained as auditing candidates. Leveraging the scalability and reconfigurability of Semantic Web Technology, we performed exhaustive analyses of a variety of labeled sub-graph motifs. The quality assurance feature of MOCH comes from the distinct use of a subset of the edges of the graph motifs as constraints for disjointness, thereby bringing a rule-based flavor to the approach as well. With possible disjointness implied by antonyms, we performed manual inspection of the resulting FMA fragments and tracked down sources of abnormal inferred conclusions (logical inconsistencies), which are amenable to programmatic revision of the FMA. Our results demonstrate that MOCH provides a unique source of valuable information for quality assurance. Since our approach is general, it is applicable to any ontological system with an OWL representation. PMID:23304382
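
    A toy sketch of a MOCH-style motif query on an RDF graph using rdflib: the motif looks for a class that reaches two other classes through two relation types, and a disjointness edge between those classes flags the combination as a candidate inconsistency. All classes and predicates below are made up for illustration; they are not the actual FMA vocabulary or the published MOCH motifs.

        # Sketch: a sub-graph motif with a disjointness constraint, expressed as SPARQL
        # and run over a toy RDF graph.
        from rdflib import Graph, Namespace

        EX = Namespace("http://example.org/")
        g = Graph()
        g.add((EX.LeftLobe, EX.part_of, EX.Liver))
        g.add((EX.LeftLobe, EX.branch_of, EX.PortalVein))
        g.add((EX.Liver, EX.disjoint_with, EX.PortalVein))

        MOTIF = """
        PREFIX ex: <http://example.org/>
        SELECT ?x ?a ?b WHERE {
            ?x ex:part_of      ?a .
            ?x ex:branch_of    ?b .
            ?a ex:disjoint_with ?b .
        }
        """
        for x, a, b in g.query(MOTIF):
            print(f"candidate inconsistency: {x} relates {a} and {b}, which are disjoint")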

  12. Robo-AO M Dwarf Multiplicity Survey

    NASA Astrophysics Data System (ADS)

    Lamman, Claire; Baranec, Christoph; Berta-Thompson, Zachory K.; Law, Nicholas M.; Ziegler, Carl; Schonhut-Stasik, Jessica

    2018-06-01

    We analyzed close to 7,000 observations from Robo-AO’s field M dwarf survey taken on the 2.1m Kitt Peak telescope. Results will help determine the total multiplicity fraction and multiplicity functions of M dwarfs, which are crucial steps towards understanding their evolution and formation mechanics. Through its robotic, laser-guided, and automated system, the Robo-AO instrument has yielded the largest adaptive-optics M dwarf survey to date. I developed a graphical user interface to quickly analyze this data. Initial data analysis included assessing data quality, checking the result from Robo-AO’s automatic reduction pipeline, and determining existence as well as the relative position of companions through a visual inspection. This program can be applied to other datasets and was successfully tested by re-analyzing observations from a separate Robo-AO survey. After a conservative initial cut for quality, over 350 companions were found within 4” of a primary star out of 2,746 high quality Robo-AO M dwarf observations, including four triple systems. Further observations were done with the Keck II telescope by using its NIRC2 imager to follow up on ten select targets for the existence and physical association of companions. Future research will yield insights into low-mass stellar formation and provide a database of nearby M dwarf multiples that will potentially assist ongoing and future surveys for planets around these stars, such as the NASA TESS mission.

  13. Automatic Testcase Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.

    2008-01-01

    The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames' Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammars. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
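
    A minimal sketch of the blackbox idea described above: every script derivable from a small grammar is enumerated up to a depth bound, in the spirit of generating all legal inputs up to a prespecified size. The toy grammar and command names are illustrative, not the SCL grammar.

        # Sketch: exhaustive, bounded-depth enumeration of scripts from a toy grammar.
        GRAMMAR = {
            "script":  [["command"], ["command", ";", "script"]],
            "command": [["SET", "name", "value"], ["GET", "name"]],
            "name":    [["heater"], ["valve"]],
            "value":   [["ON"], ["OFF"]],
        }

        def expand(symbol, depth):
            """Yield every terminal string derivable from symbol within the depth bound."""
            if symbol not in GRAMMAR:                      # terminal token
                yield symbol
                return
            if depth == 0:
                return
            for production in GRAMMAR[symbol]:
                yield from expand_sequence(production, depth - 1)

        def expand_sequence(symbols, depth):
            if not symbols:
                yield ""
                return
            for head in expand(symbols[0], depth):
                for tail in expand_sequence(symbols[1:], depth):
                    yield (head + " " + tail).strip()

        scripts = sorted(set(expand("script", depth=4)))
        print(len(scripts), "generated scripts; first few:", scripts[:3])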

  14. The Significance of Quality Assurance within Model Intercomparison Projects at the World Data Centre for Climate (WDCC)

    NASA Astrophysics Data System (ADS)

    Toussaint, F.; Hoeck, H.; Stockhause, M.; Lautenschlager, M.

    2014-12-01

    The classical goals of a quality assessment system in the data life cycle are (1) to encourage data creators to improve their quality assessment procedures to reach the next quality level and (2) to enable data consumers to decide whether a dataset has a quality that is sufficient for usage in the target application, i.e. to appraise the data usability for their own purpose. As the data volumes of projects and the interdisciplinarity of data usage grow, the need for homogeneous structure and standardised notation of data and metadata increases. This third aspect is especially valid for the data repositories, as they manage data through machine agents. So checks for homogeneity and consistency in early parts of the workflow become essential to cope with today's data volumes. Selected parts of the workflow in the model intercomparison project CMIP5 and the archival of the data for the interdisciplinary user community of the IPCC-DDC AR5 and the associated quality checks are reviewed. We compare data and metadata checks and relate different types of checks to their positions in the data life cycle. The project's data citation approach is included in the discussion, with focus on temporal aspects of the time necessary to comply with the project's requirements for formal data citations and the demand for the availability of such data citations. In order to make different quality assessments of projects comparable, WDCC developed a generic Quality Assessment System. Based on the self-assessment approach of a maturity matrix, an objective and uniform quality level system for all data at WDCC is derived which consists of five maturity quality levels.

  15. CheckM: assessing the quality of microbial genomes recovered from isolates, single cells, and metagenomes

    PubMed Central

    Parks, Donovan H.; Imelfort, Michael; Skennerton, Connor T.; Hugenholtz, Philip; Tyson, Gene W.

    2015-01-01

    Large-scale recovery of genomes from isolates, single cells, and metagenomic data has been made possible by advances in computational methods and substantial reductions in sequencing costs. Although this increasing breadth of draft genomes is providing key information regarding the evolutionary and functional diversity of microbial life, it has become impractical to finish all available reference genomes. Making robust biological inferences from draft genomes requires accurate estimates of their completeness and contamination. Current methods for assessing genome quality are ad hoc and generally make use of a limited number of “marker” genes conserved across all bacterial or archaeal genomes. Here we introduce CheckM, an automated method for assessing the quality of a genome using a broader set of marker genes specific to the position of a genome within a reference genome tree and information about the collocation of these genes. We demonstrate the effectiveness of CheckM using synthetic data and a wide range of isolate-, single-cell-, and metagenome-derived genomes. CheckM is shown to provide accurate estimates of genome completeness and contamination and to outperform existing approaches. Using CheckM, we identify a diverse range of errors currently impacting publicly available isolate genomes and demonstrate that genomes obtained from single cells and metagenomic data vary substantially in quality. In order to facilitate the use of draft genomes, we propose an objective measure of genome quality that can be used to select genomes suitable for specific gene- and genome-centric analyses of microbial communities. PMID:25977477

  16. CheckM: assessing the quality of microbial genomes recovered from isolates, single cells, and metagenomes.

    PubMed

    Parks, Donovan H; Imelfort, Michael; Skennerton, Connor T; Hugenholtz, Philip; Tyson, Gene W

    2015-07-01

    Large-scale recovery of genomes from isolates, single cells, and metagenomic data has been made possible by advances in computational methods and substantial reductions in sequencing costs. Although this increasing breadth of draft genomes is providing key information regarding the evolutionary and functional diversity of microbial life, it has become impractical to finish all available reference genomes. Making robust biological inferences from draft genomes requires accurate estimates of their completeness and contamination. Current methods for assessing genome quality are ad hoc and generally make use of a limited number of "marker" genes conserved across all bacterial or archaeal genomes. Here we introduce CheckM, an automated method for assessing the quality of a genome using a broader set of marker genes specific to the position of a genome within a reference genome tree and information about the collocation of these genes. We demonstrate the effectiveness of CheckM using synthetic data and a wide range of isolate-, single-cell-, and metagenome-derived genomes. CheckM is shown to provide accurate estimates of genome completeness and contamination and to outperform existing approaches. Using CheckM, we identify a diverse range of errors currently impacting publicly available isolate genomes and demonstrate that genomes obtained from single cells and metagenomic data vary substantially in quality. In order to facilitate the use of draft genomes, we propose an objective measure of genome quality that can be used to select genomes suitable for specific gene- and genome-centric analyses of microbial communities. © 2015 Parks et al.; Published by Cold Spring Harbor Laboratory Press.
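
    A simplified illustration of the marker-gene idea behind the completeness and contamination estimates described above: completeness as the fraction of expected markers found at least once, contamination as the fraction present in extra copies. CheckM itself uses lineage-specific, collocated marker sets; the flat marker list and counts below are assumptions for illustration only.

        # Toy version of marker-gene-based genome quality estimates.
        EXPECTED_MARKERS = ["rpoB", "gyrA", "recA", "dnaK", "rplB", "secY"]

        def genome_quality(found_counts):
            """found_counts: dict marker -> number of copies found in the genome bin."""
            found = sum(1 for m in EXPECTED_MARKERS if found_counts.get(m, 0) >= 1)
            extra = sum(max(found_counts.get(m, 0) - 1, 0) for m in EXPECTED_MARKERS)
            completeness = 100.0 * found / len(EXPECTED_MARKERS)
            contamination = 100.0 * extra / len(EXPECTED_MARKERS)
            return completeness, contamination

        comp, cont = genome_quality({"rpoB": 1, "gyrA": 2, "recA": 1, "dnaK": 1})
        print(f"completeness {comp:.1f}%, contamination {cont:.1f}%")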

  17. Check & Connect: The Importance of Relationships for Promoting Engagement with School

    ERIC Educational Resources Information Center

    Anderson, Amy R.; Christenson, Sandra L.; Sinclair, Mary F.; Lehr, Camilla A.

    2004-01-01

    The purpose of this study was to examine whether the closeness and quality of relationships between intervention staff and students involved in the Check & Connect program were associated with improved student engagement in school. Participants included 80 elementary and middle school students referred to the Check & Connect program for poor…

  18. Rural-Urban Differences in Medicare Quality Outcomes and the Impact of Risk Adjustment.

    PubMed

    Henning-Smith, Carrie; Kozhimannil, Katy; Casey, Michelle; Prasad, Shailendra; Moscovice, Ira

    2017-09-01

    There has been considerable debate in recent years about whether, and how, to risk-adjust quality measures for sociodemographic characteristics. However, geographic location, especially rurality, has been largely absent from the discussion. To examine differences by rurality in quality outcomes, and the impact of adjustment for individual and community-level sociodemographic characteristics on quality outcomes. The 2012 Medicare Current Beneficiary Survey, Access to Care module, combined with the 2012 County Health Rankings. All data used were publicly available, secondary data. We merged the 2012 Medicare Current Beneficiary Survey data with the 2012 County Health Rankings data using county of residence. We compared 6 unadjusted quality of care measures for Medicare beneficiaries (satisfaction with care, blood pressure checked, cholesterol checked, flu shot receipt, change in health status, and all-cause annual readmission) by rurality (rural noncore, micropolitan, and metropolitan). We then ran nested multivariable logistic regression models to assess the impact of adjusting for community and individual-level sociodemographic characteristics to determine whether these mediate the rurality difference in quality of care. The relationship between rurality and change in health status was mediated by the inclusion of community-level characteristics; however, adjusting for community and individual-level characteristics caused differences by rurality to emerge in 2 of the measures: blood pressure checked and cholesterol checked. For all quality scores, model fit improved after adding community and individual characteristics. Quality is multifaceted and is impacted by individual and community-level socio-demographic characteristics, as well as by geographic location. Current debates about risk-adjustment procedures should take rurality into account.

  19. SU-F-T-32: Evaluation of the Performance of a Multiple-Array-Diode Detector for Quality Assurance Tests in High-Dose-Rate Brachytherapy with Ir-192 Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harpool, K; De La Fuente Herman, T; Ahmad, S

    Purpose: To evaluate the performance of a two-dimensional (2D) array-diode detector for geometric and dosimetric quality assurance (QA) tests of high-dose-rate (HDR) brachytherapy with an Ir-192 source. Methods: A phantom setup was designed that encapsulated a two-dimensional (2D) array-diode detector (MapCheck2) and a catheter for the HDR brachytherapy Ir-192 source. This setup was used to perform both geometric and dosimetric quality assurance for the HDR-Ir192 source. The geometric tests included: (a) measurement of the position of the source and (b) spacing between different dwell positions. The dosimetric tests included: (a) linearity of output with time, (b) end effect and (c) relative dose verification. The 2D-dose distribution measured with MapCheck2 was used to perform the previous tests. The results of MapCheck2 were compared with the corresponding quality assurance tests performed with Gafchromic film and a well ionization chamber. Results: The position of the source and the spacing between different dwell positions were reproducible within 1 mm accuracy by measuring the position of maximal dose using MapCheck2, in contrast to the film, which showed a blurred image of the dwell positions due to limited film sensitivity to irradiation. The linearity of the dose with dwell times measured from MapCheck2 was superior to the linearity measured with the ionization chamber due to the higher signal-to-noise ratio of the diode readings. MapCheck2 provided more accurate measurement of the end effect, with uncertainty < 1.5% in comparison with the ionization chamber uncertainty of 3%. Although MapCheck2 did not serve as an absolute calibration dosimeter for the activity of the source, it provided an accurate tool for relative dose verification in HDR-brachytherapy. Conclusion: The 2D-array-diode detector provides a practical, compact and accurate tool to perform quality assurance for HDR-brachytherapy with an Ir-192 source. The diodes in MapCheck2 have high radiation sensitivity and linearity that is superior to the Gafchromic films and ionization chamber used for geometric and dosimetric QA in HDR-brachytherapy, respectively.

  20. 40 CFR Appendix K to Part 75 - Quality Assurance and Operating Procedures for Sorbent Trap Monitoring Systems

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... until the leak check is passed. Post-test leak check ≤4% of average sampling rate After sampling ** See... the test site. The sorbent media must be obtained from a source that can demonstrate the quality...-traceable calibration gas standards and reagents shall be used for the tests and procedures required under...

  1. 49 CFR 40.235 - What are the requirements for proper use and care of ASDs?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ASD on the CPL. Your QAP must specify the methods used for quality control checks, temperatures at which the ASD must be stored and used, the shelf life of the device, and environmental conditions (e.g... the specified quality control checks or that has passed its expiration date. (e) As an employer, with...

  2. 49 CFR 40.235 - What are the requirements for proper use and care of ASDs?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ASD on the CPL. Your QAP must specify the methods used for quality control checks, temperatures at which the ASD must be stored and used, the shelf life of the device, and environmental conditions (e.g... the specified quality control checks or that has passed its expiration date. (e) As an employer, with...

  3. 49 CFR 40.235 - What are the requirements for proper use and care of ASDs?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ASD on the CPL. Your QAP must specify the methods used for quality control checks, temperatures at which the ASD must be stored and used, the shelf life of the device, and environmental conditions (e.g... the specified quality control checks or that has passed its expiration date. (e) As an employer, with...

  4. 49 CFR 40.235 - What are the requirements for proper use and care of ASDs?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ASD on the CPL. Your QAP must specify the methods used for quality control checks, temperatures at which the ASD must be stored and used, the shelf life of the device, and environmental conditions (e.g... the specified quality control checks or that has passed its expiration date. (e) As an employer, with...

  5. 49 CFR 40.235 - What are the requirements for proper use and care of ASDs?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ASD on the CPL. Your QAP must specify the methods used for quality control checks, temperatures at which the ASD must be stored and used, the shelf life of the device, and environmental conditions (e.g... the specified quality control checks or that has passed its expiration date. (e) As an employer, with...

  6. FIELD CHECK MANUAL FOR LANGUAGE LABORATORIES, A SERIES OF TESTS WHICH A NON-TECHNICAL PERSON CAN CONDUCT TO VERIFY SPECIFICATIONS.

    ERIC Educational Resources Information Center

    GRITTNER, FRANK; PAVLAT, RUSSELL

    In order to assist non-technical people in schools to conduct a field check of language laboratory equipment before they make final payments, this manual offers criteria, tests, and methods of scoring the quality of the equipment. Checklists are provided for evaluating console functions, tape recorders, amplifiers, sound quality (including…

  7. Design and performance of daily quality assurance system for carbon ion therapy at NIRS

    NASA Astrophysics Data System (ADS)

    Saotome, N.; Furukawa, T.; Hara, Y.; Mizushima, K.; Tansho, R.; Saraya, Y.; Shirai, T.; Noda, K.

    2017-09-01

    At the National Institute of Radiological Sciences (NIRS), we have been commissioning a rotating-gantry system for carbon-ion radiotherapy. This rotating gantry can transport heavy ions at 430 MeV/u to the isocenter over irradiation angles of ±180°, rotating around the patient so that the tumor can be irradiated from any direction. A three-dimensional pencil-beam scanning irradiation system installed on the rotating gantry enables the optimal use of the physical characteristics of carbon ions to provide accurate treatment. To ensure treatment quality with such a complex system, the calibration of the primary dose monitor, output check, range check, dose rate check, machine safety check, and some mechanical tests should be performed efficiently. For this purpose, we have developed a measurement system dedicated to quality assurance (QA) of this gantry system: the Daily QA system. The system consists of an ionization chamber system and a scintillator system. The ionization chamber system is used for the calibration of the primary dose monitor, the output check, and the dose rate check, and the scintillator system is used for the range, isocenter, and gantry-angle checks. The performance of the Daily QA system was verified by a beam test. The stability of the output was within 0.5%, and the range was within 0.5 mm. The coincidence of the coordinates between the patient-positioning system and the irradiation system was verified using the Daily QA system. Our present findings verified that the new Daily QA system for a rotating gantry is capable of verifying the irradiation system with sufficient accuracy.
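
    As a simple illustration of the kind of constancy checks listed above, the sketch below compares a day's output and range measurements against baseline values using the stated tolerances (0.5% for output, 0.5 mm for range). The data structure and numbers are assumptions for illustration, not part of the NIRS system.

        # Hedged sketch of a daily QA constancy check against baseline values.
        def daily_qa_check(measured, baseline, output_tol=0.005, range_tol_mm=0.5):
            """Return pass/fail flags for the output and range constancy checks."""
            return {
                "output": abs(measured["output"] / baseline["output"] - 1.0) <= output_tol,
                "range": abs(measured["range_mm"] - baseline["range_mm"]) <= range_tol_mm,
            }

        today    = {"output": 1.003, "range_mm": 150.2}   # illustrative readings
        baseline = {"output": 1.000, "range_mm": 150.0}
        print(daily_qa_check(today, baseline))            # {'output': True, 'range': True}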

  8. Improving Automated Lexical and Discourse Analysis of Online Chat Dialog

    DTIC Science & Technology

    2007-09-01

    include spelling- and grammar-checking on our word processing software; voice-recognition in our automobiles; and telephone-based conversational agents ...conversational agents can help customers make purchases on-line [3]. In addition, discourse analyzers can automatically separate multiple, interleaved...telephone-based conversational agent needs to know if it was asked a question or tasked to do something. Indeed, Stolcke et al demonstrated that

  9. Enhanced invitation methods to increase uptake of NHS health checks: study protocol for a randomized controlled trial.

    PubMed

    Forster, Alice S; Burgess, Caroline; McDermott, Lisa; Wright, Alison J; Dodhia, Hiten; Conner, Mark; Miller, Jane; Rudisill, Caroline; Cornelius, Victoria; Gulliford, Martin C

    2014-08-30

    NHS Health Checks is a new program for primary prevention of heart disease, stroke, diabetes, chronic kidney disease, and vascular dementia in adults aged 40 to 74 years in England. Individuals without existing cardiovascular disease or diabetes are invited for a Health Check every 5 years. Uptake among those invited is lower than anticipated. The project is a three-arm randomized controlled trial to test the hypothesis that enhanced invitation methods, using the Question-Behaviour Effect (QBE), will increase uptake of NHS Health Checks compared with a standard invitation. Participants comprise individuals eligible for an NHS Health Check registered in two London boroughs. Participants are randomized into one of three arms. Group A receives the standard NHS Health Check invitation letter, information sheet, and reminder letter at 12 weeks for nonattenders. Group B receives a QBE questionnaire 1 week before receiving the standard invitation, information sheet, and reminder letter where appropriate. Group C is the same as Group B, but participants are offered a £5 retail voucher if they return the questionnaire. Participants are randomized in equal proportions, stratified by general practice. The primary outcome is uptake of NHS Health Checks 6 months after invitation from electronic health records. We will estimate the incremental health service cost per additional completed Health Check for trial groups B and C versus trial arm A, as well as evaluating the impact of the QBE questionnaire, and questionnaire plus voucher, on the socioeconomic inequality in uptake of Health Checks.The trial includes a nested comparison of two methods for implementing allocation, one implemented manually at general practices and the other implemented automatically through the information systems used to generate invitations for the Health Check. The research will provide evidence on whether asking individuals to complete a preliminary questionnaire, by using the QBE, is effective in increasing uptake of Health Checks and whether an incentive alters questionnaire return rates as well as uptake of Health Checks. The trial interventions can be readily translated into routine service delivery if they are shown to be cost-effective. Current Controlled Trials ISRCTN42856343. Date registered: 21.03.2013.

  10. Knowledge-based critiquing of graphical user interfaces with CHIMES

    NASA Technical Reports Server (NTRS)

    Jiang, Jianping; Murphy, Elizabeth D.; Carter, Leslie E.; Truszkowski, Walter F.

    1994-01-01

    CHIMES is a critiquing tool that automates the process of checking graphical user interface (GUI) designs for compliance with human factors design guidelines and toolkit style guides. The current prototype identifies instances of non-compliance and presents problem statements, advice, and tips to the GUI designer. Changes requested by the designer are made automatically, and the revised GUI is re-evaluated. A case study conducted at NASA-Goddard showed that CHIMES has the potential for dramatically reducing the time formerly spent in hands-on consistency checking. Capabilities recently added to CHIMES include exception handling and rule building. CHIMES is intended for use prior to usability testing as a means, for example, of catching and correcting syntactic inconsistencies in a larger user interface.
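
    The rule-based critiquing described above can be pictured as a set of guideline checks applied to a widget description, each returning a problem statement on violation. The rules and widget fields below are invented for illustration and are not the actual CHIMES knowledge base.

        # Hedged sketch of guideline-based GUI critiquing (illustrative rules only).
        MIN_FONT_PT = 10

        def check_font_size(widget):
            if widget.get("font_pt", MIN_FONT_PT) < MIN_FONT_PT:
                return f"{widget['name']}: font below {MIN_FONT_PT} pt guideline"

        def check_label_present(widget):
            if widget["type"] == "text_field" and not widget.get("label"):
                return f"{widget['name']}: text field has no label"

        RULES = [check_font_size, check_label_present]

        def critique(widgets):
            # Collect every problem statement produced by any rule on any widget.
            return [msg for w in widgets for rule in RULES if (msg := rule(w))]

        gui = [{"name": "okButton", "type": "button", "font_pt": 8},
               {"name": "userInput", "type": "text_field"}]
        print(critique(gui))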

  11. Document analysis with neural net circuits

    NASA Technical Reports Server (NTRS)

    Graf, Hans Peter

    1994-01-01

    Document analysis is one of the main applications of machine vision today and offers great opportunities for neural net circuits. Despite more and more data processing with computers, the number of paper documents is still increasing rapidly. A fast translation of data from paper into electronic format is needed almost everywhere, and when done manually, this is a time consuming process. Markets range from small scanners for personal use to high-volume document analysis systems, such as address readers for the postal service or check processing systems for banks. A major concern with present systems is the accuracy of the automatic interpretation. Today's algorithms fail miserably when noise is present, when print quality is poor, or when the layout is complex. A common approach to circumvent these problems is to restrict the variations of the documents handled by a system. In our laboratory, we had the best luck with circuits implementing basic functions, such as convolutions, that can be used in many different algorithms. To illustrate the flexibility of this approach, three applications of the NET32K circuit are described in this short viewgraph presentation: locating address blocks, cleaning document images by removing noise, and locating areas of interest in personal checks to improve image compression. Several of the ideas realized in this circuit that were inspired by neural nets, such as analog computation with a low resolution, resulted in a chip that is well suited for real-world document analysis applications and that compares favorably with alternative, 'conventional' circuits.

  12. Generalized Symbolic Execution for Model Checking and Testing

    NASA Technical Reports Server (NTRS)

    Khurshid, Sarfraz; Pasareanu, Corina; Visser, Willem; Kofmeyer, David (Technical Monitor)

    2003-01-01

    Modern software systems, which often are concurrent and manipulate complex data structures, must be extremely reliable. We present a novel framework, based on symbolic execution, for the automated checking of such systems. We provide a two-fold generalization of traditional symbolic-execution-based approaches: one, we define a program instrumentation, which enables standard model checkers to perform symbolic execution; two, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity of lists), data (e.g., integers and strings) and concurrency. The program instrumentation enables a model checker to automatically explore program heap configurations (using a systematic treatment of aliasing) and manipulate logical formulae on program data values (using a decision procedure). We illustrate two applications of our framework: checking correctness of multi-threaded programs that take inputs from unbounded domains with complex structure, and generation of non-isomorphic test inputs that satisfy a testing criterion. Our implementation for Java uses the Java PathFinder model checker.

  13. Model Checking Degrees of Belief in a System of Agents

    NASA Technical Reports Server (NTRS)

    Raimondi, Franco; Primero, Giuseppe; Rungta, Neha

    2014-01-01

    Reasoning about degrees of belief has been investigated in the past by a number of authors and has a number of practical applications in real life. In this paper we present a unified framework to model and verify degrees of belief in a system of agents. In particular, we describe an extension of the temporal-epistemic logic CTLK and we introduce a semantics based on interpreted systems for this extension. In this way, degrees of beliefs do not need to be provided externally, but can be derived automatically from the possible executions of the system, thereby providing a computationally grounded formalism. We leverage the semantics to (a) construct a model checking algorithm, (b) investigate its complexity, (c) provide a Java implementation of the model checking algorithm, and (d) evaluate our approach using the standard benchmark of the dining cryptographers. Finally, we provide a detailed case study: using our framework and our implementation, we assess and verify the situational awareness of the pilot of Air France 447 flying in off-nominal conditions.

  14. Disclosure Control of Natural Language Information to Enable Secure and Enjoyable Communication over the Internet

    NASA Astrophysics Data System (ADS)

    Kataoka, Haruno; Utsumi, Akira; Hirose, Yuki; Yoshiura, Hiroshi

    Disclosure control of natural language information (DCNL), which we are trying to realize, is described. DCNL will be used for securing human communications over the internet, such as through blogs and social network services. Before sentences in the communications are disclosed, they are checked by DCNL, and any phrases that could reveal sensitive information are transformed or omitted so that they are no longer revealing. DCNL checks not only phrases that directly represent sensitive information but also those that indirectly suggest it. Combinations of phrases are also checked. DCNL automatically learns the knowledge of sensitive phrases and the suggestive relations between phrases by using co-occurrence analysis and Web retrieval. The users' burden is therefore minimized, i.e., they do not need to define many disclosure control rules. DCNL complements traditional access control in fields where reliability needs to be balanced with enjoyment and where object classes for access control cannot be predefined.

  15. On quality control procedures for solar radiation and meteorological measures, from subhourly to monthly average time periods

    NASA Astrophysics Data System (ADS)

    Espinar, B.; Blanc, P.; Wald, L.; Hoyer-Klick, C.; Schroedter-Homscheidt, M.; Wanderer, T.

    2012-04-01

    Meteorological data measured by ground stations are often a key element in the development and validation of methods exploiting satellite images. These data are considered as a reference against which satellite-derived estimates are compared. Long-term radiation and meteorological measurements are available from a large number of measuring stations. However, close examination of the data often reveals a lack of quality, often for extended periods of time. This lack of quality has been the reason, in many cases, for the rejection of large amounts of available data. Data quality must be checked before use in order to guarantee the inputs for the methods used in modelling, monitoring, forecasting, etc. To control their quality, data should be submitted to several conditions or tests. After this checking, data that are not flagged by any of the tests are released as plausible data. In this work, a bibliographical review of quality control tests has been performed for the common meteorological variables (ambient temperature, relative humidity and wind speed) and for the usual solar radiometric variables (horizontal global and diffuse components of the solar radiation and the beam normal component). The different tests have been grouped according to the variable and the averaging period (sub-hourly, hourly, daily and monthly averages). The quality tests may be classified as follows:
    • Range checks: tests that verify values are within a specific range. There are two types of range checks, those based on extrema and those based on rare observations.
    • Step checks: tests aimed at detecting unrealistic jumps or stagnation in the time series.
    • Consistency checks: tests that verify the relationship between two or more time series.
    The gathered quality tests are applicable at all latitudes, as they have not been optimized regionally or seasonally, with the aim of being generic. They have been applied to ground measurements in several geographic locations, which resulted in the detection of some control tests that are no longer adequate, for different reasons. After the modification of some tests, based on our experience, a set of quality control tests is now presented, updated according to technological advances and classified. The presented set of quality tests allows radiation and meteorological data to be screened in order to assess their plausibility for use as inputs in theoretical or empirical methods for scientific research. The research leading to these results has partly received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement no. 262892 (ENDORSE project).
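
    The three families of tests above translate directly into simple vectorized flags. The sketch below applies a range, step, and closure consistency check to an illustrative hourly irradiance series; the thresholds are placeholders and would need site- and sensor-specific tuning.

        # Hedged sketch of range, step and consistency checks for irradiance data.
        import numpy as np

        def range_check(ghi, low=0.0, high=1500.0):
            return (ghi < low) | (ghi > high)             # physically implausible values

        def step_check(ghi, max_step=800.0):
            step = np.abs(np.diff(ghi, prepend=ghi[0]))
            return step > max_step                        # unrealistic jumps between samples

        def consistency_check(ghi, diffuse, direct, cos_zenith, tol=0.08):
            closure = diffuse + direct * cos_zenith       # GHI should close the component budget
            return np.abs(ghi - closure) > tol * np.maximum(ghi, 1.0)

        ghi     = np.array([0.0, 120.0, 480.0, 1600.0, 450.0, 300.0])
        diffuse = np.array([0.0,  60.0, 110.0,  120.0, 100.0,  90.0])
        direct  = np.array([0.0, 200.0, 700.0,  900.0, 650.0, 400.0])
        cosz    = np.array([0.05, 0.30, 0.55, 0.60, 0.55, 0.50])

        flagged = range_check(ghi) | step_check(ghi) | consistency_check(ghi, diffuse, direct, cosz)
        print(flagged)   # True marks samples failing at least one test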

  16. A Unified Overset Grid Generation Graphical Interface and New Concepts on Automatic Gridding Around Surface Discontinuities

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Akien, Edwin (Technical Monitor)

    2002-01-01

    For many years, generation of overset grids for complex configurations has required the use of a number of different independently developed software utilities. Results created by each step were then visualized using a separate visualization tool before moving on to the next. A new software tool called OVERGRID was developed which allows the user to perform all the grid generation steps and visualization under one environment. OVERGRID provides grid diagnostic functions such as surface tangent and normal checks as well as grid manipulation functions such as extraction, extrapolation, concatenation, redistribution, smoothing, and projection. Moreover, it also contains hyperbolic surface and volume grid generation modules that are specifically suited for overset grid generation. It is the first time that such a unified interface existed for the creation of overset grids for complex geometries. New concepts on automatic overset surface grid generation around surface discontinuities will also be briefly presented. Special control curves on the surface such as intersection curves, sharp edges, open boundaries, are called seam curves. The seam curves are first automatically extracted from a multiple panel network description of the surface. Points where three or more seam curves meet are automatically identified and are called seam corners. Seam corner surface grids are automatically generated using a singular axis topology. Hyperbolic surface grids are then grown from the seam curves that are automatically trimmed away from the seam corners.

  17. Semi-automatic handling of meteorological ground measurements using WeatherProg: prospects and practical implications

    NASA Astrophysics Data System (ADS)

    Langella, Giuliano; Basile, Angelo; Bonfante, Antonello; De Mascellis, Roberto; Manna, Piero; Terribile, Fabio

    2016-04-01

    WeatherProg is a computer program for the semi-automatic handling of data measured at ground stations within a climatic network. The program performs a set of tasks ranging from gathering raw point-based sensor measurements to the production of digital climatic maps. Originally the program was developed as the baseline asynchronous engine for weather records management within the SOILCONSWEB Project (LIFE08 ENV/IT/000408), in which daily and hourly data were used to run water balance models in the soil-plant-atmosphere continuum or pest simulation models. WeatherProg can be configured to automatically perform the following main operations: 1) data retrieval; 2) data decoding and ingestion into a database (e.g. SQL based); 3) data checking to recognize missing and anomalous values (using a set of differently combined checks including logical, climatological, spatial, temporal and persistence checks); 4) infilling of data flagged as missing or anomalous (deterministic or statistical methods); 5) spatial interpolation based on alternative/comparative methods such as inverse distance weighting, iterative regression kriging, and a weighted least squares regression (based on physiography), using an approach similar to PRISM; 6) data ingestion into a geodatabase (e.g. PostgreSQL+PostGIS or rasdaman). There is an increasing demand for digital climatic maps both for research and development (there is a gap between the majority of scientific modelling approaches, which require digital climate maps, and the available gauged measurements) and for practical applications (e.g. the need to improve the management of weather records, which in turn raises the support provided to farmers). The demand is particularly burdensome considering the requirement to handle climatic data at the daily time step (e.g. in soil hydrological modelling) or even at the hourly time step (e.g. risk modelling in phytopathology). The key advantage of WeatherProg is the ability to perform all the required operations and calculations automatically, except for the need for human interaction on specific issues (such as deciding whether a measurement is an anomaly or not according to the detected temporal and spatial variations relative to contiguous points). The program runs from the command line and is well suited to cascade modelling within different contexts belonging to agriculture, phytopathology and the environment. In particular, it can be a powerful tool to set up cutting-edge regional web services based on weather information. Indeed, it can support territorial agencies in charge of meteorological and phytopathological bulletins.
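
    Two of the steps listed above, anomaly flagging and spatial interpolation, are easy to illustrate. The sketch below uses a plain climatological range test and inverse distance weighting; it is a simplified stand-in, not WeatherProg's actual check suite or its PRISM-like regression.

        # Hedged sketch: flag anomalous station values, then interpolate the
        # surviving values to a target point with inverse distance weighting.
        import numpy as np

        def flag_anomalies(values, lo=-30.0, hi=50.0):
            values = np.asarray(values, dtype=float)
            bad = np.isnan(values) | (values < lo) | (values > hi)
            return np.where(bad, np.nan, values)          # anomalies become missing

        def idw(xy_stations, values, xy_target, power=2.0):
            values = np.asarray(values, dtype=float)
            ok = ~np.isnan(values)                        # use only plausible stations
            d = np.linalg.norm(np.asarray(xy_stations)[ok] - np.asarray(xy_target), axis=1)
            w = 1.0 / np.maximum(d, 1e-6) ** power
            return float(np.sum(w * values[ok]) / np.sum(w))

        temps    = flag_anomalies([12.3, 11.8, 99.0, np.nan, 13.1])   # 99.0 is flagged out
        stations = [(0, 0), (10, 0), (5, 5), (0, 10), (10, 10)]       # illustrative coordinates
        print(round(idw(stations, temps, (4, 4)), 2))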

  18. Research on the Effect of Welding Speed on the Quality of Welding Seam Based on the Local Dry Underwater Welding

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Chen, Wu; Wang, Huagang; Ba, Jinyu; Li, Bing

    2017-12-01

    The repair of a nuclear spent fuel pool places high requirements on the quality of welding, and the welding speed directly affects weld quality when local dry automatic underwater welding is used to repair a damaged surface. Local dry automatic underwater welding tests, based on a double-layer shrinkage nozzle welding chamber, were carried out with all other welding conditions kept identical and 20 cm taken as the experimental condition. Extensive experiments show that when the welding speed is approximately 48 cm/min the weld quality is high and meets the design requirements.

  19. Progress on the Journey to Total Quality Management: Using the Myers-Briggs Type Indicator and the Adjective Check List in Management Development.

    ERIC Educational Resources Information Center

    Mani, Bonnie G.

    1995-01-01

    In an Internal Revenue Service office using total quality management (TQM), the management development program uses Myers Briggs Type Indicator and Adjective Check List for manager self-assessment. Because management commitment is essential to TQM, the process is a way of enhancing leadership skills and demonstrating appreciation of diversity. (SK)

  20. The Automation of Nowcast Model Assessment Processes

    DTIC Science & Technology

    2016-09-01

    that will automate real-time WRE-N model simulations, collect and quality control check weather observations for assimilation and verification, and...domains centered near White Sands Missile Range, New Mexico, where the Meteorological Sensor Array (MSA) will be located. The MSA will provide...observations and performing quality -control checks for the pre-forecast data assimilation period. 2. Run the WRE-N model to generate model forecast data

  1. Software Implements a Space-Mission File-Transfer Protocol

    NASA Technical Reports Server (NTRS)

    Rundstrom, Kathleen; Ho, Son Q.; Levesque, Michael; Sanders, Felicia; Burleigh, Scott; Veregge, John

    2004-01-01

    CFDP is a computer program that implements the CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol, which is an international standard for automatic, reliable transfers of files of data between locations on Earth and in outer space. CFDP administers concurrent file transfers in both directions, delivery of data out of transmission order, reliable and unreliable transmission modes, and automatic retransmission of lost or corrupted data by use of one or more of several lost-segment-detection modes. The program also implements several data-integrity measures, including file checksums and optional cyclic redundancy checks for each protocol data unit. The metadata accompanying each file can include messages to user application programs and commands for operating on remote file systems.
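
    The data-integrity measures mentioned above (a file checksum and a per-protocol-data-unit CRC) can be illustrated with the simplified sketch below. It is a rough stand-in for the idea only; the exact CCSDS CFDP checksum and CRC definitions are not reproduced here.

        # Hedged illustration of a 32-bit file checksum and a per-PDU CRC check.
        import zlib

        def file_checksum32(data: bytes) -> int:
            # Sum the file as 4-byte big-endian words, modulo 2**32 (tail zero-padded).
            padded = data + b"\x00" * (-len(data) % 4)
            total = 0
            for i in range(0, len(padded), 4):
                total = (total + int.from_bytes(padded[i:i + 4], "big")) & 0xFFFFFFFF
            return total

        def pdu_with_crc(payload: bytes) -> bytes:
            return payload + zlib.crc32(payload).to_bytes(4, "big")

        def pdu_is_intact(pdu: bytes) -> bool:
            payload, crc = pdu[:-4], int.from_bytes(pdu[-4:], "big")
            return zlib.crc32(payload) == crc

        data = b"telemetry file contents"
        print(hex(file_checksum32(data)), pdu_is_intact(pdu_with_crc(b"segment-0")))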

  2. Benefit of the Use of GCxGC/MS Profiles for 1D GC/MS Data Treatment Illustrated by the Analysis of Pyrolysis Products from East Asian Handmade Papers

    NASA Astrophysics Data System (ADS)

    Han, Bin; Lob, Silvia; Sablier, Michel

    2018-06-01

    In this study, we report the use of pyrolysis-GCxGC/MS profiles for an optimized treatment of data obtained from pyrolysis-GC/MS combined with the automatic deconvolution software Automated Mass Spectral Deconvolution and Identification System (AMDIS). The method was illustrated by the characterization of marker compounds of East Asian handmade papers through the examination of pyrolysis-GCxGC/MS data to obtain information that was used for manually identifying low-concentration and co-eluting compounds in 1D GC/MS data. The results showed that the merits of a higher separation power for co-eluting compounds and a better sensitivity for low-concentration compounds offered by a GCxGC system can be used effectively for AMDIS 1D GC/MS data treatment: (i) the compound distribution in pyrolysis-GCxGC/MS profiles can be used as a "peak finder" for the manual check of low-concentration and co-eluting compound identification in 1D GC/MS data, and (ii) pyrolysis-GCxGC/MS profiles can provide better-quality mass spectra with higher observed match factors in the AMDIS automatic match process. The combination of 2D profiles with AMDIS was shown to contribute efficiently to a better characterization of compound profiles in the chromatograms obtained by 1D analysis by focusing on the mass spectral identification.

  3. Automatic colorimetric calibration of human wounds

    PubMed Central

    2010-01-01

    Background Recently, digital photography in medicine has come to be considered an acceptable tool in many clinical domains, e.g. wound care. Although ever higher resolutions are available, reproducibility is still poor and visual comparison of images remains difficult. This is even more the case for measurements performed on such images (colour, area, etc.). This problem is often neglected and images are freely compared and exchanged without further thought. Methods The first experiment checked whether camera settings or lighting conditions could negatively affect the quality of colorimetric calibration. Digital images plus a calibration chart were exposed to a variety of conditions. Precision and accuracy of colours after calibration were quantitatively assessed with a probability distribution for perceptual colour differences (dE_ab). The second experiment was designed to assess the impact of the automatic calibration procedure (i.e. chart detection) on real-world measurements. 40 different images of real wounds were acquired and a region of interest was selected in each image. 3 rotated versions of each image were automatically calibrated and colour differences were calculated. Results 1st experiment: Colour differences between the measurements and real spectrophotometric measurements reveal median dE_ab values of 6.40 for the proper patches of calibrated normal images and 17.75 for uncalibrated images, demonstrating an important improvement in accuracy after calibration. The reproducibility, visualized by the probability distribution of the dE_ab errors between 2 measurements of the patches of the images, has a median of 3.43 dE_ab for all calibrated images and 23.26 dE_ab for all uncalibrated images. If we restrict ourselves to the proper patches of normal calibrated images the median is only 2.58 dE_ab. Wilcoxon rank-sum testing between uncalibrated normal images and calibrated normal images with proper squares gave p-values close to 0 (p < 0.05), demonstrating a highly significant improvement in reproducibility. In the second experiment, the reproducibility of the chart detection during automatic calibration is presented using a probability distribution of dE_ab errors between 2 measurements of the same ROI. Conclusion The investigators proposed an automatic colour calibration algorithm that ensures reproducible colour content of digital images. Evidence was provided that images taken with commercially available digital cameras can be calibrated independently of any camera settings and illumination features. PMID:20298541
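
    The dE_ab metric quoted above is the CIE76 colour difference, the Euclidean distance between two colours in CIELAB space. A minimal sketch follows; the Lab values are arbitrary examples, not measurements from the study.

        # Hedged sketch: CIE76 delta E*ab between a measured and a reference colour.
        import numpy as np

        def delta_e_ab(lab1, lab2):
            return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

        patch_measured  = (52.1, 18.4, 10.2)   # L*, a*, b* of a calibrated patch (example)
        patch_reference = (50.0, 20.0, 12.0)   # spectrophotometric reference (example)
        print(round(delta_e_ab(patch_measured, patch_reference), 2))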

  4. Advances in the quantification of mitochondrial function in primary human immune cells through extracellular flux analysis.

    PubMed

    Nicholas, Dequina; Proctor, Elizabeth A; Raval, Forum M; Ip, Blanche C; Habib, Chloe; Ritou, Eleni; Grammatopoulos, Tom N; Steenkamp, Devin; Dooms, Hans; Apovian, Caroline M; Lauffenburger, Douglas A; Nikolajczyk, Barbara S

    2017-01-01

    Numerous studies show that mitochondrial energy generation determines the effectiveness of immune responses. Furthermore, changes in mitochondrial function may regulate lymphocyte function in inflammatory diseases like type 2 diabetes. Analysis of lymphocyte mitochondrial function has been facilitated by introduction of 96-well format extracellular flux (XF96) analyzers, but the technology remains imperfect for analysis of human lymphocytes. Limitations in XF technology include the lack of practical protocols for analysis of archived human cells, and inadequate data analysis tools that require manual quality checks. Current analysis tools for XF outcomes are also unable to automatically assess data quality and delete untenable data from the relatively high number of biological replicates needed to power complex human cell studies. The objectives of work presented herein are to test the impact of common cellular manipulations on XF outcomes, and to develop and validate a new automated tool that objectively analyzes a virtually unlimited number of samples to quantitate mitochondrial function in immune cells. We present significant improvements on previous XF analyses of primary human cells that will be absolutely essential to test the prediction that changes in immune cell mitochondrial function and fuel sources support immune dysfunction in chronic inflammatory diseases like type 2 diabetes.

  5. Automatic, nondestructive test monitors in-process weld quality

    NASA Technical Reports Server (NTRS)

    Deal, F. C.

    1968-01-01

    Instrument automatically and nondestructively monitors the quality of welds produced in microresistance welding. It measures the infrared energy generated in the weld as the weld is made and compares this energy with maximum and minimum limits of infrared energy values previously correlated with acceptable weld-strength tolerances.

  6. MO-B-BRB-01: Optimize Treatment Planning Process in Clinical Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, W.

    The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic body radiation therapy (SBRT) and radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demand a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include:
    Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS.
    Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process.
    Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning.
    Reduction of delay between planning steps with Lean systems due to: (a) communication, (b) limited resources, (c) contour, (d) plan approval, (e) treatment.
    Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality.
    Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.

  7. MO-B-BRB-00: Optimizing the Treatment Planning Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic body radiation therapy (SBRT) and radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demand a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include:
    Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS.
    Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process.
    Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning.
    Reduction of delay between planning steps with Lean systems due to: (a) communication, (b) limited resources, (c) contour, (d) plan approval, (e) treatment.
    Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality.
    Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.

  8. MO-B-BRB-03: Systems Engineering Tools for Treatment Planning Process Optimization in Radiation Medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapur, A.

    The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic body radiation therapy (SBRT) and radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demand a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include:
    Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS.
    Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process.
    Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning.
    Reduction of delay between planning steps with Lean systems due to: (a) communication, (b) limited resources, (c) contour, (d) plan approval, (e) treatment.
    Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality.
    Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.

  9. Computing Environments for Data Analysis. Part 3. Programming Environments.

    DTIC Science & Technology

    1986-05-21

    to understand how the existing system works and how to modify them to get the desired effect. This depends on the programming ...editor that performs automatic syntax checking for all the programming languages. 3.3 How S fits in To make efficient use of the machine (maximize the... programming , manuscript from Symbolics, Inc., 5 Cambridge Center, Cambridge, Mass. 02142. [101 DEITEL H.M., (1983) An Introduction to Operating

  10. Design of wideband solar ultraviolet radiation intensity monitoring and control system

    NASA Astrophysics Data System (ADS)

    Ye, Linmao; Wu, Zhigang; Li, Yusheng; Yu, Guohe; Jin, Qi

    2009-08-01

    According to the principles of SCM (Single Chip Microcomputer) and computer communication techniques, the system is composed of chips such as the ATML89C51 and ADL0809, integrated circuits, and sensors for UV radiation; it is designed for monitoring and controlling the UV index. The system can automatically collect UV index data, analyze and check the history database, and study the pattern of UV radiation in the region.

  11. YIP Formal Synthesis of Software-Based Control Protocols for Fractionated,Composable Autonomous Systems

    DTIC Science & Technology

    2016-07-08

    Systems Using Automata Theory and Barrier Certificates We developed a sound but incomplete method for the computational verification of specifications...method merges ideas from automata-based model checking with those from control theory including so-called barrier certificates and optimization-based... Automata theory meets barrier certificates: Temporal logic verification of nonlinear systems,” IEEE Transactions on Automatic Control, 2015. [J2] R

  12. Qualitative Maintenance Experience Handbook. P-3C/S-3A Supplement.

    DTIC Science & Technology

    1977-06-15

    axle which automatically assists in disc alignment, a positive feature, in easing maintenance and preventing casual damage. Brake assemblies should...Reverse Of Removal Use brake tool to align brake discs . After Installation Actions: _ Bleed _ Rig _ Adjust X Service Lubricate - Boresight. _ Other...Hydraulic I Access Required: Test Equipment Required: Actions: 1. Check clearance of discs after brakes are put on. 2. Apply brakes . 8 ANALYST’S COMMENTS

  13. Operator’s Manual. Prototype Heavy Rescue/Fire Fighting Vehicle

    DTIC Science & Technology

    1980-09-01

    system for emergency operation if pressure is lost in either parking or service brake systems . The system is operational automatically and is...controlled by the foot treadle ’sive. It will provide for TWO full brake applications and ONE release. ELECTRICAL SYSTEM A dual battery system is utilized for...cleaner. * Lubricate chassis. . Repack wheel bearings. . Inspect brake system and adjust brakes . . Replace fuel filter. . Check high and low idle.

  14. a Novel Method for Automation of 3d Hydro Break Line Generation from LIDAR Data Using Matlab

    NASA Astrophysics Data System (ADS)

    Toscano, G. J.; Gopalam, U.; Devarajan, V.

    2013-08-01

    Water body detection is necessary to generate hydro break lines, which are in turn useful in creating deliverables such as TINs, contours, DEMs from LiDAR data. Hydro flattening follows the detection and delineation of water bodies (lakes, rivers, ponds, reservoirs, streams etc.) with hydro break lines. Manual hydro break line generation is time consuming and expensive. Accuracy and processing time depend on the number of vertices marked for delineation of break lines. Automation with minimal human intervention is desired for this operation. This paper proposes using a novel histogram analysis of LiDAR elevation data and LiDAR intensity data to automatically detect water bodies. Detection of water bodies using elevation information was verified by checking against LiDAR intensity data since the spectral reflectance of water bodies is very small compared with that of land and vegetation in near infra-red wavelength range. Detection of water bodies using LiDAR intensity data was also verified by checking against LiDAR elevation data. False detections were removed using morphological operations and 3D break lines were generated. Finally, a comparison of automatically generated break lines with their semi-automated/manual counterparts was performed to assess the accuracy of the proposed method and the results were discussed.
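
    The cross-check idea described above can be sketched with a simple percentile rule: candidate water cells come from the low end of the intensity histogram and are then required to lie near a common, flat elevation. This is an illustrative simplification with synthetic data, not the paper's histogram analysis.

        # Hedged sketch: water-cell detection from LiDAR intensity and elevation.
        import numpy as np

        def water_mask(elevation, intensity, int_pct=15, elev_tol=0.3):
            low_intensity = intensity < np.percentile(intensity, int_pct)
            if not low_intensity.any():
                return np.zeros_like(elevation, dtype=bool)
            # Water surfaces are locally flat: keep low-intensity cells near the
            # median elevation of the low-intensity population.
            water_level = np.median(elevation[low_intensity])
            return low_intensity & (np.abs(elevation - water_level) < elev_tol)

        rng = np.random.default_rng(0)
        elev  = np.concatenate([10.0 + rng.normal(0, 0.05, 200),      # flat lake surface
                                rng.uniform(10.5, 40.0, 800)])        # surrounding terrain
        inten = np.concatenate([rng.uniform(0, 5, 200),               # dark water returns
                                rng.uniform(20, 200, 800)])
        print(int(water_mask(elev, inten).sum()), "cells flagged as water")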

  15. Concurrent hypercube system with improved message passing

    NASA Technical Reports Server (NTRS)

    Peterson, John C. (Inventor); Tuazon, Jesus O. (Inventor); Lieberman, Don (Inventor); Pniel, Moshe (Inventor)

    1989-01-01

    A network of microprocessors, or nodes, is interconnected in an n-dimensional cube having bidirectional communication links along the edges of the n-dimensional cube. Each node's processor network includes an I/O subprocessor dedicated to controlling communication of message packets along a bidirectional communication link with each end thereof terminating at an I/O controlled transceiver. Transmit data lines are directly connected from a local FIFO through each node's communication link transceiver. Status and control signals from the neighboring nodes are delivered over supervisory lines to inform the local node that the neighbor node's FIFO is empty and the bidirectional link between the two nodes is idle for data communication. A clocking line between neighbors clocks a message into an empty FIFO at a neighbor's node and vice versa. Either neighbor may acquire control over the bidirectional communication link at any time, and thus each node has circuitry for checking whether or not the communication link is busy or idle, and whether or not the receive FIFO is empty. Likewise, each node can empty its own FIFO and in turn deliver a status signal to a neighboring node indicating that the local FIFO is empty. The system includes features of automatic message rerouting, block message transfer and automatic parity checking and generation.

  16. Project Report: Automatic Sequence Processor Software Analysis

    NASA Technical Reports Server (NTRS)

    Benjamin, Brandon

    2011-01-01

    The Mission Planning and Sequencing (MPS) element of Multi-Mission Ground System and Services (MGSS) provides space missions with multi-purpose software to plan spacecraft activities, sequence spacecraft commands, and then integrate these products and execute them on spacecraft. Jet Propulsion Laboratory (JPL) is currently flying many missions. The processes for building, integrating, and testing the multi-mission uplink software need to be improved to meet the needs of the missions and the operations teams that command the spacecraft. The Multi-Mission Sequencing Team is responsible for collecting and processing the observations, experiments and engineering activities that are to be performed on a selected spacecraft. The collection of these activities is called a sequence, and ultimately a sequence becomes a sequence of spacecraft commands. The operations teams check the sequence to make sure that no constraints are violated. The workflow process involves sending a program start command, which activates the Automatic Sequence Processor (ASP). The ASP is currently a file-based system that is composed of scripts written in perl, c-shell and awk. Once this start process is complete, the system checks for errors and aborts if there are any; otherwise the system converts the commands to binary, and then sends the resultant information to be radiated to the spacecraft.

  17. Improving automation standards via semantic modelling: Application to ISA88.

    PubMed

    Dombayci, Canan; Farreres, Javier; Rodríguez, Horacio; Espuña, Antonio; Graells, Moisès

    2017-03-01

    Standardization is essential for automation. Extensibility, scalability, and reusability are important features for automation software that rely on the efficient modelling of the addressed systems. The work presented here is part of the ongoing development of a methodology for semi-automatic ontology construction from technical documents. The main aim of this work is to systematically check the consistency of technical documents and to support the improvement of technical document consistency. The formalization of conceptual models and the subsequent writing of technical standards are simultaneously analyzed, and guidelines are proposed for application to future technical standards. Three paradigms are discussed for the development of domain ontologies from technical documents, starting from the current state of the art, continuing with the intermediate method presented and used in this paper, and ending with the suggested paradigm for the future. The ISA88 Standard is taken as a representative case study. Linguistic techniques from the semi-automatic ontology construction methodology are applied to the ISA88 Standard, and different modelling and standardization aspects that are worth sharing with the automation community are addressed. This study discusses different paradigms for developing and sharing conceptual models for the subsequent development of automation software, along with presenting the systematic consistency checking method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Health Checks in Primary Care for Adults with Intellectual Disabilities: How Extensive Should They Be?

    ERIC Educational Resources Information Center

    Chauhan, U.; Kontopantelis, E.; Campbell, S.; Jarrett, H.; Lester, H.

    2010-01-01

    Background: Routine health checks have gained prominence as a way of detecting unmet need in primary care for adults with intellectual disabilities (ID) and general practitioners are being incentivised in the UK to carry out health checks for many conditions through an incentivisation scheme known as the Quality and Outcomes Framework (QOF).…

  19. Automatic Assembly of Combined Checkingfixture for Auto-Body Components Based Onfixture Elements Libraries

    NASA Astrophysics Data System (ADS)

    Jiang, Jingtao; Sui, Rendong; Shi, Yan; Li, Furong; Hu, Caiqi

    In this paper, 3-D models of combined fixture elements are designed, classified by their functions, and stored in the computer as libraries of supporting elements, jointing elements, basic elements, localization elements, clamping elements, adjusting elements, etc. Automatic assembly of a 3-D combined checking fixture for an auto-body part is then presented based on modularization theory. In the virtual auto-body assembly space, a locating-constraint mapping technique and an assembly rule-based reasoning technique are used to calculate the positions of the modular elements according to the localization points and clamp points of the auto-body part. The auto-body part model is transformed from its own coordinate system to the virtual assembly space by a homogeneous transformation matrix. Automatic assembly of the different functional fixture elements and the auto-body part is implemented with API functions based on the secondary development of UG. It is proven in practice that the method in this paper is feasible and highly efficient.

  20. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models have been in high demand by many public and private organisations, and steadily growing capabilities in both quality and quantity are further increasing this demand. The quality evaluation of these 3D models is a relevant issue both from the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation has been performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and the measures are also assessed for their validity from the evaluation point of view.
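
    Completeness and correctness, the two headline notions above, correspond to recall and precision over matched model elements. A minimal sketch with illustrative counts follows; the six actual measures of the paper are not reproduced here.

        # Hedged sketch: completeness (recall) and correctness (precision) from
        # counts of reconstructed elements matched to reference elements.
        def completeness(matched_reference, total_reference):
            return matched_reference / total_reference            # share of reality recovered

        def correctness(matched_reconstructed, total_reconstructed):
            return matched_reconstructed / total_reconstructed    # share of the model that is real

        print(completeness(87, 100), correctness(87, 95))         # illustrative counts only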

  1. CT-based patient modeling for head and neck hyperthermia treatment planning: manual versus automatic normal-tissue-segmentation.

    PubMed

    Verhaart, René F; Fortunati, Valerio; Verduijn, Gerda M; van Walsum, Theo; Veenland, Jifke F; Paulides, Margarethus M

    2014-04-01

    Clinical trials have shown that hyperthermia, as an adjuvant to radiotherapy and/or chemotherapy, improves treatment of patients with locally advanced or recurrent head and neck (H&N) carcinoma. Hyperthermia treatment planning (HTP) guided H&N hyperthermia is being investigated, which requires patient-specific 3D patient models derived from computed tomography (CT) images. To decide whether a recently developed automatic segmentation algorithm can be introduced in the clinic, we compared the impact of manual and automatic normal-tissue segmentation variations on HTP quality. CT images of seven patients were segmented automatically and manually by four observers, to study inter-observer and intra-observer geometrical variation. To determine the impact of this variation on HTP quality, HTP was performed using the automatic segmentation and the manual segmentation of each observer, for each patient. This impact was compared to other sources of patient model uncertainties, i.e. varying grid sizes and dielectric tissue properties. Despite geometrical variations, manually and automatically generated 3D patient models resulted in an equal, i.e. 1%, variation in HTP quality. This variation was minor with respect to the total of the other sources of patient model uncertainty, i.e. 11.7%. Automatically generated 3D patient models can be introduced in the clinic for H&N HTP. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Automatic Control of the Concrete Mixture Homogeneity in Cycling Mixers

    NASA Astrophysics Data System (ADS)

    Anatoly Fedorovich, Tikhonov; Drozdov, Anatoly

    2018-03-01

    The article describes the factors affecting concrete mixture quality that are related to the moisture content of the aggregates, since the effectiveness of concrete mixture production is largely determined by the availability of quality management tools at all stages of the technological process. It is established that unaccounted-for moisture in the aggregates adversely affects the homogeneity of the concrete mixture and, accordingly, the strength of building structures. A new control method and an automatic control system for concrete mixture homogeneity during the mixing of components are proposed, since the task of producing a consistent concrete mixture is performed by the automatic control system of the kneading-and-mixing machinery with operational automatic control of homogeneity. Theoretical underpinnings of the control of mixture homogeneity are presented, which relate homogeneity to changes in the frequency of the vibrodynamic vibrations of the mixer body. The structure of the technical means of the automatic control system for regulating the supply of water is determined depending on the change in concrete mixture homogeneity during the continuous mixing of components. The following technical means for establishing automatic control have been chosen: vibro-acoustic sensors, remote terminal units, electropneumatic control actuators, etc. To characterize the quality of the automatic control, the system offers a structural flowchart with transfer functions that determine the operation of the automatic control system in the transient dynamic mode.

  3. Model Checking - My 27-Year Quest to Overcome the State Explosion Problem

    NASA Technical Reports Server (NTRS)

    Clarke, Ed

    2009-01-01

    Model Checking is an automatic verification technique for state-transition systems that are finite-state or that have finite-state abstractions. In the early 1980s, in a series of joint papers with my graduate students E.A. Emerson and A.P. Sistla, we proposed that Model Checking could be used for verifying concurrent systems and gave algorithms for this purpose. At roughly the same time, Joseph Sifakis and his student J.P. Queille at the University of Grenoble independently developed a similar technique. Model Checking has been used successfully to reason about computer hardware and communication protocols and is beginning to be used for verifying computer software. Specifications are written in temporal logic, which is particularly valuable for expressing concurrency properties. An intelligent, exhaustive search is used to determine if the specification is true or not. If the specification is not true, the Model Checker will produce a counterexample execution trace that shows why the specification does not hold. This feature is extremely useful for finding obscure errors in complex systems. The main disadvantage of Model Checking is the state-explosion problem, which can occur if the system under verification has many processes or complex data structures. Although the state-explosion problem is inevitable in the worst case, over the past 27 years considerable progress has been made on the problem for certain classes of state-transition systems that occur often in practice. In this talk, I will describe what Model Checking is, how it works, and the main techniques that have been developed for combating the state-explosion problem.

  4. Superiority of automatic remote monitoring compared with in-person evaluation for scheduled ICD follow-up in the TRUST trial - testing execution of the recommendations

    PubMed Central

    Varma, Niraj; Michalski, Justin; Stambler, Bruce; Pavri, Behzad B.

    2014-01-01

    Aims To test recommended implantable cardioverter defibrillator (ICD) follow-up methods by ‘in-person evaluations’ (IPE) vs. ‘remote Home Monitoring’ (HM). Methods and results ICD patients were randomized 2:1 to automatic HM or to Conventional monitoring, with follow-up checks scheduled at 3, 6, 9, 12, and 15 months post-implant. Conventional patients were evaluated with IPE only. Home Monitoring patients were assessed remotely only for 1 year between 3 and 15 month evaluations. Adherence to follow-up was measured. HM and Conventional patients were similar (age 63 years, 72% male, left ventricular ejection fraction 29%, primary prevention 73%, DDD 57%). Conventional management suffered greater patient attrition during the trial (20.1 vs. 14.2% in HM, P = 0.007). Three month follow-up occurred in 84% in both groups. There was 100% adherence (5 of 5 checks) in 47.3% Conventional vs. 59.7% HM (P < 0.001). Between 3 and 15 months, HM exhibited superior (2.2×) adherence to scheduled follow-up [incidence of failed follow-up was 146 of 2421 (6.0%) in HM vs. 145 of 1098 (13.2%) in Conventional, P < 0.001] and punctuality. In HM (daily transmission success rate median 91%), transmission loss caused only 22 of 2275 (0.97%) failed HM evaluations between 3 and 15 months; others resulted from clinic oversight. The overall IPE failure rate in Conventional [193 of 1841 (10.5%)] exceeded that in HM [97 of 1484 (6.5%), P < 0.001] by 62%, i.e. HM patients remained more loyal to IPE when this was mandated. Conclusion Automatic remote monitoring better preserves patient retention and adherence to scheduled follow-up compared with IPE. Clinical trial registration NCT00336284. PMID:24595864

  5. A multiparametric automatic method to monitor long-term reproducibility in digital mammography: results from a regional screening programme.

    PubMed

    Gennaro, G; Ballaminut, A; Contento, G

    2017-09-01

    This study aims to illustrate a multiparametric automatic method for monitoring long-term reproducibility of digital mammography systems, and its application on a large scale. Twenty-five digital mammography systems employed within a regional screening programme were controlled weekly using the same type of phantom, whose images were analysed by an automatic software tool. To assess system reproducibility levels, 15 image quality indices (IQIs) were extracted and compared with the corresponding indices previously determined by a baseline procedure. The coefficients of variation (COVs) of the IQIs were used to assess the overall variability. A total of 2553 phantom images were collected from the 25 digital mammography systems from March 2013 to December 2014. Most of the systems showed excellent image quality reproducibility over the surveillance interval, with mean variability below 5%. The variability of each IQI was below 5%, with the exception of one index associated with the smallest phantom objects (0.25 mm), which was below 10%. The method applied for the reproducibility tests (multi-detail phantoms, a cloud automatic software tool to measure multiple image quality indices, and statistical process control) was proven to be effective and applicable on a large scale and to any type of digital mammography system. • Reproducibility of mammography image quality should be monitored by appropriate quality controls. • Use of automatic software tools allows image quality evaluation by multiple indices. • System reproducibility can be assessed comparing current index value with baseline data. • Overall system reproducibility of modern digital mammography systems is excellent. • The method proposed and applied is cost-effective and easily scalable.
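    The reproducibility statistic described here is essentially a coefficient of variation tracked against a baseline. A minimal sketch of such a weekly check follows (the index names and data are invented; the 5%/10% limits echo the abstract):

    ```python
    import statistics

    def cov_percent(values):
        """Coefficient of variation of a series of weekly index values, in percent."""
        return 100.0 * statistics.pstdev(values) / statistics.fmean(values)

    def reproducibility_report(weekly_iqis, limits):
        """weekly_iqis: {index_name: [weekly values]}, limits: {index_name: max COV %}."""
        report = {}
        for name, values in weekly_iqis.items():
            cov = cov_percent(values)
            report[name] = (round(cov, 2), "OK" if cov <= limits.get(name, 5.0) else "OUT OF CONTROL")
        return report

    # Illustrative data: two image quality indices monitored over five weeks.
    data = {"contrast_0.25mm": [0.41, 0.39, 0.44, 0.40, 0.42],
            "noise": [12.1, 12.3, 11.9, 12.2, 12.0]}
    print(reproducibility_report(data, {"contrast_0.25mm": 10.0, "noise": 5.0}))
    ```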

  6. Evaluation of automatic image quality assessment in chest CT - A human cadaver study.

    PubMed

    Franck, Caro; De Crop, An; De Roo, Bieke; Smeets, Peter; Vergauwen, Merel; Dewaele, Tom; Van Borsel, Mathias; Achten, Eric; Van Hoof, Tom; Bacher, Klaus

    2017-04-01

    The evaluation of clinical image quality (IQ) is important to optimize CT protocols and to keep patient doses as low as reasonably achievable. Considering the significant amount of effort needed for human observer studies, automatic IQ tools are a promising alternative. The purpose of this study was to evaluate automatic IQ assessment in chest CT using Thiel-embalmed cadavers. Chest CTs of Thiel-embalmed cadavers were acquired at different exposures. Clinical IQ was determined by performing a visual grading analysis. Physical-technical IQ (noise, contrast-to-noise and contrast-detail) was assessed in a Catphan phantom. Soft and sharp reconstructions were made with filtered back projection and two strengths of iterative reconstruction. In addition to the classical IQ metrics, an automatic algorithm was used to calculate image quality scores (IQs). To be able to compare datasets reconstructed with different kernels, the IQs values were normalized. Good correlations were found between IQs and the measured physical-technical image quality: noise (ρ=-1.00), contrast-to-noise (ρ=1.00) and contrast-detail (ρ=0.96). The correlation coefficients between IQs and the observed clinical image quality of soft and sharp reconstructions were 0.88 and 0.93, respectively. The automatic scoring algorithm is a promising tool for the evaluation of thoracic CT scans in daily clinical practice. It allows monitoring of the image quality of a chest protocol over time, without human intervention. Different reconstruction kernels can be compared after normalization of the IQs. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
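    As a sketch of how normalized automatic scores might be compared with observer scores (the min-max normalization and all values below are assumptions, not the study's IQs algorithm):

    ```python
    # Normalize automatic image-quality scores within one reconstruction kernel and
    # correlate them with visual grading analysis (VGA) scores.
    import numpy as np
    from scipy.stats import spearmanr

    def normalize_per_kernel(scores):
        """Min-max normalize a list of scores within one kernel (an assumed stand-in
        for the normalization mentioned in the abstract)."""
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min())

    iqs_soft = [0.31, 0.45, 0.58, 0.72, 0.80]     # automatic scores, soft kernel (made up)
    vga_soft = [1.8, 2.4, 2.9, 3.6, 3.9]          # observer scores for the same exposures

    rho, p = spearmanr(normalize_per_kernel(iqs_soft), vga_soft)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
    ```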

  7. Statistical Methods in Assembly Quality Management of Multi-Element Products on Automatic Rotor Lines

    NASA Astrophysics Data System (ADS)

    Pries, V. V.; Proskuriakov, N. E.

    2018-04-01

    To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, due to possible failures in the operation of the devices and systems of an automatic rotor line, there is always a real probability of defective (incomplete) products getting into the output process stream. Therefore, continuous sampling control of product completeness, based on the use of statistical methods, remains an important element in managing the assembly quality of multi-element mass-produced products on automatic rotor lines. A feature of continuous sampling control of multi-element product completeness in the assembly process is its destructive (breaking) inspection, which excludes the possibility of returning component parts to the process stream after sampling control and leads to a decrease in the actual productivity of the assembly equipment. Therefore, the use of statistical procedures for continuous sampling control of multi-element product completeness when assembling on automatic rotor lines requires sampling plans that ensure a minimum size of control samples. Comparison of the values of the limit of the average output defect level for the continuous sampling plan (CSP) and for the automated continuous sampling plan (ACSP) shows the possibility of providing lower limit values for the average output defect level using the ACSP-1. Also, the average sample size when using the ACSP-1 plan is less than when using the CSP-1 plan. Thus, the application of statistical methods in the assembly quality management of multi-element products on automatic rotor lines, involving the use of the proposed plans and methods for continuous selective control, makes it possible to automate sampling control procedures and to ensure the required level of quality of the assembled products while minimizing the sample size.
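    For reference, the classical CSP-1 plan mentioned above alternates between 100% inspection and fractional inspection. A minimal simulation with Dodge-style parameters i and f is sketched below (the ACSP-1 modification proposed in the paper is not reproduced):

    ```python
    import random

    def simulate_csp1(units, i=20, f=0.1, seed=0):
        """Simulate a CSP-1 continuous sampling plan over a stream of units.
        units: sequence of booleans (True = conforming).
        i: clearance number of consecutive conforming units under 100% inspection.
        f: sampling fraction during partial inspection.
        Returns the fraction of units inspected."""
        rng = random.Random(seed)
        inspecting_all, run, inspected = True, 0, 0
        for conforming in units:
            if inspecting_all:
                inspected += 1
                run = run + 1 if conforming else 0
                if run >= i:
                    inspecting_all, run = False, 0
            elif rng.random() < f:
                inspected += 1
                if not conforming:
                    inspecting_all = True   # defect found: revert to 100% inspection
        return inspected / len(units)

    rng = random.Random(1)
    stream = [rng.random() > 0.02 for _ in range(10_000)]   # roughly 2% defectives
    print(f"average fraction inspected: {simulate_csp1(stream):.2f}")
    ```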

  8. Automatic quality assessment and peak identification of auditory brainstem responses with fitted parametric peaks.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Alvarez, Isaac; Segura, Jose Carlos; Thornton, A Roger D; Sainz, Manuel; Vargas, Jose Luis

    2014-05-01

    The recording of the auditory brainstem response (ABR) is used worldwide for hearing screening purposes. In this process, a precise estimation of the most relevant components is essential for an accurate interpretation of these signals. This evaluation is usually carried out subjectively by an audiologist. However, the use of automatic methods for this purpose is being encouraged nowadays in order to reduce human evaluation biases and ensure uniformity among test conditions, patients, and screening personnel. This article describes a new method that performs automatic quality assessment and identification of the peaks, the fitted parametric peaks (FPP). This method is based on the use of synthesized peaks that are adjusted to the ABR response. The FPP is validated, on one hand, by an analysis of amplitudes and latencies measured manually by an audiologist and automatically by the FPP method in ABR signals recorded at different stimulation rates; and on the other hand, by contrasting the performance of the FPP method with automatic evaluation techniques based on the correlation coefficient, FSP, and cross correlation with a predefined template waveform, comparing the automatic quality evaluations of these methods with subjective evaluations provided by five experienced evaluators on a set of ABR signals of different quality. The results of this study suggest (a) that the FPP method can be used to provide an accurate parameterization of the peaks in terms of amplitude, latency, and width, and (b) that the FPP remains the method that best approaches the averaged subjective quality evaluation, as well as providing the best results in terms of sensitivity and specificity in ABR signal validation. The significance of these findings and the clinical value of the FPP method are highlighted in this paper. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
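    As an illustration of fitting a parametric peak to an evoked-response waveform to recover amplitude, latency and width (a generic Gaussian-shaped peak is used purely for illustration; the actual FPP templates differ):

    ```python
    # Fit a Gaussian-shaped parametric peak to a waveform segment to recover
    # amplitude, latency and width; illustrative only, not the published FPP model.
    import numpy as np
    from scipy.optimize import curve_fit

    def peak(t, amplitude, latency, width):
        return amplitude * np.exp(-0.5 * ((t - latency) / width) ** 2)

    t = np.linspace(0.0, 10.0, 500)                     # time in ms
    signal = peak(t, 0.4, 5.6, 0.5) + 0.05 * np.random.default_rng(0).normal(size=t.size)

    p0 = [signal.max(), t[signal.argmax()], 0.5]        # rough initial guess
    (amplitude, latency, width), _ = curve_fit(peak, t, signal, p0=p0)
    print(f"amplitude={amplitude:.3f} uV, latency={latency:.2f} ms, width={width:.2f} ms")
    ```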

  9. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

    Photo enforcement devices for traffic rules such as red lights, tolls, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, and so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in some cases, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus for wide-area scenes, or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are utilized is also important. This paper presents an overview of these problems along with no-reference and reduced-reference image and video quality solutions to detect and classify such faults.
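    A minimal sketch of such ordered no-reference checks is given below; the specific thresholds, the download-error proxy and the ordering are illustrative assumptions, not the algorithm from the paper:

    ```python
    # Ordered no-reference image checks: download error, obstruction/exposure,
    # then focus. Thresholds are illustrative assumptions.
    import numpy as np
    from scipy.ndimage import laplace

    def diagnose(gray):
        """gray: 2-D numpy array of pixel intensities in [0, 255]."""
        # 1. Download error proxy: a block of identical rows at the bottom of the frame.
        if np.all(gray[-20:] == gray[-1, 0]):
            return "download error"
        # 2. Obstruction / exposure: almost no intensity variation across the scene.
        if gray.std() < 5:
            return "obstruction or severe exposure problem"
        mean = gray.mean()
        if mean < 30 or mean > 225:
            return "exposure problem"
        # 3. Focus: variance of the Laplacian as a sharpness measure.
        if laplace(gray.astype(float)).var() < 50:
            return "possible focus drift"
        return "no fault detected"

    frame = np.random.default_rng(0).integers(0, 256, size=(480, 640)).astype(np.uint8)
    print(diagnose(frame))
    ```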

  10. Feasibility Study on Fully Automatic High Quality Translation: Volume II. Final Technical Report.

    ERIC Educational Resources Information Center

    Lehmann, Winifred P.; Stachowitz, Rolf

    This second volume of a two-volume report on a fully automatic high quality translation (FAHQT) contains relevant papers contributed by specialists on the topic of machine translation. The papers presented here cover such topics as syntactical analysis in transformational grammar and in machine translation, lexical features in translation and…

  11. Development of a 30mm Frangible Projectile Crimper

    DTIC Science & Technology

    1977-02-01

    located at end of tank. Open drain valve to drain condensation. The automatic tank drain equipped compressor makes this unnecessary. PRESSURE SWITCH : The... pressure switch is automatic and will start compressor at the low pressure and stop when the maximum pressure is reached. It is adjusted to start...of the check valve, located between the compressor and the tank, together with the relief valve on pressure switch relief valve units, and the cen

  12. A Register of Underwater Acoustic Facilities. Volume 1. Western Europe

    DTIC Science & Technology

    1987-03-01

    production is still under the direction of Viggo Kjaer, while Per V. Brüel continues to direct the world-wide sales operation. 2-29 TD 7903-1...which are then made available for general sale and distribution. IKU is currently involved in a survey of areas in the Barents Sea which lie in...17 itun /3h measuring mode. Buoy surveillance: ARGOS system for automatic positioning and data transfer of internal instrument checks

  13. Laboratory Evaluation of Light Obscuration Particle Counter Contamination Limits for Aviation Fuel

    DTIC Science & Technology

    2015-11-01

    diesel product for ground use (1). At a minimum, free water and particulate by color (as specified in the appendix of ASTM D2276) are checked daily...used in the hydraulics/hydraulic fluid industry. In 1999 ISO adopted ISO 11171 Hydraulic fluid power — Calibration of automatic particle counters...for liquids, replacing ISO 4402, as an international standard for the calibration of liquid particle counters giving NIST traceability to particle

  14. Addressing the Heterogeneity of Subject Indexing in the ADS Databases

    NASA Astrophysics Data System (ADS)

    Dubin, David S.

    A drawback of the current document representation scheme in the ADS abstract service is its heterogeneous subject indexing. Several related but inconsistent indexing languages are represented in ADS. A method of reconciling some indexing inconsistencies is described. Using lexical similarity alone, one out of six ADS descriptors can be automatically mapped to some other descriptor. Analysis of postings data can direct administrators to those mergings it is most important to check for errors.
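    A minimal sketch of mapping descriptors by lexical similarity alone (the similarity measure and threshold are assumptions; the ADS procedure is not reproduced):

    ```python
    # Map descriptors from one indexing vocabulary onto another by string similarity.
    from difflib import SequenceMatcher

    def best_match(descriptor, target_vocabulary, threshold=0.85):
        """Return the most lexically similar target descriptor, or None."""
        scored = [(SequenceMatcher(None, descriptor.lower(), t.lower()).ratio(), t)
                  for t in target_vocabulary]
        score, match = max(scored)
        return match if score >= threshold else None

    targets = ["GALAXIES: SPIRAL", "STARS: VARIABLE", "INTERSTELLAR MEDIUM"]
    # A candidate merging that an administrator would still check manually:
    print(best_match("GALAXIES, SPIRAL", targets))
    ```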

  15. Image based automatic water meter reader

    NASA Astrophysics Data System (ADS)

    Jawas, N.; Indrianto

    2018-01-01

    A water meter is used as a tool to calculate water consumption. The tool works by utilizing water flow and shows the calculation result on a mechanical digit counter. In everyday use, an operator manually checks the digit counter periodically and logs the number shown by the water meter to track water consumption. This manual operation is time consuming and prone to human error. Therefore, in this paper we propose an automatic water meter digit reader that works from a digital image. The digit sequence is detected by utilizing contour information of the water meter front panel. An OCR method is then used to recognize each digit character. The digit sequence detection is an important part of the overall process and determines the success of the overall system. The results are promising, especially for sequence detection.
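    A minimal sketch of the same pipeline using commonly available libraries (OpenCV for contour detection and Tesseract for OCR are assumed to be installed; the paper's own detector is not reproduced):

    ```python
    # Detect digit regions on a meter panel by contours, then OCR each region.
    # Assumes opencv-python and pytesseract are installed; the image path is hypothetical.
    import cv2
    import pytesseract

    def read_meter_digits(image_path):
        gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # Keep plausibly digit-sized boxes and read them left to right.
        boxes = sorted(cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100)
        digits = []
        for x, y, w, h in boxes:
            roi = gray[y:y + h, x:x + w]
            text = pytesseract.image_to_string(
                roi, config="--psm 10 -c tessedit_char_whitelist=0123456789").strip()
            if text:
                digits.append(text)
        return "".join(digits)

    # print(read_meter_digits("meter_front_panel.jpg"))  # hypothetical input image
    ```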

  16. Impact of dose calibrators quality control programme in Argentina

    NASA Astrophysics Data System (ADS)

    Furnari, J. C.; de Cabrejas, M. L.; del C. Rotta, M.; Iglicki, F. A.; Milá, M. I.; Magnavacca, C.; Dima, J. C.; Rodríguez Pasqués, R. H.

    1992-02-01

    The national Quality Control (QC) programme for radionuclide calibrators started 12 years ago. Accuracy and the implementation of a QC programme were evaluated over all these years at 95 nuclear medicine laboratories where dose calibrators were in use. During all that time, the Metrology Group of CNEA has distributed 137Cs sealed sources to check stability and has been performing periodic "checking rounds" and postal surveys using unknown samples (external quality control). An account of the results of both methods is presented. At present, more than 65% of the dose calibrators measure activities with an error of less than 10%.

  17. A multilingual gold-standard corpus for biomedical concept recognition: the Mantra GSC

    PubMed Central

    Clematide, Simon; Akhondi, Saber A; van Mulligen, Erik M; Rebholz-Schuhmann, Dietrich

    2015-01-01

    Objective To create a multilingual gold-standard corpus for biomedical concept recognition. Materials and methods We selected text units from different parallel corpora (Medline abstract titles, drug labels, biomedical patent claims) in English, French, German, Spanish, and Dutch. Three annotators per language independently annotated the biomedical concepts, based on a subset of the Unified Medical Language System and covering a wide range of semantic groups. To reduce the annotation workload, automatically generated preannotations were provided. Individual annotations were automatically harmonized and then adjudicated, and cross-language consistency checks were carried out to arrive at the final annotations. Results The number of final annotations was 5530. Inter-annotator agreement scores indicate good agreement (median F-score 0.79), and are similar to those between individual annotators and the gold standard. The automatically generated harmonized annotation set for each language performed equally well as the best annotator for that language. Discussion The use of automatic preannotations, harmonized annotations, and parallel corpora helped to keep the manual annotation efforts manageable. The inter-annotator agreement scores provide a reference standard for gauging the performance of automatic annotation techniques. Conclusion To our knowledge, this is the first gold-standard corpus for biomedical concept recognition in languages other than English. Other distinguishing features are the wide variety of semantic groups that are being covered, and the diversity of text genres that were annotated. PMID:25948699

  18. From Lexical Regularities to Axiomatic Patterns for the Quality Assurance of Biomedical Terminologies and Ontologies.

    PubMed

    van Damme, Philip; Quesada-Martínez, Manuel; Cornet, Ronald; Fernández-Breis, Jesualdo Tomás

    2018-06-13

    Ontologies and terminologies have been identified as key resources for the achievement of semantic interoperability in biomedical domains. The development of ontologies is performed as a joint work by domain experts and knowledge engineers. The maintenance and auditing of these resources is also the responsibility of such experts, and this is usually a time-consuming, mostly manual task. Manual auditing is impractical and ineffective for most biomedical ontologies, especially for larger ones. An example is SNOMED CT, a key resource in many countries for codifying medical information. SNOMED CT contains more than 300,000 concepts. Consequently, its auditing requires the support of automatic methods. Many biomedical ontologies contain natural language content for humans and logical axioms for machines. The 'lexically suggest, logically define' principle means that there should be a relation between what is expressed in natural language and as logical axioms, and that such a relation should be useful for auditing and quality assurance. Moreover, this principle implies that the natural language content for humans could be used to generate the logical axioms for the machines. In this work, we propose a method that combines lexical analysis and clustering techniques to (1) identify regularities in the natural language content of ontologies; (2) cluster, by similarity, labels exhibiting a regularity; (3) extract relevant information from those clusters; and (4) propose logical axioms for each cluster with the support of axiom templates. These logical axioms can then be evaluated against the existing axioms in the ontology to check their correctness and completeness, which are two fundamental objectives in auditing and quality assurance. In this paper, we describe the application of the method to two SNOMED CT modules, a 'congenital' module, obtained using concepts exhibiting the attribute Occurrence - Congenital, and a 'chronic' module, using concepts exhibiting the attribute Clinical course - Chronic. We obtained a precision and recall of 75% and 28%, respectively, for the 'congenital' module, and 64% and 40% for the 'chronic' one. We consider these results to be promising, so our method can contribute to supporting content editors by using automatic methods for assuring the quality of biomedical ontologies and terminologies. Copyright © 2018. Published by Elsevier Inc.
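    As a rough sketch of the 'lexically suggest, logically define' idea, the snippet below groups labels sharing one lexical regularity and emits a candidate axiom template for review; it is a simplification, not the paper's clustering method:

    ```python
    # Group concept labels that share a lexical regularity and propose a
    # candidate axiom template for each group; simplified illustration only.
    from collections import defaultdict

    def group_by_regularity(labels, keyword):
        """Cluster labels containing a keyword and strip it to expose the remainder."""
        clusters = defaultdict(list)
        for label in labels:
            tokens = label.lower().split()
            if keyword in tokens:
                remainder = " ".join(t for t in tokens if t != keyword)
                clusters[remainder].append(label)
        return clusters

    labels = ["Congenital anomaly of heart", "Congenital anomaly of kidney",
              "Chronic kidney disease", "Congenital cataract"]
    for remainder, members in group_by_regularity(labels, "congenital").items():
        # Hypothetical axiom template suggested for human review:
        print(f"{members} -> candidate axiom: Occurrence some Congenital AND ({remainder})")
    ```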

  19. Recent advances in the Lesser Antilles observatories Part 2 : WebObs - an integrated web-based system for monitoring and networks management

    NASA Astrophysics Data System (ADS)

    Beauducel, François; Bosson, Alexis; Randriamora, Frédéric; Anténor-Habazac, Christian; Lemarchand, Arnaud; Saurel, Jean-Marie; Nercessian, Alexandre; Bouin, Marie-Paule; de Chabalier, Jean-Bernard; Clouard, Valérie

    2010-05-01

    Seismological and volcanological observatories have common needs and often common practical problems in multidisciplinary data monitoring applications. In fact, access to integrated data in real-time and estimation of measurement uncertainties are keys to an efficient interpretation, but instrument variety and the heterogeneity of data sampling and acquisition systems lead to difficulties that may hinder crisis management. At the Guadeloupe observatory, we have developed in the last years an operational system that attempts to answer these questions in the context of a pluri-instrumental observatory. Based on a single computer server, open source scripts (Matlab, Perl, Bash, Nagios) and a Web interface, the system provides: an extended database for networks management, stations and sensors (maps, station file with log history, technical characteristics, meta-data, photos and associated documents); web-form interfaces for manual data input/editing and export (like geochemical analysis, some of the deformation measurements, ...); routine data processing with dedicated automatic scripts for each technique, production of validated data outputs, static graphs on preset moving time intervals, and possible e-mail alarms; automatic checks of computers, acquisition processes, stations and individual sensor status with simple criteria (file updates and signal quality), displayed as synthetic pages for technical control. In the special case of seismology, WebObs includes a digital stripchart multichannel continuous seismogram associated with the EarthWorm acquisition chain (see companion paper Part 1), an event classification database, location scripts, automatic shakemaps and a regional catalog with associated hypocenter maps accessed through a user request form. This system provides real-time Internet access for integrated monitoring, has become a strong support for exchanges between scientists and technicians, and is widely open to interdisciplinary real-time modeling. It has been set up at the Martinique observatory and installation is planned this year at the Montserrat Volcanological Observatory. It is also in production at the geomagnetic observatory of Addis Abeba in Ethiopia.

  20. Orbital Signature Analyzer (OSA): A spacecraft health/safety monitoring and analysis tool

    NASA Technical Reports Server (NTRS)

    Weaver, Steven; Degeorges, Charles; Bush, Joy; Shendock, Robert; Mandl, Daniel

    1993-01-01

    Fixed or static limit sensing is employed in control centers to ensure that spacecraft parameters remain within a nominal range. However, many critical parameters, such as power system telemetry, are time-varying and, as such, their 'nominal' range is necessarily time-varying as well. Predicted data, manual limits checking, and widened limit-checking ranges are often employed in an attempt to monitor these parameters without generating excessive limits violations. Generating predicted data and manual limits checking are both resource intensive, while broadening limit ranges for time-varying parameters is clearly inadequate to detect all but catastrophic problems. OSA provides a low-cost solution by using analytically selected data as a reference upon which to base its limits. These limits are always defined relative to the time-varying reference data, rather than as fixed upper and lower limits. In effect, OSA provides individual limits tailored to each value throughout all the data. A side benefit of using relative limits is that they automatically adjust to new reference data. In addition, OSA provides a wealth of analytical by-products in its execution.
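    A minimal sketch of limit sensing relative to time-varying reference data (the tolerance value and telemetry names are illustrative; this is not the OSA implementation):

    ```python
    # Flag telemetry samples that stray from an analytically selected reference
    # profile by more than a relative tolerance.
    def relative_limit_violations(telemetry, reference, tolerance=0.05):
        """telemetry, reference: equal-length sequences sampled at the same times.
        Returns indices where |telemetry - reference| exceeds tolerance * |reference|."""
        return [i for i, (t, r) in enumerate(zip(telemetry, reference))
                if abs(t - r) > tolerance * abs(r)]

    bus_voltage_ref = [28.0, 27.5, 26.8, 27.9, 28.1]   # reference profile (made up)
    bus_voltage    = [28.1, 27.4, 24.9, 28.0, 28.2]    # current pass
    print(relative_limit_violations(bus_voltage, bus_voltage_ref))   # -> [2]
    ```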

  1. XMI2USE: A Tool for Transforming XMI to USE Specifications

    NASA Astrophysics Data System (ADS)

    Sun, Wuliang; Song, Eunjee; Grabow, Paul C.; Simmonds, Devon M.

    The UML-based Specification Environment (USE) tool supports syntactic analysis, type checking, consistency checking, and dynamic validation of invariants and pre-/post conditions specified in the Object Constraint Language (OCL). Due to its animation and analysis power, it is useful when checking critical non-functional properties such as security policies. However, the USE tool requires one to specify (i.e., "write") a model using its own textual language and does not allow one to import any model specification files created by other UML modeling tools. Hence, to make the best use of existing UML tools, we often create a model with OCL constraints using a modeling tool such as the IBM Rational Software Architect (RSA) and then use the USE tool for model validation. This approach, however, requires a manual transformation between the specifications of two different tool formats, which is error-prone and diminishes the benefit of automated model-level validations. In this paper, we describe our own implementation of a specification transformation engine that is based on the Model Driven Architecture (MDA) framework and currently supports automatic tool-level transformations from RSA to USE.

  2. Low-cost and high-speed optical mark reader based on an intelligent line camera

    NASA Astrophysics Data System (ADS)

    Hussmann, Stephan; Chan, Leona; Fung, Celine; Albrecht, Martin

    2003-08-01

    Optical Mark Recognition (OMR) is thoroughly reliable and highly efficient, provided that high standards are maintained at both the planning and implementation stages. It is necessary to ensure that OMR forms are designed with due attention to data integrity checks, that the best use is made of the features built into the OMR device, that data integrity is checked before the data are processed, and that the data are validated before they are processed. This paper describes the design and implementation of an OMR prototype system for marking multiple-choice tests automatically. Parameter testing was carried out before the platform and the multiple-choice answer sheet were designed. Position recognition and position verification methods have been developed and implemented in an intelligent line scan camera. The position recognition process is implemented in a Field Programmable Gate Array (FPGA), whereas the verification process is implemented in a micro-controller. The verified results are then sent to the Graphical User Interface (GUI) for answer checking and statistical analysis. At the end of the paper the proposed OMR system is compared with commercially available systems on the market.

  3. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source code level. In particular we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  4. The influence of lathe check depth and orientation on the bond quality of phenol-formaldehyde-bonded birch plywood

    Treesearch

    Anti Rohumaa; Christopher G. Hunt; Mark Hughes; Charles R. Frihart; Janne Logren

    2013-01-01

    During the rotary peeling of veneer for plywood or laminated veneer lumber manufacture, checks are formed in the veneer that are as deep as 70-80% of the veneer thickness. The results of this study show that, during adhesive bond testing, deep lathe checks in birch (Betula pendula Roth.) veneer significantly reduce the shear strength and the...

  5. Semantic transference for enriching multilingual biomedical knowledge resources.

    PubMed

    Pérez, María; Berlanga, Rafael

    2015-12-01

    Biomedical knowledge resources (KRs) are mainly expressed in English, and many applications using them suffer from the scarcity of knowledge in non-English languages. The goal of the present work is to take maximum profit from the lexicons of existing multilingual biomedical KRs to enrich their non-English counterparts. We propose to combine different automatic methods to generate pair-wise language alignments. More specifically, we use two well-known translation methods (GIZA++ and Moses), and we propose a new ad hoc method specially devised for multilingual KRs. Then, the resulting alignments are used to transfer semantics between KRs across their languages. Transference quality is ensured by checking the semantic coherence of the generated alignments. Experiments have been carried out over the Spanish, French and German UMLS Metathesaurus counterparts. As a result, the enriched Spanish KR can grow up to 1,514,217 concepts (originally 286,659), the French KR up to 1,104,968 concepts (originally 83,119), and the German KR up to 1,136,020 concepts (originally 86,842). Copyright © 2015 Elsevier Inc. All rights reserved.

  6. SHARD - a SeisComP3 module for Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Weber, B.; Becker, J.; Ellguth, E.; Henneberger, R.; Herrnkind, S.; Roessler, D.

    2016-12-01

    Monitoring building and structure response to strong earthquake ground shaking or human-induced vibrations in real-time forms the backbone of modern structural health monitoring (SHM). Continuous data transmission, processing and analysis drastically reduces the time decision makers need to plan an appropriate response to possible damage of high-priority buildings and structures. SHARD is a web browser based module using the SeisComP3 framework to monitor the structural health of buildings and other structures by calculating standard engineering seismology parameters and checking their exceedance in real-time. Thresholds can be defined, e.g. compliant with national building codes (IBC2000, DIN4149 or EC8), for PGA/PGV/PGD, response spectra and drift ratios. In case thresholds are exceeded, automatic or operator-driven reports are generated and sent to the decision makers. SHARD also determines waveform quality in terms of data delay and variance to report sensor status. SHARD is a well-suited tool for civil protection to simultaneously monitor multiple city-wide critical infrastructure such as hospitals, schools and governmental buildings, and structures such as bridges, dams and power substations.
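    As an illustration of the threshold check on a recorded acceleration trace (the crude integration and the 0.1 g alarm level are assumptions, not the SHARD implementation):

    ```python
    # Compute PGA and PGV from an acceleration trace and test a code-level threshold.
    import numpy as np

    def peak_ground_motion(acceleration, dt):
        """acceleration in m/s^2 sampled at interval dt; returns (PGA, PGV)."""
        velocity = np.cumsum(acceleration) * dt          # crude rectangle-rule integration
        velocity -= velocity.mean()                      # very rough drift removal
        return np.abs(acceleration).max(), np.abs(velocity).max()

    rng = np.random.default_rng(0)
    trace = 0.3 * rng.normal(size=2000)                  # synthetic 20 s record at 100 Hz
    pga, pgv = peak_ground_motion(trace, dt=0.01)
    exceeded = pga > 0.98                                # assumed alarm threshold: about 0.1 g
    print(f"PGA={pga:.2f} m/s2, PGV={pgv:.3f} m/s, alarm={'yes' if exceeded else 'no'}")
    ```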

  7. RAINBIO: a mega-database of tropical African vascular plants distributions

    PubMed Central

    Dauby, Gilles; Zaiss, Rainer; Blach-Overgaard, Anne; Catarino, Luís; Damen, Theo; Deblauwe, Vincent; Dessein, Steven; Dransfield, John; Droissart, Vincent; Duarte, Maria Cristina; Engledow, Henry; Fadeur, Geoffrey; Figueira, Rui; Gereau, Roy E.; Hardy, Olivier J.; Harris, David J.; de Heij, Janneke; Janssens, Steven; Klomberg, Yannick; Ley, Alexandra C.; Mackinder, Barbara A.; Meerts, Pierre; van de Poel, Jeike L.; Sonké, Bonaventure; Sosef, Marc S. M.; Stévart, Tariq; Stoffelen, Piet; Svenning, Jens-Christian; Sepulchre, Pierre; van der Burgt, Xander; Wieringa, Jan J.; Couvreur, Thomas L. P.

    2016-01-01

    Abstract The tropical vegetation of Africa is characterized by high levels of species diversity but is undergoing important shifts in response to ongoing climate change and increasing anthropogenic pressures. Although our knowledge of plant species distribution patterns in the African tropics has been improving over the years, it remains limited. Here we present RAINBIO, a unique comprehensive mega-database of georeferenced records for vascular plants in continental tropical Africa. The geographic focus of the database is the region south of the Sahel and north of Southern Africa, and the majority of data originate from tropical forest regions. RAINBIO is a compilation of 13 datasets, either publicly available or personal ones. Numerous in-depth data quality checks, both automatic and manual (the latter via several African flora experts), were undertaken for georeferencing, standardization of taxonomic names, and identification and merging of duplicated records. The resulting RAINBIO data allow exploration and extraction of distribution data for 25,356 native tropical African vascular plant species, which represents ca. 89% of all known plant species in the area of interest. Habit information is also provided for 91% of these species. PMID:28127234

  8. Financial management using a computerized system for evaluating health care invoices.

    PubMed

    Magnezi, Racheli; Ashkenazi, Isaac

    2005-02-01

    The Medical Corps of the Israel Defense Forces (IDF) provides health care services for hundreds of thousands of soldiers in IDF clinics and by purchasing services from civilian institutes. Monthly invoices from civilian institutes are so numerous that most are paid with insufficient scrutiny and valuable information regarding soldiers' health care is lost. Our objective was to develop a computerized system for reviewing invoices and gathering data. Based on Oracle software (Oracle, Redwood Shores, California), the system stores the terms of agreements with medical institutes, enters billing data, calculates invoice totals, manages information, and generates reports. It automatically checks for duplicate invoices and confirms payment. The system allows users to view data for decision-making, creates insurance claim files, identifies incorrect charges, assists in quality assurance, and maintains personal patient records. With the system in operation since 2001, savings significantly increased, to approximately 5% of the IDF health care budget. On the basis of information gathered by the system, changes in medical procedures were implemented that are expected to generate even greater savings.

  9. Quality Control of Meteorological Observations

    NASA Technical Reports Server (NTRS)

    Collins, William; Dee, Dick; Rukhovets, Leonid

    1999-01-01

    The problem of meteorological observation quality control (QC) was first formulated by L.S. Gandin at the Main Geophysical Observatory in the 1970s. Later, in 1988, L.S. Gandin began adapting his ideas on complex quality control (CQC) to the operational environment at the National Centers for Environmental Prediction. The CQC was first applied by L.S. Gandin and his colleagues to the detection and correction of errors in rawinsonde heights and temperatures using a complex of hydrostatic residuals. Later, a full complex of residuals, vertical and horizontal optimal interpolations, and baseline checks were added for the checking and correction of a wide range of meteorological variables. Some other of Gandin's ideas were applied and substantially developed at other meteorological centers. A new statistical QC was recently implemented in the Goddard Data Assimilation System. The central component of any quality control is a buddy check, which is a test of individual suspect observations against available nearby non-suspect observations. A novel feature of this test is that the error variances used for the QC decision are re-estimated on-line. As a result, the allowed tolerances for suspect observations can depend on local atmospheric conditions. The system is then better able to accept extreme values observed in deep cyclones, jet streams and so on. The basic statements of this adaptive buddy check are described. Some results of the on-line QC, including moisture QC, are presented.
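    A minimal sketch of a buddy check of this kind (inverse-distance weighting and the tolerance rule are illustrative assumptions, not the operational implementation):

    ```python
    # Buddy check: compare a suspect observation with a value interpolated from
    # nearby non-suspect observations; tolerance scales with an error estimate.
    def buddy_check(suspect_value, neighbours, error_std, k=4.0):
        """neighbours: list of (distance_km, value). Returns (passes, expected)."""
        weights = [1.0 / max(d, 1.0) for d, _ in neighbours]      # inverse-distance weights
        expected = sum(w * v for w, (_, v) in zip(weights, neighbours)) / sum(weights)
        return abs(suspect_value - expected) <= k * error_std, expected

    neighbours = [(35.0, 12.4), (52.0, 12.9), (80.0, 13.3)]       # temperature obs (deg C)
    ok, expected = buddy_check(19.8, neighbours, error_std=0.8)
    print(f"expected ~{expected:.1f} C, suspect {'accepted' if ok else 'rejected'}")
    ```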

  10. Automatic Rotational Sky Quality Meter (R-SQM) Design and Software for Astronomical Observatories

    NASA Astrophysics Data System (ADS)

    Dogan, E.; Ozbaldan, E. E.; Shameoni, Niaei M.; Yesilyaprak, C.

    2016-12-01

    We present the new design of a Sky Quality Meter (SQM) device, an automatic rotational sky quality meter (R-SQM), developed by the DAG (Eastern Anatolia Observatory) Technical Team. The R-SQM is required for determining long-term changes in the sky quality of an astronomical observatory and consists of four SQM devices mounted at different angles on a rotating shaft so as to scan the whole sky. The system is controlled by a Raspberry Pi control card and a stepper motor with its driver, together with dedicated software.

  11. Automatic color preference correction for color reproduction

    NASA Astrophysics Data System (ADS)

    Tsukada, Masato; Funayama, Chisato; Tajima, Johji

    2000-12-01

    The reproduction of natural objects in color images has attracted a great deal of attention. Reproducing more pleasing colors of natural objects is one of the methods available to improve image quality. We developed an automatic color correction method to maintain preferred color reproduction for three significant categories: facial skin color, green grass and blue sky. In this method, a representative color in an object area to be corrected is automatically extracted from an input image, and a set of color correction parameters is selected depending on the representative color. The improvement in image quality for reproductions of natural images was more than 93 percent in subjective experiments. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.
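    A minimal sketch of the correction step, extracting a representative color from a detected region and nudging it toward a preferred color (the mask, preferred color and strength are assumptions):

    ```python
    # Shift the pixels of a detected region toward a preferred memory color.
    import numpy as np

    def correct_region(image, mask, preferred_rgb, strength=0.3):
        """image: HxWx3 float array in [0, 1]; mask: HxW boolean region (e.g. blue sky).
        Moves masked pixels a fraction of the way from the region's representative
        (mean) color toward the preferred color."""
        region = image[mask]
        representative = region.mean(axis=0)
        shift = strength * (np.asarray(preferred_rgb) - representative)
        out = image.copy()
        out[mask] = np.clip(region + shift, 0.0, 1.0)
        return out

    img = np.full((4, 4, 3), 0.5)                        # toy image
    sky = np.zeros((4, 4), dtype=bool); sky[:2] = True   # pretend the top half is sky
    corrected = correct_region(img, sky, preferred_rgb=(0.45, 0.60, 0.85))
    print(corrected[0, 0])                               # pixel nudged toward the preferred blue
    ```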

  12. Incorporating Learning Characteristics into Automatic Essay Scoring Models: What Individual Differences and Linguistic Features Tell Us about Writing Quality

    ERIC Educational Resources Information Center

    Crossley, Scott A.; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S.

    2016-01-01

    This study investigates a novel approach to automatically assessing essay quality that combines natural language processing approaches that assess text features with approaches that assess individual differences in writers such as demographic information, standardized test scores, and survey results. The results demonstrate that combining text…

  13. Feasibility Study on Fully Automatic High Quality Translation: Volume I. Final Technical Report.

    ERIC Educational Resources Information Center

    Lehmann, Winifred P.; Stachowitz, Rolf

    The object of this theoretical inquiry is to examine the controversial issue of a fully automatic high quality translation (FAHQT) in the light of past and projected advances in linguistic theory and hardware/software capability. This first volume of a two-volume report discusses the requirements of translation and aspects of human and machine…

  14. A shared computer-based problem-oriented patient record for the primary care team.

    PubMed

    Linnarsson, R; Nordgren, K

    1995-01-01

    1. INTRODUCTION. A computer-based patient record (CPR) system, Swedestar, has been developed for use in primary health care. The principal aim of the system is to support continuous quality improvement through improved information handling, improved decision-making, and improved procedures for quality assurance. The Swedestar system has evolved during a ten-year period beginning in 1984. 2. SYSTEM DESIGN. The design philosophy is based on the following key factors: a shared, problem-oriented patient record; structured data entry based on an extensive controlled vocabulary; advanced search and query functions, where the query language has the most important role; integrated decision support for drug prescribing and care protocols and guidelines; integrated procedures for quality assurance. 3. A SHARED PROBLEM-ORIENTED PATIENT RECORD. The core of the CPR system is the problem-oriented patient record. All problems of one patient, recorded by different members of the care team, are displayed on the problem list. Starting from this list, a problem follow-up can be made, one problem at a time or for several problems simultaneously. Thus, it is possible to get an integrated view, across provider categories, of those problems of one patient that belong together. This shared problem-oriented patient record provides an important basis for the primary care team work. 4. INTEGRATED DECISION SUPPORT. The decision support of the system includes a drug prescribing module and a care protocol module. The drug prescribing module is integrated with the patient records and includes an on-line check of the patient's medication list for potential interactions and data-driven reminders concerning major drug problems. Care protocols have been developed for the most common chronic diseases, such as asthma, diabetes, and hypertension. The patient records can be automatically checked according to the care protocols. 5. PRACTICAL EXPERIENCE. The Swedestar system has been implemented in a primary care area with 30,000 inhabitants. It is being used by all the primary care team members: 15 general practitioners, 25 district nurses, and 10 physiotherapists. Several years of practical experience of the CPR system shows that it has a positive impact on quality of care on four levels: 1) improved clinical follow-up of individual patients; 2) facilitated follow-up of aggregated data such as practice activity analysis, annual reports, and clinical indicators; 3) automated medical audit; and 4) concurrent audit. Within that primary care area, quality of care has improved substantially in several aspects due to the use of the CPR system [1].

  15. Automatic system of collection of parameters and control of receiving equipment of the radiotelescope of VLBI complex "Quasar"

    NASA Astrophysics Data System (ADS)

    Syrovoy, Sergey

    At present, very long baseline radio interferometry (VLBI) is becoming increasingly globalized, turning into a world network of observation posts, so the inclusion of the developing Russian system "Quasar" into the world VLBI community is of great importance to us. An important role in the work of a radiotelescope as part of a VLBI network belongs to the question of ensuring optimal interaction of its sub-systems, which can only be achieved by automating the whole process of observation. The possibility of RTF-32 participating in international VLBI observation sessions is taken into account in the system development; these observations follow the established experiment technology based on the Mark-IV Field System. In this paper the description, the structural scheme and the functional scheme of the system for automatic collection of parameters and control of the receiving complex of the radiotelescope RTF-32 are given. This system is intended to solve the given problem. The most important tasks of the system being developed are to ensure remote checking and control of the following systems of the radiotelescope: 1. the receiver system, which consists of five dual-channel radiometers for the 21-18 cm, 13 cm, 6 cm, 3.5 cm and 1.35 cm bands; 2. the radiotelescope pointing system; 3. the frequency-time synchronization system, which consists of the hydrogen frequency standard, the system of ultrahigh-frequency oscillators and the generators of picosecond impulses; 4. the signal transformation system; 5. the signal registration system; 6. the system for measuring the electrical characteristics of the atmosphere; 7. the power supply system. The part of the automatic system ensuring remote checking and control of the radiotelescope pointing system, both in local mode and when working under the control of the Field System computer, has been put into operation and is functioning at this moment. The part of the automatic system ensuring the checking and control of the receiving system of the radiotelescope is now being developed. The functional scheme has been designed. An experimental model of the device connecting the control PC with the terminal has been produced. The algorithms for receiver control in the different modes of observation have been developed. The questions of interaction with the Field System computer have been solved. The radiotelescope RTF-32 is capable of functioning in two modes, radio-astronomical and radio-interferometric. The control of the signal transformation system and the signal registration system differs between these modes and is entrusted to the Field System computer. The automation of the collection of meteorological data and of the parameters of the power supply system of the radiotelescope is the last stage of the development of the presented system.

  16. Helping You Choose Quality Ambulatory Care

    MedlinePlus

    Helping you choose: Quality ambulatory care When you need ambulatory care, you should find out some information to help you choose the best ... the center follows rules for patient safety and quality. Go to Quality Check® at www.qualitycheck.org ...

  17. Helping You Choose Quality Hospice Care

    MedlinePlus

    Helping you choose: Quality hospice care When you need hospice care, you should find out some information to help you choose the best ... the service follows rules for patient safety and quality. Go to Quality Check® at www.qualitycheck.org ...

  18. Algorithm for automatic forced spirometry quality assessment: technological developments.

    PubMed

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.
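    As a small illustration of rule-based acceptability checks of this kind (the two criteria below follow commonly cited ATS/ERS-style rules, but the threshold values are assumptions to verify against the guideline, and the algorithm's own 27 metrics are not reproduced):

    ```python
    # Two simple acceptability checks on a volume-time spirometry curve.
    # Threshold values follow commonly cited ATS/ERS-style rules but should be
    # verified against the guideline; this is not the published algorithm.
    import numpy as np

    def acceptability(volume, dt):
        """volume: expired volume (L) sampled every dt seconds."""
        fvc = volume.max()
        # End-of-test plateau: volume change in the final second below 0.025 L (assumed).
        last_second = volume[-int(1.0 / dt):]
        plateau_ok = (last_second.max() - last_second.min()) < 0.025
        # Forced expiratory time of at least 6 s (assumed adult criterion).
        duration_ok = len(volume) * dt >= 6.0
        return {"FVC_L": round(float(fvc), 2),
                "end_of_test_plateau": plateau_ok,
                "duration_ok": duration_ok}

    t = np.arange(0, 8, 0.01)
    curve = 4.2 * (1 - np.exp(-t / 0.9))               # synthetic, plateauing manoeuvre
    print(acceptability(curve, dt=0.01))
    ```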

  19. Automatic programming of arc welding robots

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Srikanth

    Automatic programming of arc welding robots requires the geometric description of a part from a solid modeling system, expert weld process knowledge, and the kinematic arrangement of the robot and positioner. Current commercial solid modelers are incapable of explicitly storing product and process definitions of weld features. This work presents a paradigm to develop a computer-aided engineering environment that supports complete weld feature information in a solid model and to create an automatic programming system for robotic arc welding. In the first part, welding features are treated as properties or attributes of an object, features which are portions of the object surface--the topological boundary. The structure for representing the features and attributes is a graph called the Welding Attribute Graph (WAGRAPH). The method associates appropriate weld features with geometric primitives, adds welding attributes, and checks the validity of welding specifications. A systematic structure is provided to incorporate welding attributes and coordinate system information in a CSG tree. The specific implementation of this structure using a hybrid solid modeler (IDEAS) and an object-oriented programming paradigm is described. The second part provides a comprehensive methodology to acquire and represent the weld process knowledge required for the proper selection of welding schedules. A methodology of knowledge acquisition using statistical methods is proposed. It is shown that these procedures did little to capture the private knowledge of experts (heuristics), but helped in determining general dependencies and trends. A need was established for building the knowledge-based system using handbook knowledge and allowing the experts to extend the system further. A methodology to check the consistency and validity of such knowledge additions is proposed. A mapping shell designed to transform the design features to application-specific weld process schedules is described. A new approach using fixed-path modified continuation methods is proposed in the final section to continuously plan the trajectory of weld seams in an integrated welding robot and positioner environment. The joint displacement, velocity, and acceleration histories along the path, as a function of the path parameter for the best possible welding condition, are provided for the robot and the positioner to track various paths normally encountered in arc welding.

  20. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.

    PubMed

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
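    A rough sketch of the candidate-selection idea, using violation counting as a stand-in for the rank-based selection described above (the constraint set and budget are assumptions):

    ```python
    # Select the candidate facts most worth sending to the crowd: those involved
    # in the largest number of semantic-constraint violations.
    from collections import Counter

    def select_for_crowd(candidate_facts, constraints, budget=2):
        """candidate_facts: list of (subject, relation, obj) triples.
        constraints: functions mapping the fact list to lists of offending triples."""
        violations = Counter()
        for constraint in constraints:
            for fact in constraint(candidate_facts):
                violations[fact] += 1
        return [fact for fact, _ in violations.most_common(budget)]

    def functional_birthplace(facts):
        """Constraint: a person should have at most one birthplace."""
        by_subject = Counter(s for s, r, _ in facts if r == "born_in")
        return [f for f in facts if f[1] == "born_in" and by_subject[f[0]] > 1]

    facts = [("Ada", "born_in", "London"), ("Ada", "born_in", "Paris"),
             ("Ada", "field", "Mathematics")]
    print(select_for_crowd(facts, [functional_birthplace]))
    ```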

  1. Proceedings of the Second NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar (Editor)

    2010-01-01

    This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.

  2. Variations in Daily Sleep Quality and Type 1 Diabetes Management in Late Adolescents

    PubMed Central

    Queen, Tara L.; Butner, Jonathan; Wiebe, Deborah; Berg, Cynthia A.

    2016-01-01

    Objective To determine how between- and within-person variability in perceived sleep quality were associated with adolescent diabetes management. Methods A total of 236 older adolescents with type 1 diabetes reported daily for 2 weeks on sleep quality, self-regulatory failures, frequency of blood glucose (BG) checks, and BG values. Average, inconsistent, and daily deviations in sleep quality were examined. Results Hierarchical linear models indicated that poorer average and worse daily perceived sleep quality (compared with one’s average) was each associated with more self-regulatory failures. Sleep quality was not associated with frequency of BG checking. Poorer average sleep quality was related to greater risk of high BG. Furthermore, inconsistent and daily deviations in sleep quality interacted to predict higher BG, with more consistent sleepers benefitting more from a night of high-quality sleep. Conclusions Good, consistent sleep quality during late adolescence may benefit diabetes management by reducing self-regulatory failures and risk of high BG. PMID:26994852

  3. Austrian Daily Climate Data Rescue and Quality Control

    NASA Astrophysics Data System (ADS)

    Jurkovic, A.; Lipa, W.; Adler, S.; Albenberger, J.; Lechner, W.; Swietli, R.; Vossberg, I.; Zehetner, S.

    2010-09-01

    Checked climate datasets are a "conditio sine qua non" for all projects that are relevant for environment and climate. In the framework of climate change studies and analyses it is essential to work with quality-controlled and trustworthy data. Furthermore, these datasets are used as input for various simulation models. With regard to investigations of extreme events, like strong precipitation periods, drought periods and similar ones, we need climate data in high temporal resolution (at least in daily resolution). Because of the historical background - during the Second World War the majority of our climate sheets were sent to Berlin, where the historical sheets were destroyed by a bomb attack and important information got lost - only some climate sheets from before 1939, mostly duplicates, are available and stored in our climate data archive. In 1970 the Central Institute for Meteorology and Geodynamics in Vienna started a first attempt to digitize climate data by means of punch cards. With the introduction of a routine climate data quality control in 1984 we can speak of high-class-checked daily data (finally checked data, quality flag 6). Our group has been working on the digitization and quality control of the historical data for the period 1872 to 1983 for 18 years. Since 2007 it has been possible to intensify this work in the framework of an internal project, namely Austrian Climate Data Rescue and Quality Control. The aim of this initiative was - and still is - to supply daily data in an outstandingly good and uniform quality. So this project is a kind of pre-project for all scientific projects which are working with daily data. In addition to the routine quality checks (that have been running since 1984) using the commercial Bull software, we are testing our data with additional open source software, namely ProClim.db. By the use of this spatial and statistical test procedure, the elements air temperature and precipitation - for several sites in Carinthia - could already be checked, flagged and corrected. Checking the output (the so-called error list) of ProClim is very time consuming and needs trained staff; however, in the last instance it is necessary. Following the guideline "Your archive is your business card for quality", the sub-project NEW ARCHIVE was initialized and started at the end of 2009. Our paper archive contains historical, up to 150-year-old climate sheets that are valuable cultural assets. Unfortunately the storage of these historical and current data treasures turned out to be more than suboptimal (insufficient protection against dust, dirt, humidity and light incidence). Because of this, a concept for a new storage system and archive database was generated and has already been partly realized. In a nutshell, this presentation shows on the one hand the importance of recovering historical climate sheets for climate change research - even if it is exhausting and time consuming - and gives on the other hand a general overview of the quality control procedures used at our institute.

  4. Verification using Satisfiability Checking, Predicate Abstraction, and Craig Interpolation

    DTIC Science & Technology

    2008-09-01


  5. Fault-tolerant three-level inverter

    DOEpatents

    Edwards, John; Xu, Longya; Bhargava, Brij B.

    2006-12-05

    A method for driving a neutral point clamped three-level inverter is provided. In one exemplary embodiment, DC current is received at a neutral point-clamped three-level inverter. The inverter has a plurality of nodes including first, second and third output nodes. The inverter also has a plurality of switches. Faults are checked for in the inverter and predetermined switches are automatically activated responsive to a detected fault such that three-phase electrical power is provided at the output nodes.

  6. The Implementation of IS on the Knowledge Management and Mental Models at the Decision-Making Process

    DTIC Science & Technology

    2011-03-01

    Management entities can also poll end stations to check the values of certain variables. Polling can be automatic or user initiated, but agents in the...the environment in which knowledge artifacts are created and managed, but the flow of knowledge itself remains almost indirect. For example... interaction of these loops with one another. Thus, the cognitive nature of feedback loops correlates the entity with the environment that the decision was

  7. The PlusCal Algorithm Language

    NASA Astrophysics Data System (ADS)

    Lamport, Leslie

    Algorithms are different from programs and should not be described with programming languages. The only simple alternative to programming languages has been pseudo-code. PlusCal is an algorithm language that can be used right now to replace pseudo-code, for both sequential and concurrent algorithms. It is based on the TLA+ specification language, and a PlusCal algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker and reasoned about formally.

  8. C-MOS array design techniques

    NASA Technical Reports Server (NTRS)

    Feller, A.

    1978-01-01

    The entire complement of standard cells and components, except for the set-reset flip-flop, was completed. Two levels of checking were performed on each device. Logic cells and topological layout are described. All the related computer programs were coded and one level of debugging was completed. The logic for the test chip was modified and updated. This test chip served as the first test vehicle to exercise the standard cell complementary MOS (C-MOS) automatic artwork generation capability.

  9. Assessment of Automatically Exported Clinical Data from a Hospital Information System for Clinical Research in Multiple Myeloma.

    PubMed

    Torres, Viviana; Cerda, Mauricio; Knaup, Petra; Löpprich, Martin

    2016-01-01

    An important part of the electronic information available in a Hospital Information System (HIS) has the potential to be automatically exported to Electronic Data Capture (EDC) platforms to improve clinical research. This automation has the advantage of reducing manual data transcription, a time-consuming and error-prone process. However, quantitative evaluations of the process of exporting data from a HIS to an EDC system have not been reported extensively, in particular in comparison with manual transcription. This work presents an assessment of the quality of an automatic export process, focused on laboratory data from a HIS. The quality of the laboratory data was assessed for two types of processes: (1) a manual process of data transcription, and (2) an automatic process of data transference. The automatic transference was implemented as an Extract, Transform and Load (ETL) process. A comparison was then carried out between the manual and automatic data collection methods. The criteria used to measure data quality were correctness and completeness. The manual process had a general error rate of 2.6% to 7.1%, with the lowest error rate obtained when data fields without a clear definition were removed from the analysis (p < 10E-3). For the automatic process, the general error rate was 1.9% to 12.1%, with the lowest error rate obtained when excluding information that was missing in the HIS but transcribed into the EDC from other physical sources. The automatic ETL process can be used to collect laboratory data for clinical research, provided that data in the HIS, as well as physical documentation not included in the HIS, are identified in advance and follow a standardized data collection protocol.
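    As a rough illustration of how correctness and completeness error rates of this kind can be computed, the sketch below compares an exported EDC laboratory table against the HIS source table. File layouts and column names are assumptions for illustration, not the study's actual schema.

```python
# Hypothetical sketch: completeness and correctness of laboratory values
# transferred from the HIS into the EDC system.
import pandas as pd

his = pd.read_csv("his_labs.csv")   # source values (illustrative file)
edc = pd.read_csv("edc_labs.csv")   # transferred values (illustrative file)

merged = his.merge(edc, on=["patient_id", "lab_test", "date"],
                   how="left", suffixes=("_his", "_edc"))

# Completeness: fraction of source items that never arrived in the EDC.
missing = merged["value_edc"].isna()
completeness_error = missing.mean()

# Correctness: fraction of transferred items whose value differs from the source.
transferred = merged.loc[~missing]
correctness_error = (transferred["value_his"] != transferred["value_edc"]).mean()

print(f"completeness error rate: {completeness_error:.1%}")
print(f"correctness error rate:  {correctness_error:.1%}")
```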

  10. Lot quality assurance sampling of sputum acid-fast bacillus smears for assessing sputum smear microscopy centers.

    PubMed

    Selvakumar, N; Murthy, B N; Prabhakaran, E; Sivagamasundari, S; Vasanthan, Samuel; Perumal, M; Govindaraju, R; Chauhan, L S; Wares, Fraser; Santha, T; Narayanan, P R

    2005-02-01

    Assessment of 12 microscopy centers in a tuberculosis unit, by blinded checking of eight sputum smears selected using a lot quality assurance sampling (LQAS) method and by unblinded checking of all positive and five negative slides among the slides examined in a month at a microscopy centre, revealed that the LQAS method can be implemented in the field to monitor the performance of acid-fast bacillus microscopy centers in national tuberculosis control programs.
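    The accept/reject logic behind lot quality assurance sampling can be sketched as below. The sample size of eight smears follows the abstract; the acceptance number and lot sizes are assumptions for illustration, not values from the study.

```python
# Hypothetical LQAS sketch for blinded rechecking of AFB smears.
from scipy.stats import hypergeom

def lqas_decision(errors_in_sample, acceptance_number=0):
    """Accept the lot (a centre's monthly slides) only if the rechecked sample
    contains no more discordant readings than the acceptance number."""
    return "accept" if errors_in_sample <= acceptance_number else "reject"

def detection_probability(lot_size, defects_in_lot, sample_size=8, acceptance_number=0):
    """Probability that a lot with the given number of misread slides is rejected."""
    rv = hypergeom(M=lot_size, n=defects_in_lot, N=sample_size)
    return 1.0 - rv.cdf(acceptance_number)

print(lqas_decision(1))                                          # -> reject
print(detection_probability(lot_size=120, defects_in_lot=12))    # chance of catching a 10% error rate
```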

  11. Lot Quality Assurance Sampling of Sputum Acid-Fast Bacillus Smears for Assessing Sputum Smear Microscopy Centers

    PubMed Central

    Selvakumar, N.; Murthy, B. N.; Prabhakaran, E.; Sivagamasundari, S.; Vasanthan, Samuel; Perumal, M.; Govindaraju, R.; Chauhan, L. S.; Wares, Fraser; Santha, T.; Narayanan, P. R.

    2005-01-01

    Assessment of 12 microscopy centers in a tuberculosis unit, by blinded checking of eight sputum smears selected using a lot quality assurance sampling (LQAS) method and by unblinded checking of all positive and five negative slides among the slides examined in a month at a microscopy centre, revealed that the LQAS method can be implemented in the field to monitor the performance of acid-fast bacillus microscopy centers in national tuberculosis control programs. PMID:15695704

  12. Lake water quality mapping from Landsat

    NASA Technical Reports Server (NTRS)

    Scherz, J. P.

    1977-01-01

    In the project described, remote sensing was used to check the quality of lake waters. The lakes of three Landsat scenes were mapped with the Bendix MDAS multispectral analysis system. From the MDAS color-coded maps, the lake with the worst algae problem was easily located. The lake was closely checked, and the presence of 100 cows in the springs that fed the lake could be identified as the pollution source. The laboratory and field work involved in the lake classification project is described.

  13. Automatically-computed prehospital severity scores are equivalent to scores based on medic documentation.

    PubMed

    Reisner, Andrew T; Chen, Liangyou; McKenna, Thomas M; Reifman, Jaques

    2008-10-01

    Prehospital severity scores can be used in routine prehospital care, mass casualty care, and military triage. If computers could reliably calculate clinical scores, new clinical and research methodologies would be possible. One obstacle is that vital signs measured automatically can be unreliable. We hypothesized that Signal Quality Indices (SQIs), computer algorithms that differentiate between reliable and unreliable monitored physiologic data, could improve the predictive power of computer-calculated scores. In a retrospective analysis of trauma casualties transported by air ambulance, we computed the Triage Revised Trauma Score (RTS) from archived travel monitor data. We compared the areas under the curve (AUCs) of receiver operating characteristic curves for prediction of mortality and red blood cell transfusion for 187 subjects with comparable quantities of good-quality and poor-quality data. Vital signs deemed reliable by SQIs led to significantly more discriminatory severity scores than vital signs deemed unreliable. We also compared automatically computed RTS (using the SQIs) versus RTS computed from vital signs documented by medics. For the subjects in whom the SQI algorithms identified 15 consecutive seconds of reliable vital signs data (n = 350), the automatically computed scores' AUCs were the same as the medic-based scores' AUCs. Using the Prehospital Index in place of RTS led to very similar results, corroborating our findings. SQI algorithms improve automatically computed severity scores, and automatically computed scores using SQIs are equivalent to medic-based scores.
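    For orientation, the Triage Revised Trauma Score is conventionally computed as the sum of coded values (0-4) for the Glasgow Coma Scale, systolic blood pressure and respiratory rate. The sketch below uses the standard published coding bands, which are not restated in the abstract and should be treated as an assumption here.

```python
# Sketch of the Triage Revised Trauma Score (T-RTS) from coded vital signs.
# Coding bands are the standard published ones, not taken from this abstract.
def code_gcs(gcs):
    if gcs >= 13: return 4
    if gcs >= 9:  return 3
    if gcs >= 6:  return 2
    if gcs >= 4:  return 1
    return 0

def code_sbp(sbp):
    if sbp > 89:  return 4
    if sbp >= 76: return 3
    if sbp >= 50: return 2
    if sbp >= 1:  return 1
    return 0

def code_rr(rr):
    if rr > 29:        return 3
    if 10 <= rr <= 29: return 4
    if 6 <= rr <= 9:   return 2
    if 1 <= rr <= 5:   return 1
    return 0

def triage_rts(gcs, sbp, rr):
    """Sum of the three coded values: 12 = best, 0 = worst."""
    return code_gcs(gcs) + code_sbp(sbp) + code_rr(rr)

print(triage_rts(gcs=15, sbp=120, rr=16))  # -> 12
```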

  14. Member Checking: A Tool to Enhance Trustworthiness or Merely a Nod to Validation?

    PubMed

    Birt, Linda; Scott, Suzanne; Cavers, Debbie; Campbell, Christine; Walter, Fiona

    2016-06-22

    The trustworthiness of results is the bedrock of high quality qualitative research. Member checking, also known as participant or respondent validation, is a technique for exploring the credibility of results. Data or results are returned to participants to check for accuracy and resonance with their experiences. Member checking is often mentioned as one in a list of validation techniques. This simplistic reporting might not acknowledge the value of using the method, nor its juxtaposition with the interpretative stance of qualitative research. In this commentary, we critique how member checking has been used in published research, before describing and evaluating an innovative in-depth member checking technique, Synthesized Member Checking. The method was used in a study with patients diagnosed with melanoma. Synthesized Member Checking addresses the co-constructed nature of knowledge by providing participants with the opportunity to engage with, and add to, interview and interpreted data, several months after their semi-structured interview. © The Author(s) 2016.

  15. 42 CFR 493.1254 - Standard: Maintenance and function checks.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Maintenance and function checks. 493.1254 Section 493.1254 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived...

  16. [CompuRecord--A perioperative information management-system for anesthesia].

    PubMed

    Martin, J; Ederle, D; Milewski, P

    2002-08-01

    Procedures for the automatic documentation of anesthesia have been described repeatedly since 1977. Because of limited desktop layouts and a focus on intraoperative documentation only, widespread adoption has not been achieved so far. Today's systems offer graphically oriented desktops that can be operated intuitively. The CompuRecord(R) system (Philips Healthcare) is a perioperative management system for anaesthesia. It is built from modular components and records the complete anaesthesiological care of a patient, from the preanaesthesiological assessment to the recovery room. Additional modules allow economic analysis, support quality management, and export a core database. Apart from the original software, all other components of the system, including the network components, are standard IT products, which reduces the costs of supplementation, expansion and support. The advantages of an automatic anaesthesia documentation system are frequent and detailed recording of anaesthesiological data as well as the possibility of a meticulous calculation of costs for each patient. The anaesthesiologist's time spent on documentation is reduced remarkably, with a limited and reasonable amount of data to be recorded, leaving more time for attention to the patient. The time needed for training is kept low by the touch screens of the CompuRecord(R) system, which can be operated intuitively. Before purchase, an exact analysis of processes and subsequent costs should be carried out. Standardized documentation makes it possible to establish Standard Operating Procedures in a department of anaesthesia. With the systems available, implementation is already possible today despite restricted manpower resources.

  17. A high-throughput urinalysis of abused drugs based on a SPE-LC-MS/MS method coupled with an in-house developed post-analysis data treatment system.

    PubMed

    Cheng, Wing-Chi; Yau, Tsan-Sang; Wong, Ming-Kei; Chan, Lai-Ping; Mok, Vincent King-Kuen

    2006-10-16

    A rapid urinalysis system based on SPE-LC-MS/MS with an in-house post-analysis data management system has been developed for the simultaneous identification and semi-quantitation of opiates (morphine, codeine), methadone, amphetamines (amphetamine, methylamphetamine (MA), 3,4-methylenedioxyamphetamine (MDA) and 3,4-methylenedioxymethamphetamine (MDMA)), 11 benzodiazepines or their metabolites, and ketamine. The urine samples are subjected to automated solid phase extraction prior to analysis by LC-MS (Finnigan Surveyor LC connected to a Finnigan LCQ Advantage) fitted with an Alltech Rocket Platinum EPS C-18 column. With a single-point calibration at the cut-off concentration for each analyte, simultaneous identification and semi-quantitation of the above-mentioned drugs can be achieved in a 10 min run per urine sample. A computer macro-program package was developed to automatically retrieve the appropriate data from the analytical data files, compare the results with preset values (such as cut-off concentrations and MS matching scores) for each drug being analyzed, and generate user-defined Excel reports that indicate all positive and negative results in a batch-wise manner for ease of checking. The final analytical results are automatically copied into an Access database for report generation purposes. Through the use of automation in sample preparation, simultaneous identification and semi-quantitation by LC-MS/MS, and a tailor-made post-analysis data management system, this new urinalysis system significantly improves the quality of results, reduces post-analysis data treatment time and errors due to data transfer, and is suitable for high-throughput laboratories operating in batch mode.
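    The post-analysis comparison step can be pictured with the sketch below: each analyte result is flagged positive only when both its semi-quantitative concentration and its MS matching score meet preset values, and the flags are pivoted into a batch-wise report. Cut-off values, the score threshold and file names are placeholders, not those of the published method.

```python
# Hypothetical sketch of the post-analysis flagging step for a batch of urine samples.
import pandas as pd

cutoffs = {"morphine": 300, "codeine": 300, "methadone": 300,        # ng/mL, placeholders
           "methylamphetamine": 500, "MDMA": 500, "ketamine": 100}
MIN_MATCH_SCORE = 0.8                                                # placeholder threshold

results = pd.read_csv("batch_results.csv")  # columns: sample_id, analyte, conc, match_score

results["cutoff"] = results["analyte"].map(cutoffs)
results["positive"] = (results["conc"] >= results["cutoff"]) & \
                      (results["match_score"] >= MIN_MATCH_SCORE)

# One row per sample and one column per analyte for batch-wise checking.
report = results.pivot(index="sample_id", columns="analyte", values="positive")
report.to_excel("batch_report.xlsx")
```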

  18. An Adaptive Buddy Check for Observational Quality Control

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.; Rukhovets, Leonid; Todling, Ricardo; DaSilva, Arlindo M.; Larson, Jay W.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    An adaptive buddy check algorithm is presented that adjusts tolerances for outlier observations based on the variability of surrounding data. The algorithm derives from a statistical hypothesis test combined with maximum-likelihood covariance estimation. Its stability is shown to depend on the initial identification of outliers by a simple background check. The adaptive feature ensures that the final quality control decisions are not very sensitive to prescribed statistics of first-guess and observation errors, nor to other approximations introduced into the algorithm. The implementation of the algorithm in a global atmospheric data assimilation system is described. Its performance is contrasted with that of a non-adaptive buddy check, for the surface analysis of an extreme storm that took place in Europe on 27 December 1999. The adaptive algorithm allowed the inclusion of many important observations that differed greatly from the first guess and that would have been excluded on the basis of prescribed statistics. The analysis of the storm development was much improved as a result of these additional observations.
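    The core idea of a buddy check can be sketched as follows: an observation flagged by the background check is re-admitted if its departure from the first guess agrees with that of nearby ("buddy") observations to within a tolerance scaled by their local variability. This is a simplified stand-in; the adaptive algorithm of the paper additionally re-estimates error covariances by maximum likelihood, which is omitted here.

```python
# Simplified buddy-check sketch (not the paper's adaptive algorithm).
import numpy as np

def buddy_check(innovations, positions, suspect_idx, radius=300.0, k=3.0):
    """innovations: obs-minus-background values; positions: (n, 2) coordinates in km."""
    d = np.linalg.norm(positions - positions[suspect_idx], axis=1)
    buddies = (d > 0) & (d <= radius)
    if buddies.sum() < 2:
        return "reject"                       # too few buddies to rehabilitate the outlier
    local_mean = innovations[buddies].mean()
    local_std = innovations[buddies].std(ddof=1)
    tolerance = k * max(local_std, 1e-6)
    return "accept" if abs(innovations[suspect_idx] - local_mean) <= tolerance else "reject"

rng = np.random.default_rng(0)
pos = rng.uniform(0, 500, size=(50, 2))
innov = rng.normal(0, 1.0, size=50)
innov[7] = 6.0                                # observation flagged by a simple background check
print(buddy_check(innov, pos, suspect_idx=7))
```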

  19. Density estimation in aerial images of large crowds for automatic people counting

    NASA Astrophysics Data System (ADS)

    Herrmann, Christian; Metzler, Juergen

    2013-05-01

    Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at the same time, like pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously by designing a semi-automatic system capable of counting crowds consisting of hundreds or thousands of people based on aerial images of demonstrations or similar events. This system requires major user interaction to segment the image. Our principal aim is to reduce this manual interaction. To achieve this, we propose a new and automatic system. Besides counting the people in large crowds, the system yields the positions of people, allowing a plausibility check by a human operator. In order to automate the people counting system, we use crowd density estimation. The determination of crowd density is based on several features, such as edge intensity and spatial frequency. These features indicate the density and discriminate between a crowd and other image regions such as buildings, bushes or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. By counting a test set of aerial images showing large crowds containing up to 12,000 people, the performance gain of our new system will be measured. By improving our previous system, we will increase the benefit of an image-based solution for counting people in large crowds.
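    A minimal sketch of feature-based density counting is given below: an edge-intensity feature is computed per image tile, mapped to a per-tile person count through a linear calibration fitted on tiles with known counts, and summed over the image. The single feature and the linear calibration are assumptions for illustration; the actual system combines several features.

```python
# Hypothetical sketch of density-based crowd counting from a single edge feature.
import numpy as np
from scipy import ndimage

def edge_density_per_tile(image, tile=64):
    """Mean gradient magnitude for each non-overlapping tile of the image."""
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    h, w = image.shape
    return np.array([mag[y:y+tile, x:x+tile].mean()
                     for y in range(0, h - tile + 1, tile)
                     for x in range(0, w - tile + 1, tile)])

def fit_calibration(features, counts):
    """Least-squares line mapping edge density to people per tile (counts from labelled tiles)."""
    A = np.vstack([features, np.ones_like(features)]).T
    slope, intercept = np.linalg.lstsq(A, counts, rcond=None)[0]
    return slope, intercept

def estimate_total(features, slope, intercept):
    """Per-tile estimates clipped at zero and summed over the whole image."""
    return np.clip(slope * features + intercept, 0, None).sum()
```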

  20. Self-evaluated automatic classifier as a decision-support tool for sleep/wake staging.

    PubMed

    Charbonnier, S; Zoubek, L; Lesecq, S; Chapotot, F

    2011-06-01

    An automatic sleep/wake stage classifier that deals with the presence of artifacts and provides a confidence index with each decision is proposed. The decision system is composed of two stages: the first stage checks each 20 s epoch of polysomnographic signals (EEG, EOG and EMG) for the presence of artifacts and selects the artifact-free signals. The second stage classifies the epoch using one classifier selected out of four, using feature inputs extracted from the artifact-free signals only. A confidence index is associated with each decision made, depending on the classifier used and on the class assigned, so that the user's confidence in the automatic decision is increased. The two-stage system was tested on a large database of 46 night recordings. It reached an overall accuracy of 85.5%, with improved ability to discern NREM I stage from REM sleep. It was shown that only 7% of the database was classified with a low confidence index, and thus should be re-evaluated by an expert physiologist, which makes the system an efficient decision-support tool. Copyright © 2011 Elsevier Ltd. All rights reserved.
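    The two-stage structure can be sketched as follows: artifact screening determines which signals of an epoch are usable, the classifier matching that signal set is applied, and the (classifier, class) pair is mapped to a confidence index. Thresholds, models and confidence values below are placeholders, not the published ones.

```python
# Structural sketch of the two-stage decision system (placeholder values throughout).
def artifact_free(epoch):
    """Return the set of signals judged artifact-free for a 20 s epoch."""
    ok = set()
    if epoch["eeg_amplitude_uV"] < 300: ok.add("EEG")
    if epoch["eog_amplitude_uV"] < 500: ok.add("EOG")
    if epoch["emg_rms_uV"] < 100:       ok.add("EMG")
    return ok

def run_model(name, epoch):
    """Stand-in for one of the trained classifiers."""
    return "REM" if epoch["emg_rms_uV"] < 20 else "NREM"

CLASSIFIERS = [                      # ordered from most to least demanding signal set
    ({"EEG", "EOG", "EMG"}, "full_model"),
    ({"EEG", "EOG"}, "eeg_eog_model"),
    ({"EEG"}, "eeg_only_model"),
]
CONFIDENCE = {("full_model", "REM"): 0.9, ("eeg_eog_model", "REM"): 0.7,
              ("eeg_only_model", "REM"): 0.5}        # excerpt of a confidence table

def classify_epoch(epoch):
    signals = artifact_free(epoch)
    for needed, name in CLASSIFIERS:
        if needed <= signals:                        # first classifier whose inputs are all clean
            label = run_model(name, epoch)
            return label, CONFIDENCE.get((name, label), 0.6)
    return "unscored", 0.0

print(classify_epoch({"eeg_amplitude_uV": 80, "eog_amplitude_uV": 120, "emg_rms_uV": 15}))
```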

  1. TeraSCREEN: multi-frequency multi-mode Terahertz screening for border checks

    NASA Astrophysics Data System (ADS)

    Alexander, Naomi E.; Alderman, Byron; Allona, Fernando; Frijlink, Peter; Gonzalo, Ramón; Hägelen, Manfred; Ibáñez, Asier; Krozer, Viktor; Langford, Marian L.; Limiti, Ernesto; Platt, Duncan; Schikora, Marek; Wang, Hui; Weber, Marc Andree

    2014-06-01

    The challenge for any security screening system is to identify potentially harmful objects such as weapons and explosives concealed under clothing. Classical border and security checkpoints are no longer capable of fulfilling the demands of today's ever growing security requirements, especially with respect to the high throughput generally required which entails a high detection rate of threat material and a low false alarm rate. TeraSCREEN proposes to develop an innovative concept of multi-frequency multi-mode Terahertz and millimeter-wave detection with new automatic detection and classification functionalities. The system developed will demonstrate, at a live control point, the safe automatic detection and classification of objects concealed under clothing, whilst respecting privacy and increasing current throughput rates. This innovative screening system will combine multi-frequency, multi-mode images taken by passive and active subsystems which will scan the subjects and obtain complementary spatial and spectral information, thus allowing for automatic threat recognition. The TeraSCREEN project, which will run from 2013 to 2016, has received funding from the European Union's Seventh Framework Programme under the Security Call. This paper will describe the project objectives and approach.

  2. Efficient Method of Achieving Agreements between Individuals and Organizations about RFID Privacy

    NASA Astrophysics Data System (ADS)

    Cha, Shi-Cho

    This work presents novel technical and legal approaches that address privacy concerns for personal data in RFID systems. In recent years, to minimize the conflict between convenience and the privacy risk of RFID systems, organizations have been requested to disclose their policies regarding RFID activities, obtain customer consent, and adopt appropriate mechanisms to enforce these policies. However, current research on RFID typically focuses on enforcement mechanisms to protect personal data stored in RFID tags and prevent organizations from tracking user activity through information emitted by specific RFID tags. A missing piece is how organizations can obtain customers' consent efficiently and flexibly. This study recommends that organizations obtain licenses automatically or semi-automatically before collecting personal data via RFID technologies rather than deal with written consents. Such digitalized and standard licenses can be checked automatically to ensure that collection and use of personal data is based on user consent. While individuals can easily control who has licenses and license content, the proposed framework provides an efficient and flexible way to overcome the deficiencies in current privacy protection technologies for RFID systems.

  3. Automatic sample Dewar for MX beam-line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charignon, T.; Tanchon, J.; Trollier, T.

    2014-01-29

    It is very common for crystals of large biological macromolecules to show considerable variation in the quality of their diffraction. In order to increase the number of samples that are tested for diffraction quality before any full data collection at the ESRF*, an automatic sample Dewar has been implemented. The conception and performance of the Dewar are reported in this paper. The automatic sample Dewar has a 240-sample capacity with automatic loading/unloading ports. The storage Dewar can work with robots and can be integrated into a fully automatic MX** beam-line. The samples are positioned in front of the loading/unloading ports with an automatic rotating plate. A view port has been implemented for reading the data-matrix code of each sample loaded in the Dewar. Finally, the Dewar is insulated with polyurethane foam, which keeps the liquid nitrogen consumption below 1.6 L/h, and the static insulation makes vacuum equipment and maintenance unnecessary. This Dewar will be useful for increasing the number of samples tested in synchrotrons.

  4. 40 CFR 51.359 - Quality control.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 2 2014-07-01 2014-07-01 false Quality control. 51.359 Section 51.359 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS REQUIREMENTS FOR... to assure test accuracy. Computer control of quality assurance checks and quality control charts...

  5. Multi-capillary column-ion mobility spectrometry: a potential screening system to differentiate virgin olive oils.

    PubMed

    Garrido-Delgado, Rocío; Arce, Lourdes; Valcárcel, Miguel

    2012-01-01

    The potential of a headspace device coupled to multi-capillary column-ion mobility spectrometry has been studied as a screening system to differentiate virgin olive oils ("lampante," "virgin," and "extra virgin" olive oil). The last two types are virgin olive oil samples of very similar characteristics, which are very difficult to distinguish with the existing analytical method. The procedure involves the direct introduction of the virgin olive oil sample into a vial, headspace generation, and automatic injection of the volatiles into a gas chromatograph-ion mobility spectrometer. The data obtained after the analysis in duplicate of 98 samples of three different categories of virgin olive oils were preprocessed and submitted to a detailed chemometric treatment to classify the virgin olive oil samples according to their sensory quality. The same virgin olive oil samples were also analyzed by an expert panel to establish their category, and these data were used as reference values to check the potential of this new screening system. This comparison confirms the potential of the results presented here. The model was able to classify 97% of the virgin olive oil samples in their corresponding group. Finally, the chemometric method was validated, obtaining a prediction rate of 87%. These results provide promising perspectives for the use of ion mobility spectrometry to differentiate virgin olive oil samples according to their quality instead of using the classical analytical procedure.
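    The chemometric treatment could, for instance, take the form sketched below: dimensionality reduction followed by a linear classifier, cross-validated against the sensory panel's categories. The abstract does not name the algorithm used, so the model choice, file names and number of components here are assumptions.

```python
# Hypothetical chemometric sketch: PCA + LDA classification of IMS spectra against panel labels.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X = np.load("ims_spectra.npy")    # preprocessed spectra, shape (n_samples, n_points) - illustrative
y = np.load("panel_labels.npy")   # 'extra virgin', 'virgin', 'lampante' - illustrative

model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated classification rate: {scores.mean():.0%}")
```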

  6. High-performance camera module for fast quality inspection in industrial printing applications

    NASA Astrophysics Data System (ADS)

    Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    Today, printing products which must meet highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivity (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real-time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer relevant information only, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features which are well suited to detect print flaws like blotches of ink, color smears, splashes, spots and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.

  7. HotSpot Wizard 3.0: web server for automated design of mutations and smart libraries based on sequence input information.

    PubMed

    Sumbalova, Lenka; Stourac, Jan; Martinek, Tomas; Bednar, David; Damborsky, Jiri

    2018-05-23

    HotSpot Wizard is a web server used for the automated identification of hotspots in semi-rational protein design to give improved protein stability, catalytic activity, substrate specificity and enantioselectivity. Since there are three orders of magnitude fewer protein structures than sequences in bioinformatic databases, the major limitation to the usability of previous versions was the requirement for the protein structure to be a compulsory input for the calculation. HotSpot Wizard 3.0 now accepts the protein sequence as input data. The protein structure for the query sequence is obtained either from eight repositories of homology models or is modeled using Modeller and I-Tasser. The quality of the models is then evaluated using three quality assessment tools: WHAT_CHECK, PROCHECK and MolProbity. During follow-up analyses, the system automatically warns the users whenever they attempt to redesign poorly predicted parts of their homology models. The second main limitation of HotSpot Wizard's predictions is that it identifies suitable positions for mutagenesis, but does not provide any reliable advice on particular substitutions. A new module for the estimation of thermodynamic stabilities using the Rosetta and FoldX suites has been introduced which prevents destabilizing mutations among pre-selected variants entering experimental testing. HotSpot Wizard is freely available at http://loschmidt.chemi.muni.cz/hotspotwizard.

  8. NREL Solar Radiation Research Laboratory (SRRL): Baseline Measurement System (BMS); Golden, Colorado (Data)

    DOE Data Explorer

    Stoffel, T.; Andreas, A.

    1981-07-15

    The SRRL was established at the Solar Energy Research Institute (now NREL) in 1981 to provide continuous measurements of the solar resources, outdoor calibrations of pyranometers and pyrheliometers, and to characterize commercially available instrumentation. The SRRL is an outdoor laboratory located on South Table Mountain, a mesa providing excellent solar access throughout the year, overlooking Denver. Beginning with the basic measurements of global horizontal irradiance, direct normal irradiance and diffuse horizontal irradiance at 5-minute intervals, the SRRL Baseline Measurement System now produces more than 130 data elements at 1-min intervals that are available from the Measurement & Instrumentation Data Center Web site. Data sources include global horizontal, direct normal, diffuse horizontal (from shadowband and tracking disk), global on tilted surfaces, reflected solar irradiance, ultraviolet, infrared (upwelling and downwelling), photometric and spectral radiometers, sky imagery, and surface meteorological conditions (temperature, relative humidity, barometric pressure, precipitation, snow cover, wind speed and direction at multiple levels). Data quality control and assessment include daily instrument maintenance (M-F) with automated data quality control based on real-time examinations of redundant instrumentation and internal consistency checks using NREL's SERI-QC methodology. Operators are notified of equipment problems by automatic e-mail messages generated by the data acquisition and processing system. Radiometers are recalibrated at least annually with reference instruments traceable to the World Radiometric Reference (WRR).
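    One internal-consistency check of the kind mentioned above can be sketched as a component-closure test: measured global horizontal irradiance should agree with the value reconstructed from the direct and diffuse components, GHI ≈ DNI·cos(SZA) + DHI. The sketch below is an illustration of such a redundancy check, not NREL's SERI-QC implementation; column names and the tolerance are assumptions.

```python
# Illustrative component-closure check for redundant irradiance measurements.
import numpy as np
import pandas as pd

def closure_flags(df, tolerance=0.08):
    """df columns: ghi, dni, dhi (W/m^2) and zenith_deg. Returns a boolean 'suspect' series."""
    mu0 = np.cos(np.radians(df["zenith_deg"])).clip(lower=0)
    ghi_calc = df["dni"] * mu0 + df["dhi"]
    rel_err = (df["ghi"] - ghi_calc).abs() / ghi_calc.clip(lower=50)  # skip near-zero sun
    return rel_err > tolerance

data = pd.read_csv("bms_1min.csv")            # hypothetical 1-minute export
data["suspect"] = closure_flags(data)
print(data["suspect"].mean())                 # fraction of records failing the closure check
```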

  9. Safe and effective nursing shift handover with NURSEPASS: An interrupted time series.

    PubMed

    Smeulers, Marian; Dolman, Christine D; Atema, Danielle; van Dieren, Susan; Maaskant, Jolanda M; Vermeulen, Hester

    2016-11-01

    Implementation of a locally developed, evidence-based nursing shift handover blueprint with a bedside-safety-check, to determine the effect size on the quality of handover. A mixed methods design with: (1) an interrupted time series analysis to determine the effect on handover quality in six domains; (2) descriptive statistics to analyze the discrepancies intercepted by the bedside-safety-check; (3) evaluation sessions to gather experiences with the new handover process. We observed a continued trend of improvement in handover quality and a significant improvement in two domains of handover: organization/efficiency and contents. The bedside-safety-check successfully identified discrepancies in drains, intravenous medications, bandages or general condition and was highly appreciated. Use of the nursing shift handover blueprint showed promising results on effectiveness as well as on feasibility and acceptability. However, to enable long-term measurement of effectiveness, evaluation with large-scale interrupted time series or statistical process control is needed. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. 77 FR 67344 - Proposed Information Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-09

    ... Criminal History Checks. DATES: Written comments must be submitted to the individual and office listed in... methodology and assumptions used; Enhance the quality, utility, and clarity of the information to be collected... Criminal History Check. CNCS and its grantees must ensure that national service beneficiaries are protected...

  11. The KATE shell: An implementation of model-based control, monitor and diagnosis

    NASA Technical Reports Server (NTRS)

    Cornell, Matthew

    1987-01-01

    The conventional control and monitor software currently used by the Space Center for Space Shuttle processing has many limitations, such as high maintenance costs, limited diagnostic capabilities and limited simulation support. These limitations have motivated the development of a knowledge-based (or model-based) shell to generically control and monitor electro-mechanical systems. The knowledge base describes the system's structure and function and is used by a software shell to perform real-time constraint checking, low-level control of components, diagnosis of detected faults, sensor validation, automatic generation of schematic diagrams and automatic recovery from failures. This approach is more versatile and more powerful than the conventional hard-coded approach and offers many advantages over it, although, for systems which require high-speed reaction times or are not well understood, knowledge-based control and monitor systems may not be appropriate.

  12. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking

    PubMed Central

    Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems. PMID:27187178

  13. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking.

    PubMed

    Pârvu, Ovidiu; Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems.

  14. Langley Wind Tunnel Data Quality Assurance-Check Standard Results

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Grubb, John P.; Krieger, William B.; Cler, Daniel L.

    2000-01-01

    A framework for statistical evaluation, control and improvement of wind tunnel measurement processes is presented. The methodology is adapted from elements of the Measurement Assurance Plans developed by the National Bureau of Standards (now the National Institute of Standards and Technology) for standards and calibration laboratories. The present methodology is based on the notions of statistical quality control (SQC) together with check standard testing and a small number of customer repeat-run sets. The results of check standard and customer repeat-run sets are analyzed using the statistical control chart methods of Walter A. Shewhart, long familiar to the SQC community. Control chart results are presented for various measurement processes in five facilities at Langley Research Center. The processes include test section calibration, force and moment measurements with a balance, and instrument calibration.
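    To make the control-chart idea concrete, the sketch below builds a Shewhart individuals chart from repeated check-standard results: the center line is the historical mean, and the standard deviation is estimated from the average moving range (d2 = 1.128 for subgroups of two). The check-standard values are made up for illustration and are not Langley data.

```python
# Shewhart individuals (X) chart limits from check-standard repeat results.
import numpy as np

def individuals_chart_limits(history):
    history = np.asarray(history, dtype=float)
    center = history.mean()
    avg_moving_range = np.abs(np.diff(history)).mean()
    sigma_hat = avg_moving_range / 1.128          # d2 constant for moving ranges of size 2
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

def in_control(new_value, history):
    lcl, _, ucl = individuals_chart_limits(history)
    return lcl <= new_value <= ucl

past = [0.0251, 0.0253, 0.0249, 0.0252, 0.0250, 0.0254, 0.0251, 0.0252]  # illustrative repeats
print(individuals_chart_limits(past))
print(in_control(0.0262, past))               # an out-of-limits point signals a process shift
```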

  15. Quantification of regional fat volume in rat MRI

    NASA Astrophysics Data System (ADS)

    Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren

    2003-05-01

    Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.

  16. Microbiological water methods: quality control measures for Federal Clean Water Act and Safe Drinking Water Act regulatory compliance.

    PubMed

    Root, Patsy; Hunt, Margo; Fjeld, Karla; Kundrat, Laurie

    2014-01-01

    Quality assurance (QA) and quality control (QC) data are required in order to have confidence in the results from analytical tests and the equipment used to produce those results. Some AOAC water methods include specific QA/QC procedures, frequencies, and acceptance criteria, but these are considered to be the minimum controls needed to perform a microbiological method successfully. Some regulatory programs, such as those at Code of Federal Regulations (CFR), Title 40, Part 136.7 for chemistry methods, require additional QA/QC measures beyond those listed in the method, which can also apply to microbiological methods. Essential QA/QC measures include sterility checks, reagent specificity and sensitivity checks, assessment of each analyst's capabilities, analysis of blind check samples, and evaluation of the presence of laboratory contamination and instrument calibration and checks. The details of these procedures, their performance frequency, and expected results are set out in this report as they apply to microbiological methods. The specific regulatory requirements of CFR Title 40 Part 136.7 for the Clean Water Act, the laboratory certification requirements of CFR Title 40 Part 141 for the Safe Drinking Water Act, and the International Organization for Standardization 17025 accreditation requirements under The NELAC Institute are also discussed.

  17. Managed aquifer recharge by a check dam to improve the quality of fluoride-rich groundwater: a case study from southern India.

    PubMed

    Gowrisankar, G; Jagadeshan, G; Elango, L

    2017-04-01

    In many regions around the globe, including India, degradation in the quality of groundwater is of great concern. The objective of this investigation is to determine the effect of recharge from a check dam on the quality of groundwater in a region of Krishnagiri District of Tamil Nadu State, India. For this study, water samples from 15 wells were periodically obtained and analysed for major ions and fluoride concentrations. The amounts of major ions present in groundwater were compared with the drinking water guideline values of the Bureau of Indian Standards. With respect to the sodium and fluoride concentrations, 38% of the groundwater samples collected were not suitable for direct use as drinking water. The suitability of water for agricultural use was determined considering the electrical conductivity, sodium adsorption ratio, sodium percentage, permeability index, and the Wilcox and United States Salinity Laboratory diagrams. The influence of freshwater recharge from the dam is evident, as the groundwater in wells nearer to the check dam was suitable for both irrigation and domestic purposes. However, the groundwater away from the dam had a high ionic composition. This study demonstrated that in other fluoride-affected areas, fluoride concentrations can be reduced by dilution through the construction of check dams as a measure of managed aquifer recharge.
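    Two of the irrigation-suitability indices named above have standard textbook definitions, sketched below with major-ion concentrations expressed in meq/L; the sample values are made up and are not from the study.

```python
# Standard formulas for sodium adsorption ratio and sodium percentage (inputs in meq/L).
from math import sqrt

def sodium_adsorption_ratio(na, ca, mg):
    return na / sqrt((ca + mg) / 2.0)

def sodium_percentage(na, k, ca, mg):
    return 100.0 * (na + k) / (na + k + ca + mg)

na, k, ca, mg = 8.7, 0.2, 3.1, 2.4            # hypothetical well-water sample
print(f"SAR = {sodium_adsorption_ratio(na, ca, mg):.1f}")
print(f"Na% = {sodium_percentage(na, k, ca, mg):.0f}")
```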

  18. Quality Work, Quality Control in Technical Services.

    ERIC Educational Resources Information Center

    Horny, Karen L.

    1985-01-01

    Quality in library technical services is explored in light of changes produced by automation. Highlights include a definition of quality; new opportunities and shifting priorities; cataloging (fullness of records, heading consistency, accountability, local standards, automated checking); need for new skills (management, staff); and boons of…

  19. IMPLEMENTATION AND VALIDATION OF STATISTICAL TESTS IN RESEARCH'S SOFTWARE HELPING DATA COLLECTION AND PROTOCOLS ANALYSIS IN SURGERY.

    PubMed

    Kuretzki, Carlos Henrique; Campos, Antônio Carlos Ligocki; Malafaia, Osvaldo; Soares, Sandramara Scandelari Kusano de Paula; Tenório, Sérgio Bernardo; Timi, Jorge Rufino Ribas

    2016-03-01

    Information technology is widely applied in healthcare. With regard to scientific research, the SINPE(c) - Integrated Electronic Protocols - was created as a tool to support researchers by offering clinical data standardization. Until recently, however, SINPE(c) lacked statistical tests performed by automatic analysis. The aim was to add to SINPE(c) features for the automatic execution of the main statistical methods used in medicine. The study was divided into four topics: checking users' interest in the implementation of the tests; surveying the frequency of their use in health care; carrying out the implementation; and validating the results with researchers and their protocols. It was applied to a group of users of this software working on their stricto sensu master's and doctoral theses in a postgraduate program in surgery. To assess the reliability of the statistics, the data obtained automatically by SINPE(c) were compared with those computed manually by a statistics professional experienced in this type of study. There was interest in the use of automatic statistical tests, with good acceptance. The chi-square, Mann-Whitney, Fisher and Student's t tests were considered to be the tests most frequently used by participants in medical studies. These methods were implemented and subsequently approved as expected. The automatic statistical analysis incorporated into SINPE(c) was shown to be reliable and equal to the manual analysis, validating its use as a tool for medical research.
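    The four tests named above are available in standard statistical libraries; the sketch below shows them applied to made-up protocol data of the kind such a tool might process automatically. It is an illustration only and is not SINPE(c) code.

```python
# The four tests named above, run on hypothetical data from two protocol groups.
import numpy as np
from scipy import stats

group_a = np.array([12.1, 13.4, 11.8, 14.0, 12.9, 13.1])
group_b = np.array([10.2, 11.0, 10.8, 11.5, 10.1, 11.9])

# Student's t test and Mann-Whitney U for a continuous variable.
print(stats.ttest_ind(group_a, group_b))
print(stats.mannwhitneyu(group_a, group_b))

# Chi-square and Fisher's exact test for a 2x2 contingency table (e.g., complication yes/no).
table = np.array([[18, 7],
                  [9, 16]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p)
print(stats.fisher_exact(table))
```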

  20. Presentation of the results of a Bayesian automatic event detection and localization program to human analysts

    NASA Astrophysics Data System (ADS)

    Kushida, N.; Kebede, F.; Feitio, P.; Le Bras, R.

    2016-12-01

    The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been developing and testing NET-VISA (Arora et al., 2013), a Bayesian automatic event detection and localization program, and evaluating its performance in a realistic operational mode. In our preliminary testing at the CTBTO, NET-VISA shows better performance than the currently operating automatic localization program. However, given CTBTO's role and its international context, a new technology should be introduced cautiously when it replaces a key piece of the automatic processing. We integrated the results of NET-VISA into the Analyst Review Station, which is extensively used by the analysts, so that they can check the accuracy and robustness of the Bayesian approach. We expect the workload of the analysts to be reduced because of the better performance of NET-VISA in finding missed events and assembling a more complete set of stations than the current system, which has been operating for nearly twenty years. The results of a series of tests indicate that the expectations arising from the automatic tests, which show an overall overlap improvement of 11%, meaning that the missed-event rate is cut by 42%, hold for the integrated interactive module as well. Analysts find new events that qualify for the CTBTO Reviewed Event Bulletin, beyond those analyzed through the standard procedures. Arora, N., Russell, S., and Sudderth, E., NET-VISA: Network Processing Vertically Integrated Seismic Analysis, 2013, Bull. Seismol. Soc. Am., 103, 709-729.

  1. A multilingual gold-standard corpus for biomedical concept recognition: the Mantra GSC.

    PubMed

    Kors, Jan A; Clematide, Simon; Akhondi, Saber A; van Mulligen, Erik M; Rebholz-Schuhmann, Dietrich

    2015-09-01

    To create a multilingual gold-standard corpus for biomedical concept recognition. We selected text units from different parallel corpora (Medline abstract titles, drug labels, biomedical patent claims) in English, French, German, Spanish, and Dutch. Three annotators per language independently annotated the biomedical concepts, based on a subset of the Unified Medical Language System and covering a wide range of semantic groups. To reduce the annotation workload, automatically generated preannotations were provided. Individual annotations were automatically harmonized and then adjudicated, and cross-language consistency checks were carried out to arrive at the final annotations. The number of final annotations was 5530. Inter-annotator agreement scores indicate good agreement (median F-score 0.79), and are similar to those between individual annotators and the gold standard. The automatically generated harmonized annotation set for each language performed equally well as the best annotator for that language. The use of automatic preannotations, harmonized annotations, and parallel corpora helped to keep the manual annotation efforts manageable. The inter-annotator agreement scores provide a reference standard for gauging the performance of automatic annotation techniques. To our knowledge, this is the first gold-standard corpus for biomedical concept recognition in languages other than English. Other distinguishing features are the wide variety of semantic groups that are being covered, and the diversity of text genres that were annotated. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
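    Pairwise inter-annotator agreement of this kind is typically expressed as an F-score over exact matches between two annotators' concept annotations; the sketch below assumes a simple (document, start, end, concept) tuple representation, which is an illustrative choice rather than the corpus format.

```python
# Sketch: agreement between two annotators as an F-score over exact (span, concept) matches.
def f_score(annotations_a, annotations_b):
    a, b = set(annotations_a), set(annotations_b)
    if not a and not b:
        return 1.0
    tp = len(a & b)
    if tp == 0:
        return 0.0
    precision = tp / len(b)
    recall = tp / len(a)
    return 2 * precision * recall / (precision + recall)

ann1 = {("doc1", 0, 9, "C0027051"), ("doc1", 15, 27, "C0011849")}
ann2 = {("doc1", 0, 9, "C0027051"), ("doc1", 30, 41, "C0020538")}
print(f"pairwise F-score: {f_score(ann1, ann2):.2f}")   # -> 0.50
```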

  2. Direct to consumer advertising via the Internet, a study of hip resurfacing.

    PubMed

    Ogunwale, B; Clarke, J; Young, D; Mohammed, A; Patil, S; Meek, R M D

    2009-02-01

    With increased use of the internet for health information and direct to consumer advertising from medical companies, there is concern about the quality of information available to patients. The aim of this study was to examine the quality of health information on the internet for hip resurfacing. An assessment tool was designed to measure the quality of information. Websites were measured on credibility of source; usability; currentness of the information; content relevance; content accuracy/completeness; and disclosure/bias. Each website assessed was given a total score based on the scores achieved in the above categories. Websites were further analysed by author, geographical origin and possession of an independent credibility check. There was positive correlation between the overall score for a website and its score in each assessment category. Websites by implant companies, doctors and hospitals scored poorly. Websites with an independent credibility check, such as Health on the Net (HoN), scored twice the total scores of websites without one. As with other internet health websites, the quality of information on hip resurfacing websites is variable. This study highlights methods by which to assess the quality of health information on the internet and advocates that patients should look for a statement of an "independent credibility check" when searching for information on hip resurfacing.

  3. The performance of an automatic acoustic-based program classifier compared to hearing aid users' manual selection of listening programs.

    PubMed

    Searchfield, Grant D; Linford, Tania; Kobayashi, Kei; Crowhen, David; Latzel, Matthias

    2018-03-01

    To compare preference for and performance of manually selected programmes to an automatic sound classifier, the Phonak AutoSense OS. A single-blind repeated measures study. Participants were fitted with Phonak Virto V90 ITE aids; preferences for different listening programmes were compared across four different sound scenarios (speech in: quiet, noise, loud noise and a car). Following a 4-week trial, preferences were reassessed and the user's preferred programme was compared to the automatic classifier for sound quality and hearing in noise (HINT test) using a 12-loudspeaker array. Twenty-five participants with symmetrical moderate-severe sensorineural hearing loss. Participant preferences for manual programmes across scenarios varied considerably between and within sessions. A HINT Speech Reception Threshold (SRT) advantage was observed for the automatic classifier over the participants' manual selection for speech in quiet, loud noise and car noise. Sound quality ratings were similar for both manual and automatic selections. The use of a sound classifier is a viable alternative to manual programme selection.

  4. A device for testing cables

    NASA Technical Reports Server (NTRS)

    Hayhurst, Arthur Ray (Inventor)

    1993-01-01

    A device for testing current paths is attachable to a conductor. The device automatically checks the current paths of the conductor for continuity of a center conductor, continuity of a shield, and a short circuit between the shield and the center conductor. The device includes a pair of connectors and a circuit to provide for testing of the conductive paths of a cable to be tested with the circuit paths of the circuit. The circuit paths in the circuit include indicators to simultaneously indicate the results of the testing.

  5. Solving the AI Planning Plus Scheduling Problem Using Model Checking via Automatic Translation from the Abstract Plan Preparation Language (APPL) to the Symbolic Analysis Laboratory (SAL)

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    This paper describes a translator from a new planning language named the Abstract Plan Preparation Language (APPL) to the Symbolic Analysis Laboratory (SAL) model checker. This translator has been developed in support of the Spacecraft Autonomy for Vehicles and Habitats (SAVH) project sponsored by the Exploration Technology Development Program, which is seeking to mature autonomy technology for the vehicles and operations centers of Project Constellation.

  6. HAL/S - The programming language for Shuttle

    NASA Technical Reports Server (NTRS)

    Martin, F. H.

    1974-01-01

    HAL/S is a higher order language and system, now operational, adopted by NASA for programming Space Shuttle on-board software. Program reliability is enhanced through language clarity and readability, modularity through program structure, and protection of code and data. Salient features of HAL/S include output orientation, automatic checking (with strictly enforced compiler rules), the availability of linear algebra, real-time control, a statement-level simulator, and compiler transferability (for applying HAL/S to additional object and host computers). The compiler is described briefly.

  7. RAND’s Portfolio Analysis Tool (PAT): Theory, Methods, and Reference Manual

    DTIC Science & Technology

    2009-01-01

    Only fragmentary OCR snippets of this record are available. The recoverable content notes a connection to a modelling language such as Analytica® (a product of Lumina Decision Systems, Inc., www.lumina.com) as used in work for MDA; the use of absolute cell references ($ signs) so that formulas adjust automatically if PAT's structure changes (e.g., adding a column or row); and annual amounts expressed in real (inflation-protected) dollars, with the sign positive or negative depending on whether one is receiving or paying.

  8. Assessment of automatic exposure control performance in digital mammography using a no-reference anisotropic quality index

    NASA Astrophysics Data System (ADS)

    Barufaldi, Bruno; Borges, Lucas R.; Bakic, Predrag R.; Vieira, Marcelo A. C.; Schiabel, Homero; Maidment, Andrew D. A.

    2017-03-01

    Automatic exposure control (AEC) is used in mammography to obtain acceptable radiation dose and adequate image quality regardless of breast thickness and composition. Although there are physics methods for assessing the AEC, it is not clear whether mammography systems operate with optimal dose and image quality in clinical practice. In this work, we propose the use of a normalized anisotropic quality index (NAQI), validated in previous studies, to evaluate the quality of mammograms acquired using AEC. The authors used a clinical dataset that consists of 561 patients and 1,046 mammograms (craniocaudal breast views). The results show that image quality is often maintained, even at various radiation levels (mean NAQI = 0.14 +/- 0.02). However, a more careful analysis of NAQI reveals that the average image quality decreases as breast thickness increases. The NAQI is reduced by 32% on average, when the breast thickness increases from 31 to 71 mm. NAQI also decreases with lower breast density. The variation in breast parenchyma alone cannot fully account for the decrease of NAQI with thickness. Examination of images shows that images of large, fatty breasts are often inadequately processed. This work shows that NAQI can be applied in clinical mammograms to assess mammographic image quality, and highlights the limitations of the automatic exposure control for some images.

  9. Automatic quality assessment of planetary images

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, P.; Muller, J.-P.

    2015-10-01

    A significant fraction of planetary images are corrupted beyond the point that much scientific meaning can be extracted. For example, transmission errors result in missing data which is unrecoverable. The available planetary image datasets include many such "bad data", which both occupy valuable scientific storage resources and create false impressions about planetary image availability for specific planetary objects or target areas. In this work, we demonstrate a pipeline that we have developed to automatically assess the quality of planetary images. Additionally, this method discriminates between different types of image degradation, such as low-quality originating from camera flaws or low-quality triggered by atmospheric conditions, etc. Examples of quality assessment results for Viking Orbiter imagery will be also presented.

  10. Improving the Quality of Welding Seam of Automatic Welding of Buckets Based on TCP

    NASA Astrophysics Data System (ADS)

    Hu, Min

    2018-02-01

    Since February 2014, the welding defects of the automatic welding line of buckets have been frequently appeared. The average repair time of each bucket is 26min, which seriously affects the production efficiency and welding quality. We conducted troubleshooting, and found the main reasons for the welding defects of the buckets were the deviations of the center points of the robot tools and the poor quality of the locating welding. We corrected the gripper, welding torch, and accuracy of repeat positioning of robots to control the quality of positioning welding. The welding defect rate of buckets was reduced greatly, ensuring the production efficiency and welding quality.

  11. Blood Pressure Checker

    NASA Technical Reports Server (NTRS)

    1979-01-01

    An estimated 30 million people in the United States have high blood pressure, or hypertension. But a great many of them are unaware of it because hypertension, in its initial stages, displays no symptoms. Thus, the simply operated blood pressure checking devices now widely located in public places are useful health aids. The device described here, the Medimax 30, is a direct spinoff from NASA technology developed to monitor astronauts in space. For manned space flights, NASA wanted a compact, highly reliable, extremely accurate method of checking astronauts' blood pressure without the need for a physician's interpretive skill. NASA's Johnson Space Center and Technology, Inc., a contractor, developed an electronic sound processor that automatically analyzes blood flow sounds to get both systolic (contracting arteries) and diastolic (expanding arteries) blood pressure measurements. NASA granted a patent license for this technology to Advanced Life Sciences, Inc., New York City, manufacturer of the Medimax 30.

  12. Model Checking Artificial Intelligence Based Planners: Even the Best Laid Plans Must Be Verified

    NASA Technical Reports Server (NTRS)

    Smith, Margaret H.; Holzmann, Gerard J.; Cucullu, Gordon C., III; Smith, Benjamin D.

    2005-01-01

    Automated planning systems (APS) are gaining acceptance for use on NASA missions, as evidenced by APS flown on missions such as Orbiter and Deep Space 1, both of which were commanded by onboard planning systems. The planning system takes high-level goals and expands them onboard into a detailed sequence of actions that the spacecraft executes. The system must be verified to ensure that the automatically generated plans achieve the goals as expected and do not generate actions that would harm the spacecraft or mission. These systems are typically tested using empirical methods. Formal methods, such as model checking, offer exhaustive or measurable test coverage, which leads to much greater confidence in correctness. This paper describes a formal method based on the SPIN model checker. This method guarantees that possible plans meet certain desirable properties. We express the input model in Promela, the language of SPIN, and express the properties of desirable plans formally.

  13. Automatic welding of stainless steel tubing

    NASA Technical Reports Server (NTRS)

    Clautice, W. E.

    1978-01-01

    The use of automatic welding for making girth welds in stainless steel tubing was investigated as well as the reduction in fabrication costs resulting from the elimination of radiographic inspection. Test methodology, materials, and techniques are discussed, and data sheets for individual tests are included. Process variables studied include welding amperes, revolutions per minute, and shielding gas flow. Strip chart recordings, as a definitive method of insuring weld quality, are studied. Test results, determined by both radiographic and visual inspection, are presented and indicate that once optimum welding procedures for specific sizes of tubing are established, and the welding machine operations are certified, then the automatic tube welding process produces good quality welds repeatedly, with a high degree of reliability. Revised specifications for welding tubing using the automatic process and weld visual inspection requirements at the Kennedy Space Center are enumerated.

  14. Sub-pixel analysis to support graphic security after scanning at low resolution

    NASA Astrophysics Data System (ADS)

    Haas, Bertrand; Cordery, Robert; Gou, Hongmei; Decker, Steve

    2006-02-01

    Whether in the domain of audio, video or finance, our world tends to become increasingly digital. However, for diverse reasons, the transition from analog to digital is often much extended in time, and proceeds by long steps (and sometimes never completes). One such step is the conversion of information on analog media to digital information. We focus in this paper on the conversion (scanning) of printed documents to digital images. Analog media have the advantage over digital channels that they can harbor much imperceptible information that can be used for fraud detection and forensic purposes. But this secondary information usually fails to be retrieved during the conversion step. This is particularly relevant since the Check-21 act (Check Clearing for the 21st Century act) became effective in 2004 and allows images of checks to be handled by banks as usual paper checks. We use here this situation of check scanning as our primary benchmark for graphic security features after scanning. We will first present a quick review of the most common graphic security features currently found on checks, with their specific purpose, qualities and disadvantages, and we demonstrate their poor survivability after scanning in the average scanning conditions expected from the Check-21 Act. We will then present a novel method of measurement of distances between and rotations of line elements in a scanned image: Based on an appropriate print model, we refine direct measurements to an accuracy beyond the size of a scanning pixel, so we can then determine expected distances, periodicity, sharpness and print quality of known characters, symbols and other graphic elements in a document image. Finally we will apply our method to fraud detection of documents after gray-scale scanning at 300dpi resolution. We show in particular that alterations on legitimate checks or copies of checks can be successfully detected by measuring with sub-pixel accuracy the irregularities inherently introduced by the illegitimate process.
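
    The abstract does not spell out the authors' sub-pixel measurement method; as a rough illustration of the general idea, the sketch below (Python, using a made-up 1-D scan profile) refines the location of a printed edge beyond pixel resolution by fitting a parabola to the gradient-magnitude peak. Distances between two such refined edge positions could then be compared against the expected spacing of a security pattern.

      import numpy as np

      def subpixel_edge_position(profile):
          """Estimate an edge location in a 1-D intensity profile to sub-pixel
          accuracy by parabolic interpolation of the gradient-magnitude peak."""
          grad = np.abs(np.gradient(profile.astype(float)))
          k = int(np.argmax(grad))                     # coarse (pixel) location
          if 0 < k < len(grad) - 1:
              y0, y1, y2 = grad[k - 1], grad[k], grad[k + 1]
              denom = y0 - 2.0 * y1 + y2
              offset = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
              return k + offset                        # refined sub-pixel location
          return float(k)

      # Hypothetical scan line across a printed edge located near pixel 27.3.
      x = np.arange(60)
      profile = 200.0 - 150.0 / (1.0 + np.exp(-(x - 27.3)))
      print(round(subpixel_edge_position(profile), 2))   # approximately 27.3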

  15. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    PubMed Central

    Xian, Xuefeng; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost. PMID:28588611
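
    The rank-based selection itself is not described in this abstract; the following sketch is only a generic illustration, assuming each automatically extracted fact carries an extractor confidence and that a toy mutual-exclusion constraint marks conflicting candidates as more valuable to crowdsource.

      # Illustrative only: pick the candidate facts whose verification is expected to
      # resolve the most uncertainty, given a toy mutual-exclusion constraint.
      candidates = [
          # (subject, relation, object, extractor confidence)
          ("Paris", "capital_of", "France", 0.95),
          ("Lyon", "capital_of", "France", 0.40),
          ("Rome", "capital_of", "Italy", 0.90),
          ("Milan", "capital_of", "Italy", 0.35),
      ]

      def benefit(fact, all_facts):
          """Heuristic benefit: uncertainty of the fact plus a bonus when it
          conflicts with another candidate under a one-capital-per-country rule."""
          s, r, o, p = fact
          uncertainty = 1.0 - abs(2.0 * p - 1.0)          # highest when p is near 0.5
          conflict = any(f is not fact and f[1] == r and f[2] == o for f in all_facts)
          return uncertainty + (0.5 if conflict else 0.0)

      budget = 2   # number of questions we can afford to crowdsource
      queue = sorted(candidates, key=lambda f: benefit(f, candidates), reverse=True)[:budget]
      for fact in queue:
          print("ask crowd about:", fact[:3])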

  16. Possibilities and limits of Internet-based registers.

    PubMed

    Wild, Michael; Candrian, Aron; Wenda, Klaus

    2009-03-01

    The Internet is an inexpensive platform for the investigation of medical questions in case of low prevalence. By accessing www.ao-nailregister.org, every interested participant may participate in the English-language survey of the complications specific to the femoral nail. The address data of the participant, the anonymised key data of the patients and the medical parameters are entered. In real time, these data are checked for plausibility, evaluated and published on the Internet where they are freely accessible immediately. Because of national differences, data acquisition caused considerable difficulties at the beginning. In addition, wrong data were entered because of linguistic or contextual misunderstandings. After having reworked the questionnaire completely, facilitating data input and implementing an automated plausibility check, these difficulties could be cleared. In a next step, the automatic evaluation of the data was implemented. Only very few data still have to be checked for plausibility manually to exclude wrong entries, which cannot be verified by the computer. The effort required for data acquisition and evaluation of the Internet-based femoral nail register was reduced distinctly. The possibility of free international participation as well as the freely accessible representation of the results offers transparency.
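
    As an illustration of the kind of automated plausibility check described, the sketch below screens a hypothetical register entry (all field names and limits are invented for the example) and returns the problems that would otherwise require manual review.

      from datetime import date

      def plausibility_errors(entry):
          """Return a list of plausibility problems for a hypothetical register entry;
          anything not caught here would still need a manual check."""
          errors = []
          if not (0 <= entry.get("patient_age", -1) <= 110):
              errors.append("implausible patient age")
          if entry.get("implant_date", date.max) > date.today():
              errors.append("implant date lies in the future")
          if entry.get("nail_diameter_mm") not in (9, 10, 11, 12, 13):
              errors.append("nail diameter outside the available sizes")
          if entry.get("complication") == "none" and entry.get("revision_surgery"):
              errors.append("revision surgery reported without a complication")
          return errors

      entry = {"patient_age": 142, "implant_date": date(2008, 5, 3),
               "nail_diameter_mm": 10, "complication": "none", "revision_surgery": True}
      print(plausibility_errors(entry))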

  17. Automatic oscillator frequency control system

    NASA Technical Reports Server (NTRS)

    Smith, S. F. (Inventor)

    1985-01-01

    A frequency control system makes an initial correction of the frequency of its own timing circuit after comparison against a frequency of known accuracy and then sequentially checks and corrects the frequencies of several voltage controlled local oscillator circuits. The timing circuit initiates the machine cycles of a central processing unit which applies a frequency index to an input register in a modulo-sum frequency divider stage and enables a multiplexer to clock an accumulator register in the divider stage with a cyclical signal derived from the oscillator circuit being checked. Upon expiration of the interval, the processing unit compares the remainder held as the contents of the accumulator against a stored zero error constant and applies an appropriate correction word to a correction stage to shift the frequency of the oscillator being checked. A signal from the accumulator register may be used to drive a phase plane ROM and, with periodic shifts in the applied frequency index, to provide frequency shift keying of the resultant output signal. Interposition of a phase adder between the accumulator register and phase plane ROM permits phase shift keying of the output signal by periodic variation in the value of a phase index applied to one input of the phase adder.
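
    A minimal numerical sketch of the check-and-correct idea follows, with made-up frequencies and a simple correction scale; it only mimics the accumulator count being compared against a stored zero-error constant and is not a model of the patented circuit.

      def check_and_correct(osc_freq_hz, nominal_freq_hz=1_000_000.0, gate_time_s=0.1,
                            hz_per_lsb=1.0):
          """Toy model of the accumulator-based check: count oscillator cycles over a
          gate interval timed by the reference, compare the count against the stored
          zero-error constant, and derive a correction word for the oscillator."""
          counts = int(round(osc_freq_hz * gate_time_s))              # accumulator contents
          zero_error_constant = int(round(nominal_freq_hz * gate_time_s))
          error_counts = counts - zero_error_constant                 # remainder after the interval
          correction_word = -error_counts                             # one count is 1/gate_time_s Hz
          corrected_freq = osc_freq_hz + correction_word * hz_per_lsb / gate_time_s
          return correction_word, corrected_freq

      # A local oscillator that has drifted 250 Hz high:
      word, freq = check_and_correct(1_000_250.0)
      print(word, round(freq, 1))   # -25, 1000000.0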

  18. Quality Evaluation of Coatings by Automatic Scratch Testing

    DTIC Science & Technology

    1989-11-01

    Only fragmentary OCR text from the report cover pages is available for this record. The recoverable information identifies report MTL TR 89-98, "Quality Evaluation of Coatings by Automatic Scratch Testing," by Kirit J. Bhansali (U.S. Army Materials Technology Laboratory, Watertown, Massachusetts 02172-0001) with co-author T. Kattamis; the report is approved for public release with unlimited distribution.

  19. Is a quasi-3D dosimeter better than a 2D dosimeter for Tomotherapy delivery quality assurance?

    NASA Astrophysics Data System (ADS)

    Xing, Aitang; Deshpande, Shrikant; Arumugam, Sankar; George, Armia; Holloway, Lois; Vial, Philip; Goozee, Gary

    2015-01-01

    Delivery quality assurance (DQA) has been performed for each Tomotherapy patient using either ArcCHECK or MatriXX Evolution in our clinic since 2012. ArcCHECK is a quasi-3D dosimeter whereas MatriXX is a 2D detector. A review of DQA results was performed for all patients in the last three years, a total of 221 DQA plans. These DQA plans came from 215 patients with a variety of treatment sites including head-neck, pelvis, and chest wall. The acceptable Gamma pass rate in our clinic is over 95% using 3 mm and 3% of maximum planned dose with a 10% dose threshold. The mean value and standard deviation of Gamma pass rates were 98.2% ± 1.98 (1SD) for MatriXX and 98.5% ± 1.88 (1SD) for ArcCHECK. A paired t-test was also performed for the group of patients whose DQA was performed with both the ArcCHECK and MatriXX. No statistically significant difference in Gamma pass rate was found between ArcCHECK and MatriXX. The considered 3D and 2D dosimeters achieved similar results in performing routine patient-specific DQA for patients treated on a TomoTherapy unit.
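
    For readers unfamiliar with the statistics used, the sketch below runs the same kind of paired t-test on hypothetical per-plan Gamma pass rates (the numbers are invented; SciPy is assumed to be available).

      import numpy as np
      from scipy import stats

      # Hypothetical gamma pass rates (%) for the same DQA plans measured with both devices.
      arccheck = np.array([98.9, 97.5, 99.1, 96.8, 98.4, 99.6, 97.9, 98.8])
      matrixx  = np.array([98.4, 97.9, 98.7, 96.5, 98.9, 99.2, 97.4, 98.6])

      t_stat, p_value = stats.ttest_rel(arccheck, matrixx)   # paired t-test
      print(f"mean ArcCHECK {arccheck.mean():.1f}%, mean MatriXX {matrixx.mean():.1f}%")
      print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")    # p > 0.05: no significant difference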

  20. Automatic assessment of the quality of patient positioning in mammography

    NASA Astrophysics Data System (ADS)

    Bülow, Thomas; Meetz, Kirsten; Kutra, Dominik; Netsch, Thomas; Wiemker, Rafael; Bergtholdt, Martin; Sabczynski, Jörg; Wieberneit, Nataly; Freund, Manuela; Schulze-Wenck, Ingrid

    2013-02-01

    Quality assurance has been recognized as crucial for the success of population-based breast cancer screening programs using x-ray mammography. Quality guidelines and criteria have been defined in the US as well as the European Union in order to ensure the quality of breast cancer screening. Taplin et al. report that incorrect positioning of the breast is the major image quality issue in screening mammography. Consequently, guidelines and criteria for correct positioning and for the assessment of the positioning quality in mammograms play an important role in the quality standards. In this paper we present a system for the automatic evaluation of positioning quality in mammography according to the existing standardized criteria. This involves the automatic detection of anatomic landmarks in medio- lateral oblique (MLO) and cranio-caudal (CC) mammograms, namely the pectoral muscle, the mammilla and the infra-mammary fold. Furthermore, the detected landmarks are assessed with respect to their proper presentation in the image. Finally, the geometric relations between the detected landmarks are investigated to assess the positioning quality. This includes the evaluation whether the pectoral muscle is imaged down to the mammilla level, and whether the posterior nipple line diameter of the breast is consistent between the different views (MLO and CC) of the same breast. Results of the computerized assessment are compared to ground truth collected from two expert readers.
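
    The paper's landmark detection cannot be reproduced from the abstract, but once landmarks are available the geometric criteria are simple to state; the sketch below checks two of them (pectoral muscle imaged down to nipple level, and posterior nipple line consistency between MLO and CC views) for hypothetical landmark coordinates.

      import math

      def positioning_checks(mlo, cc, tolerance_cm=1.0):
          """Evaluate two standard positioning criteria from detected landmarks.
          Coordinates are in cm with y increasing towards the inferior (lower) edge;
          all field names and numbers are hypothetical."""
          # Criterion 1: the pectoral muscle should be imaged down to the nipple level,
          # i.e. its lowest visible point lies at or below the nipple.
          pectoral_ok = mlo["pectoral_lowest_y"] >= mlo["nipple_y"]
          # Criterion 2: the posterior nipple line (PNL) length should agree between views.
          pnl_mlo = math.dist(mlo["nipple_xy"], mlo["pnl_foot_xy"])   # nipple to pectoral edge
          pnl_cc = cc["nipple_to_chest_wall_cm"]                      # nipple to chest wall
          pnl_ok = abs(pnl_mlo - pnl_cc) <= tolerance_cm
          return {"pectoral_to_nipple_level": pectoral_ok, "pnl_consistent": pnl_ok}

      mlo = {"pectoral_lowest_y": 13.1, "nipple_y": 12.5,
             "nipple_xy": (14.0, 12.5), "pnl_foot_xy": (5.2, 9.1)}
      cc = {"nipple_to_chest_wall_cm": 9.0}
      print(positioning_checks(mlo, cc))   # both criteria met for these made-up landmarks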

  1. Check out the Atmospheric Science User Forum

    Atmospheric Science Data Center

    2016-11-16

    Tuesday, November 15, 2016. The ASDC would like to bring your attention to the Atmospheric Science User Forum. The purpose of this forum is to improve user service, quality, and efficiency of NASA atmospheric science data. The forum intends to provide a quick and easy way to facilitate ...

  2. Superiority of automatic remote monitoring compared with in-person evaluation for scheduled ICD follow-up in the TRUST trial - testing execution of the recommendations.

    PubMed

    Varma, Niraj; Michalski, Justin; Stambler, Bruce; Pavri, Behzad B

    2014-05-21

    To test recommended implantable cardioverter defibrillator (ICD) follow-up methods by 'in-person evaluations' (IPE) vs. 'remote Home Monitoring' (HM). ICD patients were randomized 2:1 to automatic HM or to Conventional monitoring, with follow-up checks scheduled at 3, 6, 9, 12, and 15 months post-implant. Conventional patients were evaluated with IPE only. Home Monitoring patients were assessed remotely only for 1 year between the 3- and 15-month evaluations. Adherence to follow-up was measured. HM and Conventional patients were similar (age 63 years, 72% male, left ventricular ejection fraction 29%, primary prevention 73%, DDD 57%). Conventional management suffered greater patient attrition during the trial (20.1 vs. 14.2% in HM, P = 0.007). Three-month follow-up occurred in 84% in both groups. There was 100% adherence (5 of 5 checks) in 47.3% of Conventional vs. 59.7% of HM patients (P < 0.001). Between 3 and 15 months, HM exhibited superior (2.2×) adherence to scheduled follow-up [incidence of failed follow-up was 146 of 2421 (6.0%) in HM vs. 145 of 1098 (13.2%) in Conventional, P < 0.001] and punctuality. In HM (daily transmission success rate median 91%), transmission loss caused only 22 of 2275 (0.97%) failed HM evaluations between 3 and 15 months; the others resulted from clinic oversight. The overall IPE failure rate in Conventional [193 of 1841 (10.5%)] exceeded that in HM [97 of 1484 (6.5%)] by 62% (P < 0.001), i.e. HM patients remained more loyal to IPE when this was mandated. Automatic remote monitoring better preserves patient retention and adherence to scheduled follow-up compared with IPE. NCT00336284. © The Author 2014. Published by Oxford University Press on behalf of the European Society of Cardiology.

  3. Helical tomotherapy quality assurance with ArcCHECK.

    PubMed

    Chapman, David; Barnett, Rob; Yartsev, Slav

    2014-01-01

    To design a quality assurance (QA) procedure for helical tomotherapy that measures multiple beam parameters with 1 delivery and uses a rotating gantry to simulate treatment conditions. The customized QA procedure was preprogrammed on the tomotherapy operator station. The dosimetry measurements were performed using an ArcCHECK diode array and an A1SL ion chamber inserted in the central holder. The ArcCHECK was positioned 10cm above the isocenter so that the 21-cm diameter detector array could measure the 40-cm wide tomotherapy beam. During the implementation of the new QA procedure, separate comparative measurements were made using ion chambers in both liquid and solid water, the tomotherapy onboard detector array, and a MapCHECK diode array for a period of 10 weeks. There was good agreement (within 1.3%) for the beam output and cone ratio obtained with the new procedure and the routine QA measurements. The measured beam energy was comparable (0.3%) to solid water measurement during the 10-week evaluation period, excluding 2 of the 10 measurements with unusually high background. The symmetry reading was similarly compromised for those 2 weeks, and on the other weeks, it deviated from the solid water reading by ~2.5%. The ArcCHECK phantom presents a suitable alternative for performing helical tomotherapy QA, provided the background is collected properly. The proposed weekly procedure using ArcCHECK and water phantom makes the QA process more efficient. Copyright © 2014 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  4. Helical tomotherapy quality assurance with ArcCHECK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, David; Barnett, Rob; Yartsev, Slav, E-mail: slav.yartsev@lhsc.on.ca

    2014-07-01

    To design a quality assurance (QA) procedure for helical tomotherapy that measures multiple beam parameters with 1 delivery and uses a rotating gantry to simulate treatment conditions. The customized QA procedure was preprogrammed on the tomotherapy operator station. The dosimetry measurements were performed using an ArcCHECK diode array and an A1SL ion chamber inserted in the central holder. The ArcCHECK was positioned 10 cm above the isocenter so that the 21-cm diameter detector array could measure the 40-cm wide tomotherapy beam. During the implementation of the new QA procedure, separate comparative measurements were made using ion chambers in both liquid and solid water, the tomotherapy onboard detector array, and a MapCHECK diode array for a period of 10 weeks. There was good agreement (within 1.3%) for the beam output and cone ratio obtained with the new procedure and the routine QA measurements. The measured beam energy was comparable (0.3%) to solid water measurement during the 10-week evaluation period, excluding 2 of the 10 measurements with unusually high background. The symmetry reading was similarly compromised for those 2 weeks, and on the other weeks, it deviated from the solid water reading by ∼2.5%. The ArcCHECK phantom presents a suitable alternative for performing helical tomotherapy QA, provided the background is collected properly. The proposed weekly procedure using ArcCHECK and water phantom makes the QA process more efficient.

  5. Combined Use of Automatic Tube Voltage Selection and Current Modulation with Iterative Reconstruction for CT Evaluation of Small Hypervascular Hepatocellular Carcinomas: Effect on Lesion Conspicuity and Image Quality

    PubMed Central

    Lv, Peijie; Liu, Jie; Zhang, Rui; Jia, Yan

    2015-01-01

    Objective To assess the lesion conspicuity and image quality in CT evaluation of small (≤ 3 cm) hepatocellular carcinomas (HCCs) using automatic tube voltage selection (ATVS) and automatic tube current modulation (ATCM) with or without iterative reconstruction. Materials and Methods One hundred and five patients with 123 HCC lesions were included. Fifty-seven patients were scanned using both ATVS and ATCM and images were reconstructed using either filtered back-projection (FBP) (group A1) or sinogram-affirmed iterative reconstruction (SAFIRE) (group A2). Forty-eight patients were imaged using only ATCM, with a fixed tube potential of 120 kVp and FBP reconstruction (group B). Quantitative parameters (image noise in Hounsfield unit and contrast-to-noise ratio of the aorta, the liver, and the hepatic tumors) and qualitative visual parameters (image noise, overall image quality, and lesion conspicuity as graded on a 5-point scale) were compared among the groups. Results Group A2 scanned with the automatically chosen 80 kVp and 100 kVp tube voltages ranked the best in lesion conspicuity and subjective and objective image quality (p values ranging from < 0.001 to 0.004) among the three groups, except for overall image quality between group A2 and group B (p = 0.022). Group A1 showed higher image noise (p = 0.005) but similar lesion conspicuity and overall image quality as compared with group B. The radiation dose in group A was 19% lower than that in group B (p = 0.022). Conclusion CT scanning with combined use of ATVS and ATCM and image reconstruction with SAFIRE algorithm provides higher lesion conspicuity and better image quality for evaluating small hepatic HCCs with radiation dose reduction. PMID:25995682

  6. Man vs. Machine: An interactive poll to evaluate hydrological model performance of a manual and an automatic calibration

    NASA Astrophysics Data System (ADS)

    Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    In recent years, much research in hydrological modelling has been invested in improving the automatic calibration of rainfall-runoff models. This includes, for example, (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation and (3) the usage of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration can, with expert knowledge, judge the hydrographs both in detail and holistically. This integrated eye-ball verification procedure can be difficult to formulate as objective criteria, even when using a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often solely involves objective criteria such as the Nash-Sutcliffe efficiency or the Kling-Gupta efficiency as a benchmark during the calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph, leaving open questions concerning the quality of a simulation. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets obtained from a manual and an automatic calibration. A subset of the resulting hydrographs for observation and simulation, representing the typical flow conditions and events, will be evaluated in this study. In an interactive crowdsourcing approach, experts attending the session can vote for their preferred simulated hydrograph without having information on the calibration method that produced it. The result of the poll can therefore be seen as an additional quality criterion for the comparison of the two approaches and help in the evaluation of the automatic calibration method.
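
    Since the comparison hinges on objective criteria such as the Nash-Sutcliffe and Kling-Gupta efficiencies, a short reference implementation of both (standard definitions, invented runoff values) may help readers interpret the scores.

      import numpy as np

      def nse(obs, sim):
          """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the observed mean."""
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def kge(obs, sim):
          """Kling-Gupta efficiency built from correlation, bias ratio and variability ratio."""
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          r = np.corrcoef(obs, sim)[0, 1]
          beta = sim.mean() / obs.mean()
          alpha = sim.std() / obs.std()
          return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (alpha - 1) ** 2)

      obs = [12.1, 15.4, 30.2, 22.8, 18.0, 14.3]   # hypothetical observed runoff
      sim = [11.5, 16.0, 27.9, 24.1, 17.2, 15.1]   # hypothetical simulated runoff
      print(round(nse(obs, sim), 3), round(kge(obs, sim), 3))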

  7. Quality assurance of weather data for agricultural system model input

    USDA-ARS?s Scientific Manuscript database

    It is well known that crop production and hydrologic variation on watersheds is weather related. Rarely, however, are meteorological data quality checks reported for agricultural systems model research. We present quality assurance procedures for agricultural system model weather data input. Problems...
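
    The manuscript snippet does not list the specific procedures, so the sketch below only illustrates the kind of automated checks commonly applied to daily weather inputs (hypothetical field names and physically motivated but arbitrary limits): simple range checks plus one internal-consistency check.

      def weather_qa_flags(rec):
          """Flag common problems in a daily weather record (hypothetical field names
          and limits); flagged values are reviewed rather than silently dropped."""
          flags = []
          if not (-60.0 <= rec["tmin_c"] <= rec["tmax_c"] <= 60.0):
              flags.append("temperature outside physical limits or tmin > tmax")
          if not (0.0 <= rec["precip_mm"] <= 500.0):
              flags.append("implausible daily precipitation")
          if not (0.0 <= rec["srad_mj_m2"] <= 40.0):
              flags.append("solar radiation out of range")
          if rec["precip_mm"] > 0.0 and rec["srad_mj_m2"] > 35.0:
              flags.append("heavy rain with near-clear-sky radiation (internal inconsistency)")
          return flags

      day = {"tmin_c": 18.0, "tmax_c": 9.0, "precip_mm": 12.0, "srad_mj_m2": 22.0}
      print(weather_qa_flags(day))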

  8. An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance.

    PubMed

    Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E

    2017-07-01

    The ArcCHECK-MR diode array utilizes a correction system with a virtual inclinometer to correct the angular response dependencies of the diodes. However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system due to the virtual inclinometer's incompatibility with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without magnetic field effects taken into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analysis were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors utilizing beam information from the ViewRay TPS was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single simultaneous beam and were measured with the ArcCHECK. The current correction factors were applied using both the new and current methods. The new method was also used to apply corrections to the original 19 ViewRay plans. It was found the ArcCHECK systematically reports doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded the current ArcCHECK correction factors are invalid and/or inadequate to correct measurements on the ViewRay system. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  9. Efficient Verification of Holograms Using Mobile Augmented Reality.

    PubMed

    Hartl, Andreas Daniel; Arth, Clemens; Grubert, Jens; Schmalstieg, Dieter

    2016-07-01

    Paper documents such as passports, visas and banknotes are frequently checked by inspection of security elements. In particular, optically variable devices such as holograms are important, but difficult to inspect. Augmented Reality can provide all relevant information on standard mobile devices. However, hologram verification on mobile devices still takes a long time and provides lower accuracy than inspection by human individuals using appropriate reference information. We aim to address these drawbacks by automatic matching combined with a special parametrization of an efficient goal-oriented user interface which supports constrained navigation. We first evaluate a series of similarity measures for matching hologram patches to provide a sound basis for automatic decisions. Then a re-parametrized user interface is proposed based on observations of typical user behavior during document capture. These measures help to reduce capture time to approximately 15 s with better decisions regarding the evaluated samples than what can be achieved by untrained users.
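
    The similarity measures evaluated in the paper are not named in this abstract; as one common example of matching a captured hologram patch against a reference, the sketch below computes a zero-mean normalized cross-correlation (NumPy assumed, synthetic patches).

      import numpy as np

      def ncc(patch_a, patch_b):
          """Zero-mean normalized cross-correlation between two equally sized patches;
          1.0 means identical up to brightness/contrast, values near 0 mean unrelated."""
          a = patch_a.astype(float).ravel()
          b = patch_b.astype(float).ravel()
          a -= a.mean(); b -= b.mean()
          denom = np.linalg.norm(a) * np.linalg.norm(b)
          return float(a @ b / denom) if denom > 0 else 0.0

      rng = np.random.default_rng(0)
      reference = rng.integers(0, 256, size=(32, 32))
      captured = reference + rng.normal(0, 10, size=(32, 32))   # same view, sensor noise
      unrelated = rng.integers(0, 256, size=(32, 32))
      print(round(ncc(reference, captured), 2), round(ncc(reference, unrelated), 2))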

  10. Automatic external filling for the ion source gas bottle of a Van de Graaff accelerator

    NASA Astrophysics Data System (ADS)

    Strivay, D.; Bastin, T.; Dehove, C.; Dumont, P. D.; Marchal, A.; Garnir, H.; Weber, G.

    1997-09-01

    We describe a fully automatic system we developed to fill, from an external gas bottle, the ion source terminal gas storage bottle of a 2 MV Van de Graaff accelerator without depressurizing the 25 bar insulating gas. The system is based on a programmable logic controller driving electropneumatic valves. The only manual operation is the connection of the external gas cylinder. The time needed for a gas change is reduced to typically 15 min (depending on the residual pressure wished for the gas removed from the terminal bottle). To check this system, we studied the ionic composition of the ion beam delivered by our accelerator after different gas changes. The switching magnet of our accelerator was used to analyse the ionic composition of the accelerated beams in order to verify the degree of elimination of the previous gases in the system.

  11. Progress in protein crystallography.

    PubMed

    Dauter, Zbigniew; Wlodawer, Alexander

    2016-01-01

    Macromolecular crystallography evolved enormously from the pioneering days, when structures were solved by "wizards" performing all complicated procedures almost by hand. In the current situation crystal structures of large systems can be often solved very effectively by various powerful automatic programs in days or hours, or even minutes. Such progress is to a large extent coupled to the advances in many other fields, such as genetic engineering, computer technology, availability of synchrotron beam lines and many other techniques, creating the highly interdisciplinary science of macromolecular crystallography. Due to this unprecedented success crystallography is often treated as one of the analytical methods and practiced by researchers interested in structures of macromolecules, but not highly competent in the procedures involved in the process of structure determination. One should therefore take into account that the contemporary, highly automatic systems can produce results almost without human intervention, but the resulting structures must be carefully checked and validated before their release into the public domain.

  12. Results from Automated Cloud and Dust Devil Detection Onboard the MER

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Castano, Rebecca; Bornstein, Benjamin; Fukunaga, Alex; Castano, Andres; Biesiadecki, Jeffrey; Greeley, Ron; Whelley, Patrick; Lemmon, Mark

    2008-01-01

    We describe a new capability to automatically detect dust devils and clouds in imagery onboard rovers, enabling downlink of just the images with the targets or only portions of the images containing the targets. Previously, the MER rovers conducted campaigns to image dust devils and clouds by commanding a set of images be collected at fixed times and downloading the entire image set. By increasing the efficiency of the campaigns, more campaigns can be executed. Software for these new capabilities was developed, tested, integrated, uploaded, and operationally checked out on both rovers as part of the R9.2 software upgrade. In April 2007 on Sol 1147 a dust devil was automatically detected onboard the Spirit rover for the first time. We discuss the operational usage of the capability and present initial dust devil results showing how this preliminary application has demonstrated the feasibility and potential benefits of the approach.

  13. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Subsequently the verification of consistency of these diagrams is needed in order to identify errors in requirements at the early stage of the development process. The verification of consistency is difficult due to a semi-formal nature of UML diagrams. We propose automatic verification of consistency of the series of UML diagrams originating from abstract models implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to generate automatically complete workflow applications from consistent and complete models developed from abstract models (e.g. Business Context Diagram). Therefore, our method can be used to check practicability (feasibility) of software architecture models.

  14. Automatic Welding of Stainless Steel Tubing

    NASA Technical Reports Server (NTRS)

    Clautice, W. E.

    1978-01-01

    To determine if the use of automatic welding would allow reduction of the radiographic inspection requirement, and thereby reduce fabrication costs, a series of welding tests were performed. In these tests an automatic welder was used on stainless steel tubing of 1/2, 3/4, and 1/2 inch diameter sizes. The optimum parameters were investigated to determine how much variation from optimum in machine settings could be tolerated and still result in a good quality weld. The process variables studied were the welding amperes, the revolutions per minute as a function of the circumferential weld travel speed, and the shielding gas flow. The investigation showed that the close control of process variables in conjunction with a thorough visual inspection of welds can be relied upon as an acceptable quality assurance procedure, thus permitting the radiographic inspection to be reduced by a large percentage when using the automatic process.

  15. Direction of the Rational Use of Water at the Facilities for Growing Poultry

    NASA Astrophysics Data System (ADS)

    Potseluev, A. A.; Nazarov, I. V.; Porotkova, A. K.; Volovikova, N. V.

    2018-01-01

    The article notes the effect of water use in the technological process of automatic poultry watering on the quality and quantity of outputs. At the same time, the requirements for the quality of the water used, the regimes of its consumption by the poultry, and the role of mechanization of the automatic watering process in the rational use of the water resource and in the processing and reuse of contaminated wastes are discussed. Within the framework of this concept, we propose constructive and technological solutions for systems and means of automatic poultry watering that provide for the rational use of water, one of the resources essential to the vital activity of agricultural poultry.

  16. HOLA: Human-like Orthogonal Network Layout.

    PubMed

    Kieffer, Steve; Dwyer, Tim; Marriott, Kim; Wybrow, Michael

    2016-01-01

    Over the last 50 years a wide variety of automatic network layout algorithms have been developed. Some are fast heuristic techniques suitable for networks with hundreds of thousands of nodes while others are multi-stage frameworks for higher-quality layout of smaller networks. However, despite decades of research currently no algorithm produces layout of comparable quality to that of a human. We give a new "human-centred" methodology for automatic network layout algorithm design that is intended to overcome this deficiency. User studies are first used to identify the aesthetic criteria algorithms should encode, then an algorithm is developed that is informed by these criteria and finally, a follow-up study evaluates the algorithm output. We have used this new methodology to develop an automatic orthogonal network layout method, HOLA, that achieves measurably better (by user study) layout than the best available orthogonal layout algorithm and which produces layouts of comparable quality to those produced by hand.

  17. Feasibility of Extracting Key Elements from ClinicalTrials.gov to Support Clinicians' Patient Care Decisions.

    PubMed

    Kim, Heejun; Bian, Jiantao; Mostafa, Javed; Jonnalagadda, Siddhartha; Del Fiol, Guilherme

    2016-01-01

    Motivation: Clinicians need up-to-date evidence from high quality clinical trials to support clinical decisions. However, applying evidence from the primary literature requires significant effort. Objective: To examine the feasibility of automatically extracting key clinical trial information from ClinicalTrials.gov. Methods: We assessed the coverage of ClinicalTrials.gov for high quality clinical studies that are indexed in PubMed. Using 140 random ClinicalTrials.gov records, we developed and tested rules for the automatic extraction of key information. Results: The rate of high quality clinical trial registration in ClinicalTrials.gov increased from 0.2% in 2005 to 17% in 2015. Trials reporting results increased from 3% in 2005 to 19% in 2015. The accuracy of the automatic extraction algorithm for 10 trial attributes was 90% on average. Future research is needed to improve the algorithm accuracy and to design information displays to optimally present trial information to clinicians.
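
    The extraction rules themselves are not given in the abstract; the sketch below only illustrates the flavour of such rules on an already-parsed, hypothetical registry record (the field names, placeholder NCT number, and regular expression are invented and do not reflect the actual ClinicalTrials.gov schema).

      import re

      def extract_key_elements(record):
          """Pull a few key attributes out of an already-parsed trial record
          (a plain dict standing in for a registry entry)."""
          out = {
              "nct_id": record.get("nct_id"),
              "phase": record.get("phase"),
              "enrollment": record.get("enrollment"),
              "primary_outcome": record.get("primary_outcome"),
          }
          # Simple rule: pull the minimum age in years out of free-text eligibility criteria.
          m = re.search(r"(\d+)\s*Years and older", record.get("eligibility", ""), re.I)
          out["minimum_age_years"] = int(m.group(1)) if m else None
          return out

      trial = {"nct_id": "NCT00000000", "phase": "Phase 3", "enrollment": 412,
               "primary_outcome": "Change in systolic blood pressure at 12 weeks",
               "eligibility": "Ages eligible for study: 18 Years and older"}
      print(extract_key_elements(trial))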

  18. Great SEP events and space weather: 1. Experience of automatically searching for event beginnings; probabilities of false and missed events

    NASA Astrophysics Data System (ADS)

    Applbaum, David; Dorman, Lev; Pustil'Nik, Lev; Sternlieb, Abraham; Zagnetko, Alexander; Zukerman, Igor

    It is well known that during great SEP events, fluxes of energetic particles can be so large that the memory of computers and other electronics in space may be destroyed, and satellites and spacecraft may cease to function. According to the NOAA Space Weather Prediction Center, the following scales constitute dangerous solar radiation storms: S5, extreme (flux level of particles with energy ∼10 MeV more than 10^5); S4, severe (flux more than 10^4); and S3, strong (flux more than 10^3). In these periods it is necessary to switch off some of the electronics for a few hours. The highest-energy particles (those with a few GeV/nucleon and higher) are transported to Earth sooner (about 20 minutes after they accelerate and escape into the solar wind) than the main bulk of the lower-energy particles (about 60 minutes later). Here we describe the principles and operating experience of the automatic "SEP-Search" program. The beginning of an SEP event is now determined automatically at the Emilio Segre Observatory (cut-off rigidity 10.8 GV) by a simultaneous increase of 2.5 standard deviations in two sections of the neutron monitor; the "SEP-Search" program then uses 1-min data to check whether the observed increase reflects the beginning of a real great SEP event, after which the "SEP-Research" program automatically starts to work online. We also determine the probabilities of false and missed alerts.
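
    The onset criterion described (a simultaneous increase of 2.5 standard deviations in two neutron-monitor sections) can be illustrated in a few lines; the sketch below uses synthetic one-minute count rates and is not the operational "SEP-Search" implementation.

      import numpy as np

      def sep_onset(section_a, section_b, baseline_len=60, threshold_sigma=2.5):
          """Return the first minute at which BOTH neutron-monitor sections exceed their
          baseline mean by threshold_sigma standard deviations, or None."""
          a, b = np.asarray(section_a, float), np.asarray(section_b, float)
          mu_a, sd_a = a[:baseline_len].mean(), a[:baseline_len].std(ddof=1)
          mu_b, sd_b = b[:baseline_len].mean(), b[:baseline_len].std(ddof=1)
          for t in range(baseline_len, len(a)):
              if a[t] > mu_a + threshold_sigma * sd_a and b[t] > mu_b + threshold_sigma * sd_b:
                  return t
          return None

      rng = np.random.default_rng(1)
      counts_a = rng.normal(1000, 10, 90); counts_b = rng.normal(950, 10, 90)
      counts_a[75:] += 60; counts_b[75:] += 60     # injected simultaneous increase
      print(sep_onset(counts_a, counts_b))          # 75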

  19. ABI Base Recall: Automatic Correction and Ends Trimming of DNA Sequences.

    PubMed

    Elyazghi, Zakaria; Yazouli, Loubna El; Sadki, Khalid; Radouani, Fouzia

    2017-12-01

    Automated DNA sequencers produce chromatogram files in ABI format. When viewing chromatograms, some ambiguities are shown at various sites along the DNA sequences, because the program implemented in the sequencing machine and used to call bases cannot always precisely determine the right nucleotide, especially when it is represented by either a broad peak or a set of overlaying peaks. In such cases, a letter other than A, C, G, or T is recorded, most commonly N. Thus, DNA sequencing chromatograms need manual examination: checking for mis-calls and truncating the sequence when errors become too frequent. The purpose of this paper is to develop a program allowing the automatic correction of these ambiguities. This application is a Web-based program powered by Shiny and runs under R platform for an easy exploitation. As a part of the interface, we added the automatic ends clipping option, alignment against reference sequences, and BLAST. To develop and test our tool, we collected several bacterial DNA sequences from different laboratories within Institut Pasteur du Maroc and performed both manual and automatic correction. The comparison between the two methods was carried out. As a result, we note that our program, ABI base recall, accomplishes good correction with a high accuracy. Indeed, it increases the rate of identity and coverage and minimizes the number of mismatches and gaps, hence it provides solution to sequencing ambiguities and saves biologists' time and labor.

  20. Great SEP events and space weather: 2. Automatic determination of the solar energetic particle spectrum

    NASA Astrophysics Data System (ADS)

    Applbaum, David; Dorman, Lev; Pustil'Nik, Lev; Sternlieb, Abraham; Zagnetko, Alexander; Zukerman, Igor

    In Applbaum et al. (2010) it was described how the "SEP-Search" program works automatically, determining the beginning of a great SEP event on the basis of on-line one-minute NM data. The "SEP-Search" next uses one-minute data in order to check whether or not the observed increase reflects the beginning of a real great SEP event. If yes, the program "SEP-Research/Spectrum" automatically starts to work on line. We consider two variants: 1) a quiet period (no change in cut-off rigidity), 2) a disturbed period (characterized by possible changes of cut-off rigidity). We describe the method of determining the spectrum of SEP in the first variant (for this we need data for at least two components with different coupling functions). For the second variant we need data for at least three components with different coupling functions. We show that for these purposes one can use data of the total intensity and some different multiplicities, but that it is better to use data from two or three NM with different cut-off rigidities. We describe in detail the algorithms of the program "SEP-Research/Spectrum." We show how this program worked on examples of some historical great SEP events. The work of the NM on Mt. Hermon is supported by an Israeli (Tel Aviv University and ISA) and Italian (UNIRoma-Tre and IFSI-CNR) collaboration.

  1. Semi-automatic semantic annotation of PubMed Queries: a study on quality, efficiency, satisfaction

    PubMed Central

    Névéol, Aurélie; Islamaj-Doğan, Rezarta; Lu, Zhiyong

    2010-01-01

    Information processing algorithms require significant amounts of annotated data for training and testing. The availability of such data is often hindered by the complexity and high cost of production. In this paper, we investigate the benefits of a state-of-the-art tool to help with the semantic annotation of a large set of biomedical information queries. Seven annotators were recruited to annotate a set of 10,000 PubMed® queries with 16 biomedical and bibliographic categories. About half of the queries were annotated from scratch, while the other half were automatically pre-annotated and manually corrected. The impact of the automatic pre-annotations was assessed on several aspects of the task: time, number of actions, annotator satisfaction, inter-annotator agreement, quality and number of the resulting annotations. The analysis of annotation results showed that the number of required hand annotations is 28.9% less when using pre-annotated results from automatic tools. As a result, the overall annotation time was substantially lower when pre-annotations were used, while inter-annotator agreement was significantly higher. In addition, there was no statistically significant difference in the semantic distribution or number of annotations produced when pre-annotations were used. The annotated query corpus is freely available to the research community. This study shows that automatic pre-annotations are found helpful by most annotators. Our experience suggests using an automatic tool to assist large-scale manual annotation projects. This helps speed-up the annotation time and improve annotation consistency while maintaining high quality of the final annotations. PMID:21094696

  2. WE-G-BRA-02: SafetyNet: Automating Radiotherapy QA with An Event Driven Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, S; Kessler, M; Litzenberg, D

    2015-06-15

    Purpose: Quality assurance is an essential task in radiotherapy that often requires many manual tasks. We investigate the use of an event driven framework in conjunction with software agents to automate QA and eliminate wait times. Methods: An in-house developed subscription-publication service, EventNet, was added to the Aria OIS to be a message broker for critical events occurring in the OIS and software agents. Software agents operate without user intervention and perform critical QA steps. The results of the QA are documented and the resulting event is generated and passed back to EventNet. Users can subscribe to those events and receive messages based on custom filters designed to send passing or failing results to physicists or dosimetrists. Agents were developed to expedite the following QA tasks: Plan Revision, Plan 2nd Check, SRS Winston-Lutz isocenter, Treatment History Audit, Treatment Machine Configuration. Results: Plan approval in the Aria OIS was used as the event trigger for the Plan Revision QA and Plan 2nd Check agents. The agents pulled the plan data, executed the prescribed QA, stored the results and updated EventNet for publication. The Winston-Lutz agent reduced QA time from 20 minutes to 4 minutes and provided a more accurate quantitative estimate of radiation isocenter. The Treatment Machine Configuration agent automatically reports any changes to the treatment machine or HDR unit configuration. The agents are reliable, act immediately, and execute each task identically every time. Conclusion: An event driven framework has inverted the data chase in our radiotherapy QA process. Rather than have dosimetrists and physicists push data to QA software and pull results back into the OIS, the software agents perform these steps immediately upon receiving the sentinel events from EventNet. Mr Keranen is an employee of Varian Medical Systems. Dr. Moran’s institution receives research support for her effort for a linear accelerator QA project from Varian Medical Systems. Other quality projects involving her effort are funded by Blue Cross Blue Shield of Michigan, Breast Cancer Research Foundation, and the NIH.
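
    As a toy illustration of the subscription-publication pattern described (not the actual EventNet or Aria interfaces), the sketch below wires a "plan approved" event to an agent that runs a stand-in second check and publishes the result for notification.

      from collections import defaultdict

      class EventBroker:
          """Minimal subscription-publication broker in the spirit of the described
          EventNet service (all names and events here are illustrative)."""
          def __init__(self):
              self._subscribers = defaultdict(list)

          def subscribe(self, event_type, handler):
              self._subscribers[event_type].append(handler)

          def publish(self, event_type, payload):
              for handler in self._subscribers[event_type]:
                  handler(payload)

      def plan_second_check_agent(broker):
          """Software agent: runs an automated 2nd check when a plan is approved and
          publishes the result so subscribed physicists can be notified."""
          def on_plan_approved(plan):
              passed = plan["monitor_units"] < 1000        # stand-in for the real QA logic
              broker.publish("second_check_done", {"plan_id": plan["plan_id"], "passed": passed})
          broker.subscribe("plan_approved", on_plan_approved)

      broker = EventBroker()
      plan_second_check_agent(broker)
      broker.subscribe("second_check_done", lambda r: print("notify physicist:", r))
      broker.publish("plan_approved", {"plan_id": "P-017", "monitor_units": 842})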

  3. A procedure and program to calculate shuttle mask advantage

    NASA Astrophysics Data System (ADS)

    Balasinski, A.; Cetin, J.; Kahng, A.; Xu, X.

    2006-10-01

    A well-known recipe for reducing the mask cost component in product development is to place non-redundant elements of layout databases related to multiple products on one reticle plate [1,2]. Such reticles are known as multi-product, multi-layer, or, in general, multi-IP masks. The composition of the mask set should minimize not only the layout placement cost, but also the cost of the manufacturing process, design flow setup, and product design and introduction to market. An important factor is the quality check, which should be expeditious and enable thorough visual verification to avoid costly modifications once the data is transferred to the mask shop. In this work, in order to enable the layer placement and quality check procedure, we proposed an algorithm where mask layers are first lined up according to price and field tone [3]. Then, depending on the product die size, expected fab throughput, and scribeline requirements, the subsequent product layers are placed on masks with different grades. The actual reduction of this concept to practice allowed us to understand the tradeoffs between the automation of layer placement and setup-related constraints. For example, the limited options for the number of layers per plate, dictated by the die size and other design feedback, made us consider layer pairing based not only on the final price of the mask set, but also on the cost of mask design and fab-friendliness. We showed that it may be advantageous to introduce manual layer pairing to ensure that, e.g., all interconnect layers would be placed on the same plate, allowing for easy and simultaneous design fixes. Another enhancement was to allow some flexibility in mixing and matching of the layers such that non-critical ones requiring a low mask grade would be placed in a less restrictive way, to reduce the count of orphan layers. In summary, we created a program to automatically propose and visualize shuttle mask architecture for design verification, with enhancements due to the actual application of the code.
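
    The placement algorithm is only outlined above; the sketch below is a deliberately crude greedy variant of the first step (line layers up by required mask grade, then fill plates), with invented layer names and grades, intended to show where manual pairing adjustments would enter.

      # Illustrative only: line the layers up by mask grade/price and group consecutive
      # layers onto plates of the tightest (most expensive) grade in the group, as a
      # crude starting point that a designer can then adjust manually (e.g., to keep
      # all interconnect layers on the same plate).
      layers = [
          # (layer name, required mask grade: lower number = tighter spec = pricier)
          ("poly", 1), ("contact", 1), ("metal1", 2), ("via1", 2),
          ("metal2", 3), ("pad", 4), ("implant", 4),
      ]

      def pair_layers(layers, per_plate=2):
          ordered = sorted(layers, key=lambda item: item[1])          # tightest grade first
          plates = [ordered[i:i + per_plate] for i in range(0, len(ordered), per_plate)]
          return [{"grade": min(g for _, g in plate), "layers": [n for n, _ in plate]}
                  for plate in plates]

      for plate in pair_layers(layers):
          print(plate)   # the last plate may hold an "orphan" layer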

  4. Paediatric nurses' understanding of the process and procedure of double-checking medications.

    PubMed

    Dickinson, Annette; McCall, Elaine; Twomey, Bernadette; James, Natalie

    2010-03-01

    To understand paediatric nurses' understanding and practice regarding double-checking medication and identify facilitators and barriers to the process of independent double-checking (IDC). A system of double-checking medications has been proposed as a way of minimising medication error particularly in situations involving high-risk medications, complex processes such as calculating doses, or high-risk patient populations such as infants and children. While recommendations have been made in support of IDC in paediatric settings little is known about nursing practice and the facilitators and barriers to this process. A descriptive qualitative design was used. Data were collected via three focus group interviews. Six to seven paediatric nurses participated in homogenous groups based on level of practice. Data were analysed using thematic analysis. This study demonstrates that, while IDC is accepted and promoted as best practice in a paediatric setting, there is a lack of clarity as to what this means. This study supports other studies in relation to the influence of workload, distraction and environmental factors on the administration process but highlights the need for more research in relation to the impact of the power dynamic between junior and senior nurses. The issue of automaticity has been unexplored in relation to nursing practice but this study indicates that this may have an important influence on how care is delivered to patients. While the focus of this study was in the paediatric setting, the findings have relevance to other settings and population groups. The adoption of IDC in health care settings must have in place: policy and guidelines that clearly define the process of checking, educational support, an environment that supports peer critique and review, well-designed medication areas and accessible resources to support drug administration.

  5. Enhanced quality and quantity of retrieval of Critically Appraised Topics using the CAT Crawler.

    PubMed

    Dong, P; Mondry, A

    2004-03-01

    As healthcare moves towards the implementation of Evidence-Based Medicine (EBM), Critically Appraised Topics (CATs) become useful in helping physicians to make clinical decisions. A number of academic and healthcare organizations have set up web-based CAT libraries. The primary objective of the presented work is to provide a one-stop search and download site that allows access to multiple CAT libraries. A web-based application, namely the CAT Crawler, was developed to serve physicians with an adequate access to available appraised topics on the Internet. Important information is extracted automatically and regularly from CAT websites, and consolidated by checking the uniqueness and availability. The principle of meta-search is incorporated into the implementation of the search engine, which finds relevant topics following keyword input. The retrieved result directs the physician to the original resource page. A full-text article of a particular topic can be converted into a proper format for downloading to Personal Digital Assistant (PDA) devices. In summary, the application provides physicians with a common interface to retrieve relevant CATs on particular clinical topics from multiple resources, and thus speeds up the decision making process.

  6. Guideline validation in multiple trauma care through business process modeling.

    PubMed

    Stausberg, Jürgen; Bilir, Hüseyin; Waydhas, Christian; Ruchholtz, Steffen

    2003-07-01

    Clinical guidelines can improve the quality of care in multiple trauma. In our Department of Trauma Surgery a specific guideline is available paper-based as a set of flowcharts. This format is appropriate for the use by experienced physicians but insufficient for electronic support of learning, workflow and process optimization. A formal and logically consistent version represented with a standardized meta-model is necessary for automatic processing. In our project we transferred the paper-based into an electronic format and analyzed the structure with respect to formal errors. Several errors were detected in seven error categories. The errors were corrected to reach a formally and logically consistent process model. In a second step the clinical content of the guideline was revised interactively using a process-modeling tool. Our study reveals that guideline development should be assisted by process modeling tools, which check the content in comparison to a meta-model. The meta-model itself could support the domain experts in formulating their knowledge systematically. To assure sustainability of guideline development a representation independent of specific applications or specific provider is necessary. Then, clinical guidelines could be used for eLearning, process optimization and workflow management additionally.

  7. Plan-Do-Check-Act and the Management of Institutional Research. AIR 1992 Annual Forum Paper.

    ERIC Educational Resources Information Center

    McLaughlin, Gerald W.; Snyder, Julie K.

    This paper describes the application of a Total Quality Management strategy called Plan-Do-Check-Act (PDCA) to the projects and activities of an institutional research office at the Virginia Polytechnic Institute and State University. PDCA is a cycle designed to facilitate incremental continual improvement through change. The specific steps are…

  8. Agricultural Baseline (BL0) scenario

    DOE Data Explorer

    Davis, Maggie R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID: 0000-0001-8131-9328); Hellwinckel, Chad M [University of Tennessee] (ORCID: 0000-0001-7308-5058); Eaton, Laurence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID: 0000-0003-1270-9626); Turhollow, Anthony [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID: 0000-0002-2815-9350); Brandt, Craig [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID: 0000-0002-1470-7379); Langholtz, Matthew H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID: 0000-0002-8153-7154)

    2016-07-13

    Scientific reason for data generation: to serve as the reference case for the BT16 volume 1 agricultural scenarios. The agricultural baseline runs from 2015 through 2040; a starting year of 2014 is used. Date the data set was last modified: 02/12/2016. How each parameter was produced (methods), format, and relationship to other data in the data set: the simulation was developed without offering a farmgate price to energy crops or residues (i.e., building on both the USDA 2015 baseline and the agricultural census data (USDA NASS 2014)). Data generated are .txt output files by year, simulation identifier, and county code (1-3109). Instruments used: POLYSYS (version POLYS2015_V10_alt_JAN22B) supplied by the University of Tennessee APAC. The quality assurance and quality control that have been applied: checks for negative planted area, harvested area, production, yield and cost values; checks that harvested area does not exceed planted area for annuals; and checks of FIPS codes.
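
    The three QA/QC checks listed above map directly onto simple tabular tests. A minimal sketch in Python/pandas, assuming the county-level output has been loaded into a DataFrame; the column names (planted_area, harvested_area, production, yield, cost, fips) and the file name are hypothetical, not the actual POLYSYS output schema.

      # Hedged sketch of the three listed QA/QC checks.
      import pandas as pd

      def qa_checks(df: pd.DataFrame) -> dict:
          numeric = ["planted_area", "harvested_area", "production", "yield", "cost"]
          return {
              # 1. No negative area, production, yield or cost values.
              "negative_values": df[(df[numeric] < 0).any(axis=1)],
              # 2. Harvested area must not exceed planted area for annuals.
              "harvested_gt_planted": df[df["harvested_area"] > df["planted_area"]],
              # 3. County codes must fall in the simulated range 1-3109.
              "bad_fips": df[~df["fips"].between(1, 3109)],
          }

      df = pd.read_csv("baseline_2040.txt", sep="\t")   # hypothetical file name
      for name, rows in qa_checks(df).items():
          print(name, len(rows), "violations")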

  9. Simple colonoscopy reporting system checking the detection rate of colon polyps.

    PubMed

    Kim, Jae Hyun; Choi, Youn Jung; Kwon, Hye Jung; Park, Seun Ja; Park, Moo In; Moon, Won; Kim, Sung Eun

    2015-08-21

    To present a simple colonoscopy reporting system that allows the detection rate of colon polyps to be checked easily. A simple colonoscopy reporting system, Kosin Gastroenterology (the KG quality reporting system), was developed. The polyp detection rate (PDR), adenoma detection rate (ADR), serrated polyp detection rate (SDR), and advanced adenoma detection rate (AADR) are easily calculated using this system. In our gastroenterology center, the PDR, ADR, SDR, and AADR results for each gastroenterologist were updated every month. Between June 2014, when the program was started, and December 2014, the overall PDR and ADR in our center were 62.5% and 41.4%, respectively, and the overall SDR and AADR were 7.5% and 12.1%, respectively. We envision that the KG quality reporting system can be applied to develop a comprehensive system for checking colon polyp detection rates in other gastroenterology centers.
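
    All four indicators are simple ratios over each endoscopist's procedures. A minimal Python sketch of the calculation; the per-exam boolean fields are illustrative assumptions, not the KG system's actual schema.

      # Hedged sketch: detection rates from per-colonoscopy records.
      def detection_rates(records):
          """records: list of dicts, one per colonoscopy, with boolean flags."""
          n = len(records)
          rate = lambda flag: 100.0 * sum(r[flag] for r in records) / n if n else 0.0
          return {
              "PDR": rate("polyp"),              # any polyp found
              "ADR": rate("adenoma"),            # at least one adenoma
              "SDR": rate("serrated_polyp"),     # at least one serrated polyp
              "AADR": rate("advanced_adenoma"),  # at least one advanced adenoma
          }

      exams = [
          {"polyp": True, "adenoma": True, "serrated_polyp": False,
           "advanced_adenoma": False},
          {"polyp": False, "adenoma": False, "serrated_polyp": False,
           "advanced_adenoma": False},
      ]
      print(detection_rates(exams))   # PDR 50.0, ADR 50.0, SDR 0.0, AADR 0.0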

  10. The Modern Measurement Technology And Checking Of Shaft Parameters

    NASA Astrophysics Data System (ADS)

    Tichá, Šárka; Botek, Jan

    2015-12-01

    This paper focuses on rationalizing the checking of shaft parameters in companies engaged in the production of components for electric motors, wind turbines and vacuum systems. Customers are constantly increasing their requirements for the overall quality of the product, i.e. the quality of machining, dimensional and shape accuracy, and the overall purity of the supplied products. The aim of this paper is to introduce the use of modern measurement technology in inspecting these components and to compare the results with the existing control methodology. The main objective of this rationalization is to eliminate the mistakes and shortcomings of current inspection methods.

  11. Automatic detection and visualisation of MEG ripple oscillations in epilepsy.

    PubMed

    van Klink, Nicole; van Rosmalen, Frank; Nenonen, Jukka; Burnos, Sergey; Helle, Liisa; Taulu, Samu; Furlong, Paul Lawrence; Zijlmans, Maeike; Hillebrand, Arjan

    2017-01-01

    High frequency oscillations (HFOs, 80-500 Hz) in invasive EEG are a biomarker for the epileptic focus. Ripples (80-250 Hz) have also been identified in non-invasive MEG, yet detection is impeded by noise, their low occurrence rates, and the workload of visual analysis. We propose a method that identifies ripples in MEG through noise reduction, beamforming and automatic detection with minimal user effort. We analysed 15 min of presurgical resting-state interictal MEG data of 25 patients with epilepsy. The MEG signal-to-noise ratio was improved by using a cross-validation signal space separation method, and by calculating ~2400 beamformer-based virtual sensors in the grey matter. Ripples in these sensors were automatically detected by an algorithm optimized for MEG. A small subset of the identified ripples was visually checked. Ripple locations were compared with MEG spike dipole locations and the resection area if available. Running the automatic detection algorithm resulted in on average 905 ripples per patient, of which on average 148 ripples were visually reviewed. Reviewing took approximately 5 min per patient, and identified ripples in 16 out of 25 patients. In 14 patients the ripple locations showed good or moderate concordance with the MEG spikes. For six out of eight patients who had surgery, the ripple locations showed concordance with the resection area: 4/5 with good outcome and 2/3 with poor outcome. Automatic ripple detection in beamformer-based virtual sensors is a feasible non-invasive tool for the identification of ripples in MEG. Our method requires minimal user effort and is easily applicable in a clinical setting.
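
    The study's detector is optimized for MEG and is not reproduced here, but the core operation, flagging transient high-frequency events in a single virtual-sensor trace, can be sketched as band-pass filtering followed by an envelope threshold, a common HFO detection scheme. All parameters below are illustrative assumptions, not the paper's settings.

      # Hedged sketch of a generic ripple detector: band-pass 80-250 Hz,
      # Hilbert envelope, threshold at a multiple of the envelope SD.
      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert

      def detect_ripples(x, fs, lo=80.0, hi=250.0, k=3.0, min_dur=0.02):
          sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
          env = np.abs(hilbert(sosfiltfilt(sos, x)))
          above = env > env.mean() + k * env.std()
          # Group supra-threshold samples into events; keep those >= min_dur.
          events, start = [], None
          for i, flag in enumerate(above):
              if flag and start is None:
                  start = i
              elif not flag and start is not None:
                  if (i - start) / fs >= min_dur:
                      events.append((start / fs, i / fs))
                  start = None
          return events   # list of (onset_s, offset_s)

      fs = 1000.0
      t = np.arange(0, 10, 1 / fs)
      x = np.random.randn(t.size)
      x[5000:5050] += 5 * np.sin(2 * np.pi * 120 * t[5000:5050])  # injected ripple
      print(detect_ripples(x, fs))   # should report an event near 5.0 s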

  12. PHASEGO: A toolkit for automatic calculation and plot of phase diagram

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-Li

    2015-06-01

    The PHASEGO package extracts the Helmholtz free energy from the phonon density of states obtained by first-principles calculations. With the help of equation-of-state fitting, it derives the Gibbs free energy as a function of pressure/temperature at fixed temperature/pressure. Based on the quasi-harmonic approximation (QHA), it calculates the possible phase boundaries among all the structures of interest and finally plots the phase diagram automatically. For single-phase analysis, PHASEGO can numerically derive many properties, such as the thermal expansion coefficients, the bulk moduli, the heat capacities, the thermal pressures, the Hugoniot pressure-volume-temperature relations, the Grüneisen parameters, and the Debye temperatures. In order to check its ability to analyze phase transitions, I present here two examples: semiconductor GaN and metallic Fe. In the case of GaN, PHASEGO automatically determined and plotted the phase boundaries among the provided zinc blende (ZB), wurtzite (WZ) and rocksalt (RS) structures. In the case of Fe, the results indicate that at high temperature the electronic thermal excitation free energy corrections considerably alter the phase boundaries among the body-centered cubic (bcc), face-centered cubic (fcc) and hexagonal close-packed (hcp) structures.
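
    Under the QHA, the phonon contribution to the Helmholtz free energy follows directly from the phonon density of states g(ω): F_ph(T) = k_B T ∫ g(ω) ln[2 sinh(ħω / 2k_B T)] dω. A minimal numerical sketch of this one step (the package itself also performs the EOS fitting and phase-boundary construction); the toy Debye-like DOS is an assumption for illustration.

      # Hedged sketch: phonon Helmholtz free energy from a phonon DOS under
      # the quasi-harmonic approximation. omega in rad/s; g(omega) normalized
      # so that its integral equals 3N (here N = 1 atom).
      import numpy as np

      HBAR = 1.054571817e-34  # J*s
      KB = 1.380649e-23       # J/K

      def phonon_free_energy(omega, g, T):
          """F_ph(T) = kB*T * integral g(w)*ln[2*sinh(hbar*w/(2*kB*T))] dw."""
          integrand = g * np.log(2.0 * np.sinh(HBAR * omega / (2.0 * KB * T)))
          return KB * T * np.sum(0.5 * (integrand[1:] + integrand[:-1])
                                 * np.diff(omega))   # trapezoidal rule

      omega_D = 5.0e13                        # toy Debye cutoff
      omega = np.linspace(1e10, omega_D, 2000)
      g = 9.0 * omega**2 / omega_D**3         # Debye-like DOS, integrates to 3
      for T in (100.0, 300.0, 1000.0):
          print(T, "K:", phonon_free_energy(omega, g, T), "J/atom")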

  13. Evidence of emotion-antecedent appraisal checks in electroencephalography and facial electromyography.

    PubMed

    Coutinho, Eduardo; Gentsch, Kornelia; van Peer, Jacobien; Scherer, Klaus R; Schuller, Björn W

    2018-01-01

    In the present study, we applied Machine Learning (ML) methods to identify psychobiological markers of cognitive processes involved in the process of emotion elicitation as postulated by the Component Process Model (CPM). In particular, we focused on the automatic detection of five appraisal checks (novelty, intrinsic pleasantness, goal conduciveness, control, and power) in electroencephalography (EEG) and facial electromyography (EMG) signals. We also evaluated the effects on classification accuracy of averaging the raw physiological signals over different numbers of trials, and whether minimal sets of EEG channels localized over specific scalp regions of interest are sufficient to discriminate between appraisal checks. We demonstrated the effectiveness of our approach on two data sets obtained from previous studies. Our results show that novelty and power appraisal checks can be consistently detected in EEG signals above chance level (binary tasks). For novelty, the best classification performance in terms of accuracy was achieved using features extracted from the whole scalp, and by averaging across 20 individual trials in the same experimental condition (UAR = 83.5 ± 4.2; N = 25). For power, the best performance was obtained by using the signals from four pre-selected EEG channels averaged across all trials available for each participant (UAR = 70.6 ± 5.3; N = 24). Together, our results indicate that accurate classification can be achieved with a relatively small number of trials and channels, but that averaging across a larger number of individual trials is beneficial for the classification of both appraisal checks. We were not able to detect any evidence of the appraisal checks under study in the EMG data. The proposed methodology is a promising tool for the study of the psychophysiological mechanisms underlying emotional episodes, and their application to the development of computerized tools (e.g., Brain-Computer Interface) for the study of cognitive processes involved in emotions.
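
    The UAR (unweighted average recall) reported above is simply the macro-average of the per-class recalls, which makes it robust to class imbalance. A minimal sketch, assuming scikit-learn is available; the labels are toy values.

      # Hedged sketch: UAR = unweighted (macro) average of per-class recalls.
      from sklearn.metrics import recall_score

      y_true = [0, 0, 0, 0, 1, 1]   # e.g. binary appraisal-check conditions
      y_pred = [0, 0, 1, 0, 1, 0]
      uar = recall_score(y_true, y_pred, average="macro")
      print(f"UAR = {100 * uar:.1f}%")   # (3/4 + 1/2) / 2 = 62.5%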

  14. Estimating the quality of pasturage in the municipality of Paragominas (PA) by means of automatic analysis of LANDSAT data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Dossantos, A. P.; Novo, E. M. L. D.; Duarte, V.

    1981-01-01

    The use of LANDSAT data to evaluate pasture quality in the Amazon region is demonstrated. Pasture degradation in deforested areas of a traditional tropical forest cattle-raising region was estimated. Automatic analysis using interactive multispectral analysis (IMAGE-100) shows that 24% of the deforested areas were occupied by natural vegetation regrowth, 24% by exposed soil, 15% by degraded pastures, and 46% was suitable grazing land.

  15. Data Quality Control for Vessel Mounted Acoustic Doppler Current Profiler. Application for the Western Mediterranean Sea

    NASA Technical Reports Server (NTRS)

    Garcia-Gorriz, E.; Front, J.; Candela, J.

    1997-01-01

    A systematic data quality checking protocol for vessel-mounted Acoustic Doppler Current Profiler observations is proposed. Conditions prior to acquisition are considered along with conditions at the time of acquisition.

  16. Evaluation of non-destructive technologies for construction quality control of HMA and PCC pavements in Louisiana : [research project capsule].

    DOT National Transportation Integrated Search

    2009-07-01

    Current roadway quality control and quality acceptance (QC/QA) procedures for Louisiana include coring for thickness, density, and air void checks in hot mix asphalt (HMA) pavements and thickness and compressive strength for Portland cement con...

  17. Visplause: Visual Data Quality Assessment of Many Time Series Using Plausibility Checks.

    PubMed

    Arbesser, Clemens; Spechtenhauser, Florian; Muhlbacher, Thomas; Piringer, Harald

    2017-01-01

    Trends like decentralized energy production lead to an exploding number of time series from sensors and other sources that need to be assessed regarding their data quality (DQ). While the identification of DQ problems for such routinely collected data is typically based on existing automated plausibility checks, an efficient inspection and validation of check results for hundreds or thousands of time series is challenging. The main contribution of this paper is the validated design of Visplause, a system to support an efficient inspection of DQ problems for many time series. The key idea of Visplause is to utilize meta-information concerning the semantics of both the time series and the plausibility checks for structuring and summarizing results of DQ checks in a flexible way. Linked views enable users to inspect anomalies in detail and to generate hypotheses about possible causes. The design of Visplause was guided by goals derived from a comprehensive task analysis with domain experts in the energy sector. We reflect on the design process by discussing design decisions at four stages and we identify lessons learned. We also report feedback from domain experts after using Visplause for a period of one month. This feedback suggests significant efficiency gains for DQ assessment, increased confidence in the DQ, and the applicability of Visplause to summarize indicators also outside the context of DQ.

  18. Formal Verification Toolkit for Requirements and Early Design Stages

    NASA Technical Reports Server (NTRS)

    Badger, Julia M.; Miller, Sheena Judson

    2011-01-01

    Efficient flight software development from natural language requirements needs an effective way to test designs earlier in the software design cycle. A method to automatically derive logical safety constraints and the design state space from natural language requirements is described. The constraints can then be checked using a logical consistency checker and also be used in a symbolic model checker to verify the early design of the system. This method was used to verify a hybrid control design for the suit ports on NASA Johnson Space Center's Space Exploration Vehicle against safety requirements.

  19. Users manual for AUTOMESH-2D: A program of automatic mesh generation for two-dimensional scattering analysis by the finite element method

    NASA Technical Reports Server (NTRS)

    Hua, Chongyu; Volakis, John L.

    1990-01-01

    AUTOMESH-2D is a computer program specifically designed as a preprocessor for the scattering analysis of two-dimensional bodies by the finite element method. This program was developed due to a need for reducing the effort required to define and check the geometry data, element topology, and material properties. There are six modules in the program: (1) Parameter Specification; (2) Data Input; (3) Node Generation; (4) Element Generation; (5) Mesh Smoothing; and (6) Data File Generation.

  20. Automatic sequencing and control of Space Station airlock operations

    NASA Technical Reports Server (NTRS)

    Himel, Victor; Abeles, Fred J.; Auman, James; Tqi, Terry O.

    1989-01-01

    Procedures that have been developed as part of the NASA JSC-sponsored pre-prototype Checkout, Servicing and Maintenance (COSM) program for pre- and post-EVA airlock operations are described. This paper addresses the accompanying pressure changes in the airlock and in the Advanced Extravehicular Mobility Unit (EMU). Additionally, the paper focuses on the components that are checked out, and includes the step-by-step sequences to be followed by the crew, the required screen displays and prompts that accompany each step, and a description of the automated processes that occur.

  1. An Automated Web Diary System for TeleHomeCare Patient Monitoring

    PubMed Central

    Ganzinger, Matthias; Demiris, George; Finkelstein, Stanley M.; Speedie, Stuart; Lundgren, Jan Marie

    2001-01-01

    The TeleHomeCare project monitors home care patients via the Internet. Each patient has a personalized homepage with an electronic diary for collecting the monitoring data with HTML forms. The web pages are generated dynamically using PHP. All data are stored in a MySQL database. Data are checked immediately by the system; if a value exceeds a predefined limit, an alarm message is generated and sent automatically to the patient's case manager. Weekly graphical reports (PDF format) are also generated and sent by email to the same destination.
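
    The alarm logic described, comparing each submitted value against a predefined limit and notifying the case manager on violation, is a one-screen rule. The original system used PHP/MySQL; the following Python sketch of the same check is hypothetical in its field names, limits and mail setup (a local MTA is assumed).

      # Hedged sketch of the diary limit check; field names and limits are
      # illustrative only, and a local SMTP server is assumed.
      import smtplib
      from email.message import EmailMessage

      LIMITS = {"weight_kg": (40.0, 120.0), "spo2_pct": (92.0, 100.0)}

      def check_entry(patient_id, entry, case_manager_email):
          for field, value in entry.items():
              lo, hi = LIMITS.get(field, (float("-inf"), float("inf")))
              if not lo <= value <= hi:
                  msg = EmailMessage()
                  msg["Subject"] = f"ALERT patient {patient_id}: {field}={value}"
                  msg["From"] = "diary@example.org"
                  msg["To"] = case_manager_email
                  msg.set_content(f"{field} outside [{lo}, {hi}].")
                  with smtplib.SMTP("localhost") as s:   # assumed local MTA
                      s.send_message(msg)

      check_entry("P017", {"weight_kg": 131.2}, "case.manager@example.org")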

  2. Development of advanced avionics systems applicable to terminal-configured vehicles

    NASA Technical Reports Server (NTRS)

    Heimbold, R. L.; Lee, H. P.; Leffler, M. F.

    1980-01-01

    A technique to add a time constraint to the automatic descent feature of the existing L-1011 aircraft Flight Management System (FMS) was developed. Software modifications were incorporated in the FMS computer program and the results were checked by lab simulation and on a series of eleven test flights. An arrival time dispersion (2 sigma) of 19 seconds was achieved. The 4-D descent technique can be integrated with the time-based metering method of air traffic control. Substantial reductions in delays at today's busy airports should result.

  3. Using Psychometric Technology in Educational Assessment: The Case of a Schema-Based Isomorphic Approach to the Automatic Generation of Quantitative Reasoning Items

    ERIC Educational Resources Information Center

    Arendasy, Martin; Sommer, Markus

    2007-01-01

    This article deals with the investigation of the psychometric quality and construct validity of algebra word problems generated by means of a schema-based version of the automatic min-max approach. Based on a review of the research literature on algebra word problem solving and automatic item generation, this new approach is introduced as a…

  4. Automatic Control of Silicon Melt Level

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Stickel, W. B.

    1982-01-01

    A new circuit, when combined with a melt-replenishment system and a melt-level sensor, offers continuous closed-loop automatic control of the melt level during web growth. Installed on a silicon-web furnace, the circuit controls the melt level to within 0.1 mm for as long as 8 hours. The circuit affords a greater area growth rate and higher web quality; automatic melt-level control also allows semiautomatic growth of web over long periods, which can greatly reduce costs.

  5. Topological Relations-Based Detection of Spatial Inconsistency in GLOBELAND30

    NASA Astrophysics Data System (ADS)

    Kang, S.; Chen, J.; Peng, S.

    2017-09-01

    Land cover is one of the fundamental data sets for environmental assessment, land management, biodiversity protection, etc. Hence, data quality control of land cover is extremely critical for geospatial analysis and decision making. Because some land cover types have similar remote-sensing reflectance, omission and commission errors in the preliminary classification can result in spatial inconsistency between land cover types. In post-classification, this error checking has mainly depended on manual labour to assure data quality, which is time-consuming and labour-intensive, so a method for automatic detection in post-classification remains an open issue. From a logical-consistency point of view, an inconsistency detection method is designed. The method consists of a grids extended 4-intersection model (GE4IM) for topological representation in single-valued space, by which three kinds of topological relations (disjoint, touch, and contain/contained-by) are described, and an algorithm of region overlay for the computation of spatial inconsistency. The rules are derived from universal laws in nature between water body and wetland, and between cultivated land and artificial surface. In an experiment conducted in Shandong Linqu County, data inconsistency could be pointed out within 6 minutes through calculation of topological inconsistency between cultivated land and artificial surface, and between water body and wetland. The efficiency of the presented algorithm is demonstrated against Google Earth images. Through comparative analysis, the algorithm proves promising for inconsistency detection in land cover data.
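
    For raster land cover, the simplest of these rules, two mutually exclusive classes must be disjoint, reduces to a mask intersection. A hedged numpy sketch; the two-layer setup and class codes are illustrative, not GlobeLand30's actual data format.

      # Hedged sketch: flag cells where mutually exclusive land cover classes
      # overlap (e.g. cultivated land vs. artificial surface).
      import numpy as np

      def disjointness_violations(layer_a, layer_b, code_a, code_b):
          """Return row/col indices where both classes claim the same cell."""
          overlap = (layer_a == code_a) & (layer_b == code_b)
          return np.argwhere(overlap)

      cultivated = np.array([[10, 10], [0, 10]])
      artificial = np.array([[0, 80], [80, 10]])
      print(disjointness_violations(cultivated, artificial, 10, 80))
      # -> [[0 1]]: one cell labelled both cultivated land and artificial surface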

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guo Qiang; Luo, Lingyun; Ogbuji, Chime

    The interaction of multiple types of relationships among anatomical classes in the Foundational Model of Anatomy (FMA) can provide inferred information valuable for quality assurance. This paper introduces a method called Motif Checking (MOCH) to study the effects of such multi-relation type interactions. MOCH represents patterns of multitype interaction as small labeled sub-graph motifs, whose nodes represent class variables, and labeled edges represent relational types. By representing FMA as an RDF graph and motifs as SPARQL queries, fragments of FMA are automatically obtained as auditing candidates. Leveraging the scalability and reconfigurability of Semantic Web technology (OWL, RDF and SPARQL) and Virtuoso, we performed exhaustive analyses of three 2-node motifs, resulting in 638 matching FMA configurations, and twelve 3-node motifs, resulting in 202,960 configurations. Using the Principal Ideal Explorer (PIE) methodology as an extension of MOCH, we were able to identify 755 root nodes with 4,100 respective descendants with opposing antonyms in their class names for arbitrary-length motifs. With possible disjointness implied by antonyms, we performed manual inspection of a subset of the resulting FMA fragments and tracked down a source of abnormal inferred conclusions (captured by the motifs), coming from a gender-neutral class being modeled as a part of a gender-specific class, such as “Urinary system” is a part of “Female human body.” Our results demonstrate that MOCH and PIE provide a unique source of valuable information for quality assurance. Since our approach is general, it is applicable to any ontological system with an OWL representation.
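
    With FMA serialized as RDF, a 2-node motif is literally a two-variable SPARQL graph pattern. A hedged sketch using rdflib; the file name, namespace and predicate IRI are placeholders, not FMA's actual vocabulary.

      # Hedged sketch of motif matching as a SPARQL query over an RDF graph.
      # Predicate and namespace are placeholders; FMA's real IRIs differ.
      from rdflib import Graph

      g = Graph()
      g.parse("fma.owl", format="xml")   # hypothetical local copy

      # 2-node motif: ?x is a part of ?y while ?y is also a part of ?x --
      # a mutual-parthood pattern worth auditing.
      MOTIF = """
      PREFIX fma: <http://example.org/fma#>
      SELECT ?x ?y WHERE {
          ?x fma:part_of ?y .
          ?y fma:part_of ?x .
      }
      """
      for x, y in g.query(MOTIF):
          print("audit candidate:", x, y)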

  7. Current status of verification practices in clinical biochemistry in Spain.

    PubMed

    Gómez-Rioja, Rubén; Alvarez, Virtudes; Ventura, Montserrat; Alsina, M Jesús; Barba, Núria; Cortés, Mariano; Llopis, María Antonia; Martínez, Cecilia; Ibarz, Mercè

    2013-09-01

    Verification uses logical algorithms to detect potential errors before laboratory results are released to the clinician. Even though verification is one of the main processes in all laboratories, there is a lack of standardization, mainly in the algorithms used and in the criteria and verification limits applied. A survey of clinical laboratories in Spain was conducted in order to assess the verification process, particularly the use of autoverification. Questionnaires were sent to the laboratories involved in the External Quality Assurance Program organized by the Spanish Society of Clinical Biochemistry and Molecular Pathology. Seven common biochemical parameters were included (glucose, cholesterol, triglycerides, creatinine, potassium, calcium, and alanine aminotransferase). Completed questionnaires were received from 85 laboratories. Nearly all the laboratories reported using the following seven verification criteria: internal quality control, instrument warnings, sample deterioration, reference limits, clinical data, concordance between parameters, and verification of results. The use of all verification criteria varied according to the type of verification (automatic, technical, or medical). Verification limits for these parameters are similar to biological reference ranges. The delta check was used in 24% of laboratories. Most laboratories (64%) reported using autoverification systems. Autoverification use was related to laboratory size, ownership, and type of laboratory information system, but the amount of use (percentage of tests autoverified) was not related to laboratory size. A total of 36% of Spanish laboratories do not use autoverification, despite the general implementation of laboratory information systems, most of which have autoverification capability. Criteria and rules for seven routine biochemical tests were obtained.
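
    Among the criteria listed, the delta check is the most algorithmic: compare a patient's new result with the previous one and hold the result for manual review if the change exceeds a limit. A minimal Python sketch; the analytes and limits are illustrative, not those used by the surveyed laboratories.

      # Hedged sketch of a delta check autoverification rule.
      DELTA_LIMITS = {"potassium": 1.0, "creatinine": 26.5, "glucose": 3.0}

      def delta_check(analyte, new_value, previous_value):
          """True: the result may autoverify. False: hold for manual review."""
          limit = DELTA_LIMITS.get(analyte)
          if limit is None or previous_value is None:
              return True               # no rule or no history: pass through
          return abs(new_value - previous_value) <= limit

      print(delta_check("potassium", 5.9, 4.1))   # False -> manual review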

  8. Improving the automated optimization of profile extrusion dies by applying appropriate optimization areas and strategies

    NASA Astrophysics Data System (ADS)

    Hopmann, Ch.; Windeck, C.; Kurth, K.; Behr, M.; Siegbert, R.; Elgeti, S.

    2014-05-01

    The rheological design of profile extrusion dies is one of the most challenging tasks in die design. As no analytical solution is available, the quality and the development time for a new design highly depend on the empirical knowledge of the die manufacturer. Usually, prior to starting production, several time-consuming, iterative running-in trials need to be performed to check the profile accuracy, and the die geometry is reworked accordingly. An alternative is numerical flow simulation. Such simulations make it possible to calculate the melt flow through a die so that the quality of the flow distribution can be analyzed. The objective of a current research project is to improve the automated optimization of profile extrusion dies. Special emphasis is put on choosing a convenient starting geometry and parameterization, which allow for possible deformations. In this work, three commonly used design features are examined with regard to their influence on the optimization results. Based on the results, a strategy is derived to select the most relevant areas of the flow channels for the optimization. For these characteristic areas, recommendations are given concerning an efficient parameterization setup that still enables adequate deformations of the flow channel geometry. As an example, this approach is applied to an L-shaped profile with different wall thicknesses. The die is optimized automatically and the simulation results are qualitatively compared with experimental results. Furthermore, the strategy is applied to a complex extrusion die for a floor skirting profile to demonstrate its general applicability.

  9. Effects of developer depletion on image quality of Kodak Insight and Ektaspeed Plus films.

    PubMed

    Casanova, M S; Casanova, M L S; Haiter-Neto, F

    2004-03-01

    To evaluate the effect of processing solution depletion on the image quality of F-speed dental X-ray film (Insight), compared with Ektaspeed Plus. The films were exposed with a phantom and developed under manual and automatic conditions, in fresh and progressively depleted solutions. The comparison was based on densitometric analysis and subjective appraisal. Processing solution depletion behaved differently depending on whether the manual or automatic technique was used, and the two films were affected to different degrees by depleted processing solutions. Developer depletion was faster under automatic than manual conditions. Insight film was more resistant than Ektaspeed Plus to the effects of processing solution depletion. In the present study there was agreement between the objective and subjective appraisals.

  10. Threshold automatic selection hybrid phase unwrapping algorithm for digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Zhou, Meiling; Min, Junwei; Yao, Baoli; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan

    2015-01-01

    Conventional quality-guided (QG) phase unwrapping algorithms are hard to apply to digital holographic microscopy because of their long execution time. In this paper, we present a threshold automatic selection hybrid phase unwrapping algorithm that combines the existing QG algorithm with the flood-fill (FF) algorithm to solve this problem. The original wrapped phase map is divided into high- and low-quality sub-maps by selecting a threshold automatically, and the FF and QG unwrapping algorithms are then used to unwrap the phase in the respective sub-maps. The feasibility of the proposed method is demonstrated by experimental results, and the execution speed is shown to be much faster than that of the original QG unwrapping algorithm.
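
    The pivotal step is the automatic split of the wrapped phase map into high- and low-quality regions. One hedged way to sketch it, not necessarily the authors' exact quality metric: score quality by local phase-gradient variance and pick the threshold with Otsu's method.

      # Hedged sketch: automatic quality threshold for a wrapped phase map.
      # The quality metric (inverse local gradient variance) and Otsu
      # thresholding are common choices, assumed here for illustration.
      import numpy as np
      from scipy.ndimage import generic_filter
      from skimage.filters import threshold_otsu

      def quality_split(wrapped):
          # Wrapped phase gradients, mapped back into (-pi, pi].
          dy = np.angle(np.exp(1j * np.diff(wrapped, axis=0, append=wrapped[-1:])))
          dx = np.angle(np.exp(1j * np.diff(wrapped, axis=1, append=wrapped[:, -1:])))
          var = generic_filter(dx**2 + dy**2, np.mean, size=3)
          quality = -var                 # low gradient variance = high quality
          thr = threshold_otsu(quality)  # automatic threshold selection
          return quality >= thr          # True: QG region, False: FF region

      phase = np.random.uniform(-np.pi, np.pi, (64, 64))
      print(quality_split(phase).mean())   # fraction of high-quality pixels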

  11. SU-E-T-68: A Quality Assurance System with a Web Camera for High Dose Rate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ueda, Y; Hirose, A; Oohira, S

    Purpose: The purpose of this work was to develop a quality assurance (QA) system for high dose rate (HDR) brachytherapy to verify the absolute position of an 192Ir source in real time and to measure the dwell time and position of the source simultaneously from a movie recorded by a web camera. Methods: A web camera was fixed 15 cm above a source position check ruler to monitor and record 30 samples of the source position per second over a range of 8.0 cm, from 1425 mm to 1505 mm. Each frame in the movie had a matrix size of 480×640. The source position was automatically quantified from the movie using in-house software (built with LabVIEW) that applied a template-matching technique. The source edge detected by the software on each frame was corrected to reduce position errors induced by incident light from an oblique direction. The dwell time was calculated by differential processing of the displacement of the source. The performance of this QA system was illustrated by recording simple plans and comparing the measured dwell positions and times with the planned parameters. Results: This QA system allowed verification of the absolute position of the source in real time. The mean difference between automatic and manual detection of the source edge was 0.04 ± 0.04 mm. The absolute position error can be determined with an accuracy of 1.0 mm at dwell points of 1430, 1440, 1450, 1460, 1470, 1480, 1490, and 1500 mm, in three step sizes, and dwell time errors with an accuracy of 0.1% for more than 10.0 s of planned time. The mean step size error was 0.1 ± 0.1 mm for a step size of 10.0 mm. Conclusion: This QA system provides quick verification of dwell position and time, with high accuracy, for HDR brachytherapy. This work was supported by the Japan Society for the Promotion of Science Core-to-Core program (No. 23003).
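
    The position measurement reduces to template matching on each video frame, and the dwell time to counting consecutive frames at an unchanged position. A hedged OpenCV sketch; the 0.5-mm stillness tolerance and the match-score threshold are assumptions, and the pixel-to-mm calibration and oblique-light edge correction of the actual system are omitted.

      # Hedged sketch of camera-based source tracking via template matching.
      import cv2

      def track_source(video_path, template_path, mm_per_px, fps=30.0):
          template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
          cap = cv2.VideoCapture(video_path)
          positions = []
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
              _, score, _, loc = cv2.minMaxLoc(res)   # best-match location
              if score < 0.5:
                  continue                            # no confident match
              positions.append(loc[0] * mm_per_px)    # horizontal position, mm
          cap.release()
          # Dwell time: length of runs where the position stays within 0.5 mm.
          dwells, start = [], 0
          for i in range(1, len(positions) + 1):
              if i == len(positions) or abs(positions[i] - positions[start]) > 0.5:
                  dwells.append((positions[start], (i - start) / fps))
                  start = i
          return dwells   # list of (position_mm, dwell_seconds)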

  12. 46 CFR 160.132-9 - Preapproval review.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... inventory of materials; (iii) The method for checking quality of fabrication and joints, including welding... §§ 160.132-19 and 160.132-21 of this subpart; (5) A description of the quality control procedures and... between the independent laboratory and Commandant under 46 CFR subpart 159.010. (d) Plan quality. All...

  13. 46 CFR 160.132-9 - Preapproval review.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... inventory of materials; (iii) The method for checking quality of fabrication and joints, including welding... §§ 160.132-19 and 160.132-21 of this subpart; (5) A description of the quality control procedures and... between the independent laboratory and Commandant under 46 CFR subpart 159.010. (d) Plan quality. All...

  14. 46 CFR 160.132-9 - Preapproval review.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... inventory of materials; (iii) The method for checking quality of fabrication and joints, including welding... §§ 160.132-19 and 160.132-21 of this subpart; (5) A description of the quality control procedures and... between the independent laboratory and Commandant under 46 CFR subpart 159.010. (d) Plan quality. All...

  15. 40 CFR 60.2735 - Is there a minimum amount of monitoring data I must obtain?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... activities including, as applicable, calibration checks and required zero and span adjustments. A monitoring... monitoring system quality assurance or control activities in calculations used to report emissions or...-control periods, and required monitoring system quality assurance or quality control activities including...

  16. 40 CFR 60.2735 - Is there a minimum amount of monitoring data I must obtain?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... activities including, as applicable, calibration checks and required zero and span adjustments. A monitoring... monitoring system quality assurance or control activities in calculations used to report emissions or...-control periods, and required monitoring system quality assurance or quality control activities including...

  17. A School-Based Quality Improvement Program.

    ERIC Educational Resources Information Center

    Rappaport, Lewis A.

    1993-01-01

    As one Brooklyn high school discovered, quality improvement begins with administrator commitment and participants' immersion in the literature. Other key elements include ongoing training of personnel involved in the quality-improvement process, tools such as the Deming Cycle (plan-do-check-act), voluntary and goal-oriented teamwork, and a worthy…

  18. Assessing Educational Processes Using Total-Quality-Management Measurement Tools.

    ERIC Educational Resources Information Center

    Macchia, Peter, Jr.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) assessment tools in educational settings highlights and gives examples of fishbone diagrams, or cause and effect charts; Pareto diagrams; control charts; histograms and check sheets; scatter diagrams; and flowcharts. Variation and quality are discussed in terms of continuous process…

  19. Automatic NEPHIS Coding of Descriptive Titles for Permuted Index Generation.

    ERIC Educational Resources Information Center

    Craven, Timothy C.

    1982-01-01

    Describes a system for the automatic coding of most descriptive titles which generates Nested Phrase Indexing System (NEPHIS) input strings of sufficient quality for permuted index production. A series of examples and an 11-item reference list accompany the text. (JL)

  20. Analysis of results obtained using the automatic chemical control of the quality of the water heat carrier in the drum boiler of the Ivanovo CHP-3 power plant

    NASA Astrophysics Data System (ADS)

    Larin, A. B.; Kolegov, A. V.

    2012-10-01

    Results of industrial tests of the new method used for the automatic chemical control of the quality of boiler water of the drum-type power boiler (Pd = 13.8 MPa) are described. The possibility of using an H-cationite column for measuring the electric conductivity of an H-cationized sample of boiler water over a long period of time is shown.

  1. Computer vision system: a tool for evaluating the quality of wheat in a grain tank

    NASA Astrophysics Data System (ADS)

    Minkin, Uryi Igorevish; Panchenko, Aleksei Vladimirovich; Shkanaev, Aleksandr Yurievich; Konovalenko, Ivan Andreevich; Putintsev, Dmitry Nikolaevich; Sadekov, Rinat Nailevish

    2018-04-01

    The paper describes a technology for automating the evaluation of grain quality in the grain tank of a combine harvester. A special recognition algorithm analyzes photographic images taken by the camera and provides automatic estimates of the total mass fraction of broken grains and of the presence of non-grain material. The paper also presents the operating details of the tank prototype and assesses the accuracy of the algorithms designed.

  2. Detection technology research on the one-way clutch of automatic brake adjuster

    NASA Astrophysics Data System (ADS)

    Jiang, Wensong; Luo, Zai; Lu, Yi

    2013-10-01

    In this article, we provide a new testing method to evaluate the acceptable quality of the one-way clutch of an automatic brake adjuster. To analyze the suitable adjusting brake moment that keeps the automatic brake adjuster from failing, we build a mechanical model of the one-way clutch according to its structure and working principle. The ranges of the adjusting brake moment, both clockwise and anti-clockwise, can be calculated from this mechanical model, and the critical moments are taken as the ideal values of the adjusting brake moment for evaluating the acceptable quality of the one-way clutch of the automatic brake adjuster. We calculate the ideal values of the critical moment for different one-way clutch structures from the mechanical model before the adjusting brake moment test begins. In addition, an experimental apparatus with a measurement uncertainty of ±0.1 N·m was specially designed to test the adjusting brake moment both clockwise and anti-clockwise. We can then judge the acceptable quality of the one-way clutch of the automatic brake adjuster by comparing the test results with the ideal values instead of with the empirical values (EXP). In fact, the evaluation standard for the adjusting brake moment currently applied in Chinese projects still uses the EXP provided by the manufacturer, but this becomes unavailable when the material of the one-way clutch changes. Five kinds of automatic brake adjusters were used in a verification experiment to verify the accuracy of the test method. The experimental results show that the experimental values of the adjusting brake moment, both clockwise and anti-clockwise, are within the ranges of the theoretical results. The testing method provided in this article fully meets the requirements of the manufacturer's standard.

  3. Open access tools for quality-assured and efficient data entry in a large, state-wide tobacco survey in India.

    PubMed

    Shewade, Hemant Deepak; Vidhubala, E; Subramani, Divyaraj Prabhakar; Lal, Pranay; Bhatt, Neelam; Sundaramoorthi, C; Singh, Rana J; Kumar, Ajay M V

    2017-01-01

    A large state-wide tobacco survey was conducted using a modified version of the pretested, globally validated Global Adult Tobacco Survey (GATS) questionnaire in 2015-2016 in Tamil Nadu, India. Due to resource constraints, data collection was carried out using paper-based questionnaires (unlike GATS-India, 2009-2010, which used hand-held computer devices), while data entry was done using open access tools. The objective of this paper is to describe the process of data entry and assess its quality assurance and efficiency. In EpiData terminology, a variable is referred to as a 'field' and a questionnaire (a set of fields) as a 'record'. EpiData software was used for double data entry with adequate checks, followed by validation. TeamViewer was used for remote training and troubleshooting. The EpiData databases (one for each district and each zone of Chennai city) were housed in shared Dropbox folders, which enabled secure sharing of files and automatic back-up. Each district/zone database had separate files for data entry of the household-level and individual-level questionnaires. The 32,945 surveyed households comprised 111,363 individuals aged ≥15 years. The average proportion of records with data entry errors for a district/zone was 4% in the household-level files and 24% in the individual-level files; these are errors that would have gone unnoticed if single entry had been used. The median (inter-quartile range) time taken for double data entry was 30 (24, 40) s for a single household-level questionnaire and 86 (64, 126) s for an individual-level questionnaire. Efficient and quality-assured near-real-time data entry in a large sub-national tobacco survey was achieved through innovative, resource-efficient use of open access tools.
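
    Double entry catches keying errors by comparing two independent entries of the same form, the validation step EpiData performs natively. A hedged Python sketch of the comparison itself; the CSV layout, record identifier and field names are assumptions, and both files are assumed to contain the same records.

      # Hedged sketch of double-entry validation: list fields that differ
      # between two independent keyings of the same questionnaires.
      import pandas as pd

      def compare_entries(file1, file2, key="record_id"):
          a = pd.read_csv(file1).set_index(key).sort_index()
          b = pd.read_csv(file2).set_index(key).sort_index()
          mismatches = []
          for col in a.columns:
              diff = a[col] != b[col]
              for rid in a.index[diff]:
                  mismatches.append((rid, col, a.at[rid, col], b.at[rid, col]))
          return mismatches   # (record, field, first keying, second keying)

      for rid, col, v1, v2 in compare_entries("entry1.csv", "entry2.csv"):
          print(f"record {rid}, {col}: '{v1}' vs '{v2}' -> check the paper form")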

  4. ROBO-AO M DWARF MULTIPLICITY SURVEY

    NASA Astrophysics Data System (ADS)

    Lamman, Claire; Berta-Thompson, Zachory; Baranec, Christoph; Law, Nicholas; Schonhut, Jessica

    2018-01-01

    We analyzed over 7,000 observations from Robo-AO’s field M dwarf survey taken on the 2.1 m Kitt Peak telescope. Results will help determine the multiplicity fraction of M dwarfs as a function of primary mass, which is a crucial step towards understanding their evolution and formation mechanisms. Through its robotic, laser-guided, and automated system, the Robo-AO instrument has yielded the largest adaptive-optics M dwarf survey to date. I developed a graphical user interface to quickly analyze these data. Initial data analysis included assessing data quality, checking the results from Robo-AO’s automatic reduction pipeline, and determining the existence and relative positions of companions through visual inspection. This program can be applied to other datasets and was successfully tested by re-analyzing observations from a separate Robo-AO survey. Following the preliminary results from this data analysis tool, further observations were made with the Keck II telescope, using its NIRC2 imager, to follow up ten selected targets and check the existence and physical association of companions. After a conservative initial cut for quality, 356 companions were found within 4” of a primary star out of 2,746 high-quality Robo-AO M dwarf observations, including four triple systems. We will present a preliminary estimate of the multiplicity rate of wide M dwarf companions after accounting for observational limitations and the completeness of our search. Future research will yield insights into low-mass stellar formation and provide a database of nearby M dwarf multiples that will potentially assist ongoing and future surveys for planets around these stars, such as the NASA TESS mission.

  5. PET-Tool: a software suite for comprehensive processing and managing of Paired-End diTag (PET) sequence data.

    PubMed

    Chiu, Kuo Ping; Wong, Chee-Hong; Chen, Qiongyu; Ariyaratne, Pramila; Ooi, Hong Sain; Wei, Chia-Lin; Sung, Wing-Kin Ken; Ruan, Yijun

    2006-08-25

    We recently developed the Paired End diTag (PET) strategy for efficient characterization of mammalian transcriptomes and genomes. The paired end nature of short PET sequences derived from long DNA fragments raised a new set of bioinformatics challenges, including how to extract PETs from raw sequence reads, and correctly yet efficiently map PETs to reference genome sequences. To accommodate and streamline data analysis of the large volume of PET sequences generated from each PET experiment, an automated PET data process pipeline is desirable. We designed an integrated computation program package, PET-Tool, to automatically process PET sequences and map them to the genome sequences. The Tool was implemented as a web-based application composed of four modules: the Extractor module for PET extraction; the Examiner module for analytic evaluation of PET sequence quality; the Mapper module for locating PET sequences in the genome sequences; and the Project Manager module for data organization. The performance of PET-Tool was evaluated through the analyses of 2.7 million PET sequences. It was demonstrated that PET-Tool is accurate and efficient in extracting PET sequences and removing artifacts from large-volume datasets. Using optimized mapping criteria, over 70% of quality PET sequences were mapped specifically to the genome sequences. With a 2.4 GHz Linux machine, it takes approximately six hours to process one million PETs from extraction to mapping. The speed, accuracy, and comprehensiveness have proved that PET-Tool is an important and useful component in PET experiments, and can be extended to accommodate other related analyses of paired-end sequences. The Tool also provides user-friendly functions for data quality checks and a system for multi-layer data management.

  6. Evaluation of a system for automatic detection of diabetic retinopathy from color fundus photographs in a large population of patients with diabetes.

    PubMed

    Abràmoff, Michael D; Niemeijer, Meindert; Suttorp-Schulten, Maria S A; Viergever, Max A; Russell, Stephen R; van Ginneken, Bram

    2008-02-01

    To evaluate the performance of a system for automated detection of diabetic retinopathy in digital retinal photographs, built from published algorithms, in a large, representative screening population. We conducted a retrospective analysis of 10,000 consecutive patient visits, specifically exams (four retinal photographs, two left and two right) from 5,692 unique patients from the EyeCheck diabetic retinopathy screening project, imaged with three types of cameras at 10 centers. Inclusion criteria included no previous diagnosis of diabetic retinopathy, no previous visit to an ophthalmologist for a dilated eye exam, and both eyes photographed. One of three retinal specialists evaluated each exam as unacceptable quality, no referable retinopathy, or referable retinopathy. We then selected exams with sufficient image quality and determined presence or absence of referable retinopathy. Outcome measures included the area under the receiver operating characteristic curve, the number needed to miss one case (NNM), and the type of false negative. Total area under the receiver operating characteristic curve was 0.84, and the NNM was 80 at a sensitivity of 0.84 and a specificity of 0.64. At this point, 7,689 of 10,000 exams had sufficient image quality; 4,648 of 7,689 (60%) were true negatives, 59 of 7,689 (0.8%) were false negatives, 319 of 7,689 (4%) were true positives, and 2,581 of 7,689 (33%) were false positives. Twenty-seven percent of false negatives contained large hemorrhages and/or neovascularizations. Automated detection of diabetic retinopathy using published algorithms cannot yet be recommended for clinical practice. However, performance is such that evaluation on validated, publicly available datasets should be pursued. If algorithms can be improved, such a system may in the future lead to improved prevention of blindness and vision loss in patients with diabetes.

  7. The Wolfgang and Amadeus Automatic Photoelectric Telescopes. A ``Kleine-Nacht-Musik'' during the first five years of routine operation

    NASA Astrophysics Data System (ADS)

    Granzer, T.; Reegen, P.; Strassmeier, K. G.

    2001-12-01

    We present a summary of five years of continuous operation of the University of Vienna twin Automatic Photoelectric Telescopes (APTs), Wolfgang and Amadeus. These two telescopes are part of the Fairborn Observatory facility located in the Sonoran desert close to Washington Camp in southern Arizona. Detecting and distinguishing between weather-induced and systematic data-quality loss turned out to be a crucial task; therefore, special emphasis is laid on the data quality monitoring tools developed throughout the years. Furthermore, we summarize the scientific highlights from the first five years of operation.

  8. Class Model Development Using Business Rules

    NASA Astrophysics Data System (ADS)

    Skersys, Tomas; Gudas, Saulius

    New developments in the area of computer-aided system engineering (CASE) greatly improve processes of the information systems development life cycle (ISDLC). Much effort is put into quality improvement issues, but IS development projects still suffer from the poor quality of models during the system analysis and design cycles. To some degree, the quality of models developed using CASE tools can be assured using various automated model comparison and syntax checking procedures. It is also reasonable to check these models against the business domain knowledge, but the domain knowledge stored in the repository of a CASE tool (enterprise model) is insufficient (Gudas et al. 2004). Involvement of business domain experts in these processes is complicated because non-IT people often find it difficult to understand models that were developed by IT professionals using some specific modeling language.

  9. Research progress of on-line automatic monitoring of chemical oxygen demand (COD) of water

    NASA Astrophysics Data System (ADS)

    Cai, Youfa; Fu, Xing; Gao, Xiaolu; Li, Lianyin

    2018-02-01

    With increasingly strict control of pollutant emission in China, on-line automatic monitoring of water quality is particularly urgent. The chemical oxygen demand (COD) is a comprehensive index of the contamination caused by organic matter, and it is therefore taken as an important index of energy saving and emission reduction in China’s “Twelve-Five” program. So far, COD on-line automatic monitoring instruments have played an important role in the field of sewage monitoring. This paper reviews the existing methods for on-line automatic monitoring of COD and, on this basis, points out the future trends of COD on-line automatic monitoring instruments.

  10. Assistive technology to help persons in a minimally conscious state develop responding and stimulation control: Performance assessment and social rating.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; D'Amico, Fiora; Buonocunto, Francesca; Navarro, Jorge; Lanzilotti, Crocifissa; Fiore, Piero; Megna, Marisa; Damiani, Sabino

    2015-01-01

    Post-coma persons in a minimally conscious state (MCS) and with extensive motor impairment and lack of speech tend to be passive and isolated. This study aimed to (a) further assess a technology-aided approach for fostering MCS participants' responding and stimulation control and (b) carry out a social validation check about the approach. Eight MCS participants were exposed to the aforementioned approach according to an ABAB design. The technology included optic, pressure or touch microswitches to monitor eyelid, hand or finger responses and a computer system that allowed those responses to produce brief periods of positive stimulation during the B (intervention) phases of the study. Eighty-four university psychology students and 42 care and health professionals were involved in the social validation check. The MCS participants showed clear increases in their response frequencies, thus producing increases in their levels of environmental stimulation input, during the B phases of the study. The students and care and health professionals involved in the social validation check rated the technology-aided approach more positively than a control condition in which stimulation was automatically presented to the participants. A technology-aided approach to foster responding and stimulation control in MCS persons may be effective and socially desirable.

  11. Data quality in a DRG-based information system.

    PubMed

    Colin, C; Ecochard, R; Delahaye, F; Landrivon, G; Messy, P; Morgon, E; Matillon, Y

    1994-09-01

    The aim of this study initiated in May 1990 was to evaluate the quality of the medical data collected from the main hospital of the "Hospices Civils de Lyon", Edouard Herriot Hospital. We studied a random sample of 593 discharge abstracts from 12 wards of the hospital. Quality control was performed by checking multi-hospitalized patients' personal data, checking that each discharge abstract was exhaustive, examining the quality of abstracting, studying diagnoses and medical procedures coding, and checking data entry. Assessment of personal data showed a 4.4% error rate. It was mainly accounted for by spelling mistakes in surnames and first names, and mistakes in dates of birth. The quality of a discharge abstract was estimated according to the two purposes of the medical information system: description of hospital morbidity per patient and Diagnosis Related Group's case mix. Error rates in discharge abstracts were expressed in two ways: an overall rate for errors of concordance between Discharge Abstracts and Medical Records, and a specific rate for errors modifying classification in Diagnosis Related Groups (DRG). For abstracting medical information, these error rates were 11.5% (SE +/- 2.2) and 7.5% (SE +/- 1.9) respectively. For coding diagnoses and procedures, they were 11.4% (SE +/- 1.5) and 1.3% (SE +/- 0.5) respectively. For data entry on the computerized data base, the error rate was 2% (SE +/- 0.5) and 0.2% (SE +/- 0.05). Quality control must be performed regularly because it demonstrates the degree of participation from health care teams and the coherence of the database.(ABSTRACT TRUNCATED AT 250 WORDS)

  12. SU-F-T-238: Analyzing the Performance of MapCHECK2 and Delta4 Quality Assurance Phantoms in IMRT and VMAT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, SH; Tsai, YC; Lan, HT

    2016-06-15

    Purpose: Intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) have been widely investigated for use in radiotherapy and found to have a highly conformal dose distribution. Delta4 is a novel cylindrical phantom consisting of 1069 p-type diodes, with true treatments measured in the 3D target volume. The goal of this study was to compare the performance of a Delta4 diode array for IMRT and VMAT planning with an ion chamber and MapCHECK2. Methods: Fifty-four IMRT (n=9) and VMAT (n=45) plans were imported into the Philips Pinnacle Planning System 9.2 for recalculation with a solid water phantom, MapCHECK2, and the Delta4 phantom. To evaluate the difference between the measured and calculated dose, we used MapCHECK2 and Delta4 for a dose-map comparison and an ion chamber (PTW 31010 Semiflex 0.125 cc) for a point-dose comparison. Results: All 54 plans met the criterion of <3% difference for the point dose (at least two points) by ion chamber. The mean difference was 0.784% with a standard deviation of 1.962%. With criteria of 3 mm/3% in a gamma analysis, the average passing rates were 96.86%±2.19% and 98.42%±1.97% for MapCHECK2 and Delta4, respectively. The Student t-test p-values for MapCHECK2/Delta4, ion chamber/Delta4, and ion chamber/MapCHECK2 were 0.0008, 0.2944, and 0.0002, respectively. There was no significant difference in passing rates between MapCHECK2 and Delta4 for the IMRT plans (p = 0.25). However, a higher pass rate was observed with Delta4 (98.36%) than with MapCHECK2 (96.64%, p < 0.0001) for the VMAT plans. Conclusion: The Pinnacle planning system can accurately calculate doses for VMAT and IMRT plans. The Delta4 shows results similar to the ion chamber and MapCHECK2, and is an efficient tool for patient-specific quality assurance, especially for rotational therapy.

  13. Automatic, semi-automatic and manual validation of urban drainage data.

    PubMed

    Branisavljević, N; Prodanović, D; Pavlović, D

    2010-01-01

    Advances in sensor technology and the possibility of automated long-distance data transmission have made continuous measurements the preferred way of monitoring urban drainage processes. Usually, the collected data have to be processed by an expert in order to detect and mark the wrong data, remove them and replace them with interpolated data. In general, the first step, detecting the wrong, anomalous data, is called data quality assessment or data validation. Data validation consists of three parts: data preparation, validation score generation and score interpretation. This paper presents the overall framework for a data quality improvement system suitable for automatic, semi-automatic or manual operation. The first two steps of the validation process are explained in more detail, using several validation methods on the same set of real-case data from the Belgrade sewer system. The final part of the validation process, the interpretation of the scores, needs to be investigated further on the developed system.
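
    The framework separates score generation from interpretation: each validation method emits a score per sample, and interpretation combines them. A hedged sketch with two common checks, a physical-range check and a rate-of-change (spike) check; the methods and thresholds are illustrative, not the paper's full method set.

      # Hedged sketch: per-sample validation scores for a flow time series.
      import numpy as np

      def range_score(x, lo, hi):
          """1.0 inside the physically plausible range, 0.0 outside."""
          return ((x >= lo) & (x <= hi)).astype(float)

      def spike_score(x, max_step):
          """Penalize jumps between consecutive samples beyond max_step."""
          step = np.abs(np.diff(x, prepend=x[0]))
          return (step <= max_step).astype(float)

      flow = np.array([0.21, 0.22, 0.20, 9.99, 0.23, 0.24])  # one obvious spike
      scores = np.vstack([range_score(flow, 0.0, 5.0), spike_score(flow, 1.0)])
      combined = scores.min(axis=0)   # strictest score wins
      print(combined)                 # samples around the spike score 0.0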

  14. Data Quality Control of the French Permanent Broadband Network in the RESIF Framework.

    NASA Astrophysics Data System (ADS)

    Grunberg, M.; Lambotte, S.; Engels, F.

    2014-12-01

    In the framework of the RESIF (Réseau Sismologique et géodésique Français) project, a new information system is being set up to improve the management and distribution of high-quality data from the different elements of RESIF. Within this information system, EOST (in Strasbourg) is in charge of collecting real-time permanent broadband seismic waveforms and performing quality control on these data. The real-time and validated data sets are pushed to the French national distribution center (Isterre/Grenoble) to make them publicly available. Furthermore, EOST hosts the BCSF-ReNaSS, in charge of the French metropolitan seismic bulletin, which allows it to benefit from high-end quality control based on the national and world-wide seismicity. Here we present the real-time seismic data flow from the stations of the French national broadband network to EOST, and then the data quality control procedures that were recently installed, including some new developments. The data quality control consists in applying a variety of processes to check the consistency of the whole system from the stations to the data center. This allows us to verify that instruments and data transmission are operating correctly. Moreover, time quality is critical for most scientific uses of the data. To face this challenge and to check the consistency of polarities and amplitudes, we deployed several high-end processes, including a noise correlation procedure to check timing accuracy (instrumental time errors result in a time-shift of the whole cross-correlation, clearly distinct from shifts due to changes in the medium's physical properties), and a systematic comparison of synthetic and real data for teleseismic earthquakes of magnitude larger than 6.5 to detect timing errors as well as polarity and amplitude problems.
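
    The timing check exploits the fact that an instrumental clock error shifts the entire noise cross-correlation function between two stations, whereas medium changes perturb only parts of it. A minimal numpy sketch of measuring that shift against a long-term reference correlation; the sampling rate and toy waveforms are assumptions.

      # Hedged sketch: estimate an instrumental time shift as the lag that
      # best aligns today's noise cross-correlation with a reference.
      import numpy as np

      def clock_shift(ccf_today, ccf_reference, dt):
          """Both inputs are cross-correlation functions on the same lag grid."""
          xc = np.correlate(ccf_today, ccf_reference, mode="full")
          lag = np.argmax(xc) - (len(ccf_reference) - 1)
          return lag * dt   # positive: today's CCF arrives late

      dt = 0.05                                # 20 Hz sampling
      lags = np.arange(-400, 401) * dt
      ref = np.exp(-((lags - 3.0) ** 2))       # toy reference CCF
      shifted = np.roll(ref, 12)               # simulate a 0.6 s clock error
      print(clock_shift(shifted, ref, dt))     # ~0.6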

  15. Standard Reference Specimens in Quality Control of Engineering Surfaces

    PubMed Central

    Song, J. F.; Vorburger, T. V.

    1991-01-01

    In the quality control of engineering surfaces, we aim to understand and maintain a good relationship between the manufacturing process and surface function. This is achieved by controlling the surface texture. The control process involves: 1) learning the functional parameters and their control values through controlled experiments or through a long history of production and use; 2) maintaining high accuracy and reproducibility with measurements not only of roughness calibration specimens but also of real engineering parts. In this paper, the characteristics, uses, and limitations of different classes of precision roughness calibration specimens are described. A measurement procedure for engineering surfaces, based on the NIST calibration procedure for roughness specimens, is proposed. This procedure involves the use of check specimens with waveform, wavelength, and other roughness parameters similar to those of functioning engineering surfaces. These check specimens would be certified under standardized reference measuring conditions, or by a reference instrument, and could be used for overall checking of the measuring procedure and for maintaining accuracy and agreement in engineering surface measurement. The concept of “surface texture design” is also suggested, which involves designing the engineering surface texture, the manufacturing process, and the quality control procedure to meet optimal functional needs. PMID:28184115

  16. [Evaluation of the quality of clinical practice guidelines published in the Annales de Biologie Clinique with the help of the EFLM checklist].

    PubMed

    Wils, Julien; Fonfrède, Michèle; Augereau, Christine; Watine, Joseph

    2014-01-01

    Several tools are available to help evaluate the quality of clinical practice guidelines (CPG). The AGREE instrument (Appraisal of guidelines for research & evaluation) is the most consensual tool, but it was designed to assess CPG methodology only. The European Federation of Laboratory Medicine (EFLM) recently designed a checklist dedicated to laboratory medicine, which is intended to be comprehensive and which therefore makes it possible to evaluate the quality of CPG in laboratory medicine more thoroughly. In the present work we test the comprehensiveness of this checklist on a sample of CPG written in French and published in the Annales de biologie clinique (ABC). We show that some work remains to be done before a truly comprehensive checklist is available. We also show that there is room for improvement in the CPG published in ABC; for example, some of these CPG do not provide any information about the allowed durations of transport and storage of biological samples before analysis, about minimal analytical performance standards, or about the sensitivities and specificities of the recommended tests.

  17. Statistical Quality Control of Moisture Data in GEOS DAS

    NASA Technical Reports Server (NTRS)

    Dee, D. P.; Rukhovets, L.; Todling, R.

    1999-01-01

    A new statistical quality control algorithm was recently implemented in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The final step in the algorithm consists of an adaptive buddy check that either accepts or rejects outlier observations based on a local statistical analysis of nearby data. A basic assumption in any such test is that the observed field is spatially coherent, in the sense that nearby data can be expected to confirm each other. However, the buddy check resulted in excessive rejection of moisture data, especially during the Northern Hemisphere summer. The analysis moisture variable in GEOS DAS is water vapor mixing ratio. Observational evidence shows that the distribution of mixing ratio errors is far from normal. Furthermore, spatial correlations among mixing ratio errors are highly anisotropic and difficult to identify. Both factors contribute to the poor performance of the statistical quality control algorithm. To alleviate the problem, we applied the buddy check to relative humidity data instead. This variable explicitly depends on temperature and therefore exhibits a much greater spatial coherence. As a result, reject rates of moisture data are much more reasonable and homogeneous in time and space.
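
    A minimal sketch of the buddy-check idea described above: each observation is compared with nearby observations and rejected if it departs too far from the local consensus. The neighbour radius, tolerance and the non-adaptive form are illustrative simplifications, not the GEOS DAS algorithm.

        import numpy as np

        def buddy_check(values, positions, radius, n_sigma=3.0, min_buddies=3):
            """Flag observations that disagree with their neighbours ('buddies').

            values    : 1D array of observed values (e.g. relative humidity)
            positions : (N, 2) array of observation coordinates
            radius    : search radius for buddies, in the same units as positions
            Returns a boolean array, True where the observation is rejected.
            """
            values = np.asarray(values, float)
            positions = np.asarray(positions, float)
            reject = np.zeros(len(values), dtype=bool)
            for i, (v, p) in enumerate(zip(values, positions)):
                dist = np.linalg.norm(positions - p, axis=1)
                buddies = (dist > 0) & (dist <= radius)
                if buddies.sum() < min_buddies:
                    continue  # too few buddies: leave the decision to other checks
                mean = values[buddies].mean()
                std = values[buddies].std(ddof=1)
                if std > 0 and abs(v - mean) > n_sigma * std:
                    reject[i] = True
            return reject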

  18. Driving photomask supplier quality through automation

    NASA Astrophysics Data System (ADS)

    Russell, Drew; Espenscheid, Andrew

    2007-10-01

    In 2005, Freescale Semiconductor's newly centralized mask data prep organization (MSO) initiated a project to develop an automated global quality validation system for photomasks delivered to Freescale Semiconductor fabs. The system handles Certificate of Conformance (CofC) quality metric collection, validation, reporting and an alert system for all photomasks shipped to Freescale fabs from all qualified global suppliers. The completed system automatically collects 30+ quality metrics for each photomask shipped. Other quality metrics are generated from the collected data and quality metric conformance is automatically validated to specifications or control limits with failure alerts emailed to fab photomask and mask data prep engineering. A quality data warehouse stores the data for future analysis, which is performed quarterly. The improved access to data provided by the system has improved Freescale engineers' ability to spot trends and opportunities for improvement with our suppliers' processes. This paper will review each phase of the project, current system capabilities and quality system benefits for both our photomask suppliers and Freescale.
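
    A minimal sketch of the kind of check such a system performs: validate each Certificate of Conformance metric against a specification or control limit and return the failures that would trigger an alert. The metric names and limits are illustrative, not Freescale's actual specifications.

        from dataclasses import dataclass

        @dataclass
        class Limit:
            lower: float
            upper: float

        # Illustrative specification limits only.
        SPEC_LIMITS = {
            "registration_error_nm": Limit(0.0, 12.0),
            "mean_cd_error_nm": Limit(-4.0, 4.0),
            "defect_count": Limit(0.0, 0.0),
        }

        def validate_cofc(metrics):
            """Return (metric, value, limit) tuples that fail validation."""
            failures = []
            for name, limit in SPEC_LIMITS.items():
                value = metrics.get(name)
                if value is None or not (limit.lower <= value <= limit.upper):
                    failures.append((name, value, limit))
            return failures

        # In a production system a non-empty failure list would trigger an email
        # alert to engineering while the record is written to the data warehouse.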

  19. What Information Does Your EHR Contain? Automatic Generation of a Clinical Metadata Warehouse (CMDW) to Support Identification and Data Access Within Distributed Clinical Research Networks.

    PubMed

    Bruland, Philipp; Doods, Justin; Storck, Michael; Dugas, Martin

    2017-01-01

    Data dictionaries provide structural meta-information about data definitions in health information technology (HIT) systems. In this regard, reusing healthcare data for secondary purposes offers several advantages (e.g. reduced documentation times or increased data quality). Prerequisites for data reuse are data quality, availability and identical meaning of the data. In diverse projects, research data warehouses serve as core components between heterogeneous clinical databases and various research applications. Given the complexity (high number of data elements) and dynamics (regular updates) of electronic health record (EHR) data structures, we propose a clinical metadata warehouse (CMDW) based on a metadata registry standard. Metadata of two large hospitals were automatically inserted into two CMDWs containing 16,230 forms and 310,519 data elements. Automatic updates of the metadata are possible, as are semantic annotations. A CMDW allows metadata discovery, data quality assessment and similarity analyses. Common data models for distributed research networks can be established based on similarity analyses.
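
    As a rough illustration of the similarity analysis a CMDW could support, the sketch below pairs data-element labels from two sites with a token-based Jaccard score; the labels, matching rule and threshold are my own illustration, not the paper's method.

        def jaccard(a, b):
            """Token-based Jaccard similarity between two data-element labels."""
            ta, tb = set(a.lower().split()), set(b.lower().split())
            return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

        def match_elements(site_a, site_b, threshold=0.6):
            """Pair each element label of site A with its best match in site B."""
            matches = []
            for label_a in site_a:
                best = max(site_b, key=lambda label_b: jaccard(label_a, label_b))
                score = jaccard(label_a, best)
                if score >= threshold:
                    matches.append((label_a, best, round(score, 2)))
            return matches

        # Candidate mappings for a common data model:
        print(match_elements(["systolic blood pressure mmHg"],
                             ["blood pressure systolic", "heart rate"]))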

  20. Feasibility of Extracting Key Elements from ClinicalTrials.gov to Support Clinicians’ Patient Care Decisions

    PubMed Central

    Kim, Heejun; Bian, Jiantao; Mostafa, Javed; Jonnalagadda, Siddhartha; Del Fiol, Guilherme

    2016-01-01

    Motivation: Clinicians need up-to-date evidence from high quality clinical trials to support clinical decisions. However, applying evidence from the primary literature requires significant effort. Objective: To examine the feasibility of automatically extracting key clinical trial information from ClinicalTrials.gov. Methods: We assessed the coverage of ClinicalTrials.gov for high quality clinical studies that are indexed in PubMed. Using 140 random ClinicalTrials.gov records, we developed and tested rules for the automatic extraction of key information. Results: The rate of high quality clinical trial registration in ClinicalTrials.gov increased from 0.2% in 2005 to 17% in 2015. Trials reporting results increased from 3% in 2005 to 19% in 2015. The accuracy of the automatic extraction algorithm for 10 trial attributes was 90% on average. Future research is needed to improve the algorithm accuracy and to design information displays to optimally present trial information to clinicians. PMID:28269867
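
    A hedged sketch of a rule-based extractor in the spirit of the study: it assumes trial records were saved in the legacy ClinicalTrials.gov XML format with elements such as brief_title and enrollment. The element names and the file name are illustrative assumptions; the real schema and the study's ten attributes may differ.

        import xml.etree.ElementTree as ET

        # Illustrative subset of trial attributes (the study extracted ten).
        FIELDS = ["brief_title", "overall_status", "enrollment", "phase", "study_type"]

        def extract_trial(xml_path):
            """Pull a few key attributes from one ClinicalTrials.gov XML record."""
            root = ET.parse(xml_path).getroot()
            record = {}
            for field in FIELDS:
                node = root.find(f".//{field}")
                record[field] = node.text.strip() if node is not None and node.text else None
            return record

        # record = extract_trial("NCT00000000.xml")  # hypothetical file name
        # print(record["brief_title"], record["enrollment"])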

  1. Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi

    2017-05-01

    Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
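
    The evaluation above relies on the Dice similarity coefficient between automatic and manual segmentations. Below is a minimal NumPy sketch of that metric plus a toy argmax label fusion; the fusion shown is a generic illustration of removing overlaps, not the paper's novel fusion algorithm.

        import numpy as np

        def dice_coefficient(seg_a, seg_b, label):
            """Dice similarity coefficient for one organ label in two label volumes."""
            a = (np.asarray(seg_a) == label)
            b = (np.asarray(seg_b) == label)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        def fuse_labels(prob_maps):
            """Toy label fusion: assign each voxel to the organ with the highest
            probability so that no voxel carries two labels (overlap) or none.
            prob_maps is a dict {label: probability volume}, all same shape.
            """
            labels = list(prob_maps)
            stacked = np.stack([prob_maps[l] for l in labels])  # (n_labels, z, y, x)
            winner = np.argmax(stacked, axis=0)
            return np.take(np.array(labels), winner)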

  2. Experimental Evaluation of a Planning Language Suitable for Formal Verification

    NASA Technical Reports Server (NTRS)

    Butler, Rick W.; Munoz, Cesar A.; Siminiceanu, Radu I.

    2008-01-01

    The marriage of model checking and planning faces two seemingly diverging alternatives: the need for a planning language expressive enough to capture the complexity of real-life applications, as opposed to a language simple, yet robust enough to be amenable to exhaustive verification and validation techniques. In an attempt to reconcile these differences, we have designed an abstract plan description language, ANMLite, inspired from the Action Notation Modeling Language (ANML) [17]. We present the basic concepts of the ANMLite language as well as an automatic translator from ANMLite to the model checker SAL (Symbolic Analysis Laboratory) [7]. We discuss various aspects of specifying a plan in terms of constraints and explore the implications of choosing a robust logic behind the specification of constraints, rather than simply propose a new planning language. Additionally, we provide an initial assessment of the efficiency of model checking to search for solutions of planning problems. To this end, we design a basic test benchmark and study the scalability of the generated SAL models in terms of plan complexity.

  3. An Envelope Based Feedback Control System for Earthquake Early Warning: Reality Check Algorithm

    NASA Astrophysics Data System (ADS)

    Heaton, T. H.; Karakus, G.; Beck, J. L.

    2016-12-01

    Earthquake early warning systems are, in general, designed as open-loop control systems, in the sense that the output, i.e., the warning messages, depends only on the input, i.e., the ground motions recorded up to the moment when the message is issued in real time. We propose an algorithm, called the Reality Check Algorithm (RCA), that assesses the accuracy of issued warning messages and feeds the outcome of the assessment back into the system, which can then modify its messages if necessary. That is, we propose to convert earthquake early warning systems into feedback control systems by integrating them with RCA. RCA works by continuously monitoring and comparing the envelopes of the observed ground motions to the envelopes predicted by the Virtual Seismologist (Cua 2005). The accuracy of the system's magnitude and location (both spatial and temporal) estimates is assessed separately by probabilistic classification models, which are trained with a sparse Bayesian learning technique that uses an Automatic Relevance Determination prior.
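
    A minimal sketch of the envelope comparison underlying such a check: compute the observed envelope with a Hilbert transform and measure its log misfit against a predicted envelope. This is a generic illustration, not the Virtual Seismologist parameterisation or the RCA classifiers.

        import numpy as np
        from scipy.signal import hilbert

        def envelope(waveform):
            """Amplitude envelope of a ground-motion record via the Hilbert transform."""
            return np.abs(hilbert(waveform))

        def envelope_misfit(observed, predicted, eps=1e-9):
            """Mean absolute log10 misfit between observed and predicted envelopes.

            Both arrays must share the same sampling; a persistently large misfit
            would indicate that the issued warning message needs revision.
            """
            obs = envelope(observed)
            return float(np.mean(np.abs(np.log10((obs + eps) / (predicted + eps)))))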

  4. Analyzing and Detecting Problems in Systems of Systems

    NASA Technical Reports Server (NTRS)

    Lindvall, Mikael; Ackermann, Christopher; Stratton, William C.; Sibol, Deane E.; Godfrey, Sally

    2008-01-01

    Many software systems are evolving into complex systems of systems (SoS) for which inter-system communication is mission-critical. Evidence indicates that transmission failures and performance issues are not uncommon. In a NASA-supported Software Assurance Research Program (SARP) project, we are researching a new approach to such problems. In this paper, we present an approach for analyzing inter-system communications with the goal of uncovering both transmission errors and performance problems. Our approach consists of a visualization component and an evaluation component. While the visualization of the observed communication aims to facilitate understanding, the evaluation component automatically checks the conformance of an observed communication (actual) to a desired one (planned). Both the actual and the planned communication are represented as sequence diagrams, and the evaluation algorithm checks the conformance of the actual to the planned diagram. We have applied our approach to the communication of aerospace systems and were successful in detecting and resolving even subtle, long-standing transmission problems.
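
    A minimal sketch of such a conformance check: verify that the planned (sender, receiver, message) events occur in the observed trace in the same order, and report missing and extra events. The message tuples and names are illustrative, not the project's actual notation.

        def check_conformance(planned, actual):
            """Check that planned events occur in the actual trace in order.

            Returns (conforms, missing, extra): missing lists planned events not
            consumed in order; extra lists observed events that were never planned.
            """
            remaining = list(planned)
            extra = []
            for event in actual:
                if remaining and event == remaining[0]:
                    remaining.pop(0)
                elif event not in remaining:
                    extra.append(event)  # unexpected transmission
            return (not remaining, remaining, extra)

        planned = [("GCS", "SAT", "CMD_ARM"), ("SAT", "GCS", "ACK")]
        actual = [("GCS", "SAT", "CMD_ARM"), ("SAT", "GCS", "NAK"), ("SAT", "GCS", "ACK")]
        print(check_conformance(planned, actual))  # (True, [], [('SAT', 'GCS', 'NAK')])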

  5. SU-F-T-165: Daily QA Analysis for Spot Scanning Beamline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poenisch, F; Gillin, M; Sahoo, N

    2016-06-15

    Purpose: The dosimetric results of our daily quality assurance over the last 8 years for discrete pencil beam scanning proton therapy are presented. Methods: To perform the dosimetric checks, a multi-ion chamber detector is used, consisting of an array of 5 parallel plate ion chambers aligned as a cross and separated by 10 cm. The Tracker is snapped into a jig, which is placed on the tabletop. Different amounts of Solid Water buildup are added to shift the dose distribution. The dosimetric checks consist of 3 parts: a position check, a range check and a volume dose check. Results: The average deviation of all position-check data was 0.2±1.3%. For the range check, the average deviation was 0.1%±1.2%, which corresponds to a range stability of better than 1 mm over all measurements. The volumetric dose output readings were all within ±1%, with the exception of 2 occasions when the cable to the dose monitor was being repaired. Conclusion: Morning QA using the Tracker device gives very stable dosimetric readings but is also sensitive to mechanical and output changes in the proton therapy delivery system.
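
    A minimal sketch of turning the five chamber readings into the deviation percentages reported above, against stored baseline values. The chamber names, baselines and tolerance are illustrative assumptions, not the clinic's actual values.

        # Illustrative baselines (nC) for the five chambers of the cross-shaped array.
        BASELINE = {"centre": 10.00, "left": 9.80, "right": 9.80, "up": 9.75, "down": 9.75}
        TOLERANCE_PCT = 3.0  # illustrative action level

        def percent_deviation(readings, baseline=BASELINE):
            """Percent deviation of each chamber reading from its baseline."""
            return {name: 100.0 * (readings[name] - ref) / ref
                    for name, ref in baseline.items()}

        def morning_qa(readings):
            dev = percent_deviation(readings)
            failures = {k: v for k, v in dev.items() if abs(v) > TOLERANCE_PCT}
            return dev, failures

        dev, failures = morning_qa({"centre": 10.02, "left": 9.78, "right": 9.83,
                                    "up": 9.74, "down": 9.70})
        print(failures)  # empty dict -> morning QA passes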

  6. 46 CFR 160.115-9 - Preapproval review.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... inventory of materials; (iii) The method for checking quality of fabrication and joints, including welding... §§ 160.115-19 and 160.115-21 of this subpart; (5) A description of the quality control procedures and... between the independent laboratory and Commandant under 46 CFR part 159, subpart 159.010. (d) Plan quality...

  7. 46 CFR 160.115-9 - Preapproval review.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... inventory of materials; (iii) The method for checking quality of fabrication and joints, including welding... §§ 160.115-19 and 160.115-21 of this subpart; (5) A description of the quality control procedures and... between the independent laboratory and Commandant under 46 CFR part 159, subpart 159.010. (d) Plan quality...

  8. 46 CFR 160.115-9 - Preapproval review.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... inventory of materials; (iii) The method for checking quality of fabrication and joints, including welding... §§ 160.115-19 and 160.115-21 of this subpart; (5) A description of the quality control procedures and... between the independent laboratory and Commandant under 46 CFR part 159, subpart 159.010. (d) Plan quality...

  9. Wheat Quality Council, Hard Spring Wheat Technical Committee, 2015 Crop

    USDA-ARS?s Scientific Manuscript database

    Nine experimental lines of hard spring wheat were grown at up to five locations in 2015 and evaluated for kernel, milling, and bread baking quality against the check variety Glenn. Wheat samples were submitted through the Wheat Quality Council and processed and milled at the USDA-ARS Hard Red Sprin...

  10. Wheat Quality Council, Hard Spring Wheat Technical Committee, 2017 Crop

    USDA-ARS?s Scientific Manuscript database

    Nine experimental lines of hard spring wheat were grown at up to six locations in 2017 and evaluated for kernel, milling, and bread baking quality against the check variety Glenn. Wheat samples were submitted through the Wheat Quality Council and processed and milled at the USDA-ARS Hard Red Spring...

  11. Wheat Quality Council, Hard Spring Wheat Technical Committee, 2014 Crop

    USDA-ARS?s Scientific Manuscript database

    Eleven experimental lines of hard spring wheat were grown at up to five locations in 2014 and evaluated for kernel, milling, and bread baking quality against the check variety Glenn. Wheat samples were submitted through the Wheat Quality Council and processed and milled at the USDA-ARS Hard Red Spr...

  12. Qualities of Early Childhood Teachers: Reflections from Teachers and Administrators.

    ERIC Educational Resources Information Center

    Weitman, Catheryn J.; Humphries, Janie H.

    Data were collected from elementary school principals and kindergarten teachers in Texas and Louisiana in an effort to identify qualities that are thought to be important for kindergarten teachers. A questionnaire listing 462 qualities of early childhood teachers was compiled from literature reviews. Subjects were asked to check a maximum of 50…

  13. 40 CFR 85.2233 - Steady state test equipment calibrations, adjustments, and quality control-EPA 91.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... tolerance range. The pressure in the sample cell must be the same with the calibration gas flowing during... this chapter. The check is done at 30 mph (48 kph), and a power absorption load setting to generate a... in § 85.2225(c)(1) are not met. (2) Leak checks. Each time the sample line integrity is broken, a...

  14. 40 CFR 85.2233 - Steady state test equipment calibrations, adjustments, and quality control-EPA 91.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... tolerance range. The pressure in the sample cell must be the same with the calibration gas flowing during... this chapter. The check is done at 30 mph (48 kph), and a power absorption load setting to generate a... in § 85.2225(c)(1) are not met. (2) Leak checks. Each time the sample line integrity is broken, a...

  15. 40 CFR 85.2233 - Steady state test equipment calibrations, adjustments, and quality control-EPA 91.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... tolerance range. The pressure in the sample cell must be the same with the calibration gas flowing during... this chapter. The check is done at 30 mph (48 kph), and a power absorption load setting to generate a... in § 85.2225(c)(1) are not met. (2) Leak checks. Each time the sample line integrity is broken, a...

  16. Automatic-Control System for Safer Brazing

    NASA Technical Reports Server (NTRS)

    Stein, J. A.; Vanasse, M. A.

    1986-01-01

    Automatic-control system for radio-frequency (RF) induction brazing of metal tubing reduces probability of operator errors, increases safety, and ensures high-quality brazed joints. Unit combines functions of gas control and electric-power control. Minimizes unnecessary flow of argon gas into work area and prevents electrical shocks from RF terminals. Controller will not allow power to flow from RF generator to brazing head unless work has been firmly attached to head and has actuated micro-switch. Potential shock hazard eliminated. Flow of argon for purging and cooling must be turned on and adjusted before brazing power applied. Provision ensures power not applied prematurely, causing damaged work or poor-quality joints. Controller automatically turns off argon flow at conclusion of brazing so potentially suffocating gas does not accumulate in confined areas.

  17. Image quality comparisons of X-Omat RP, L and B films.

    PubMed

    Van Dis, M L; Beck, F M

    1991-08-01

    The Eastman Kodak Company has recently developed a new film, X-Omat B (XB), designed to be interchangeable with X-Omat RP (XRP) film. The manufacturer claims the new film can be manually developed in half the time of other X-Omat films while automatic processing is unchanged. Three X-Omat film types were processed manually or automatically and the image qualities were evaluated. The XRP film had greater contrast than the XB and X-Omat L (XL) films when manually processed, and the XL film showed less contrast than the XB and XRP films when processed automatically. There was no difference in the subjective evaluation of the various film types and processing methods, and the XB film could be interchanged with XRP film in a simulated clinical situation.

  18. Automatically Grading Customer Confidence in a Formal Specification.

    ERIC Educational Resources Information Center

    Shukur, Zarina; Burke, Edmund; Foxley, Eric

    1999-01-01

    Describes an automatic grading system for a formal methods computer science course that is able to evaluate a formal specification written in the Z language. Quality is measured by considering first, specification correctness (syntax, semantics, and satisfaction of customer requirements), and second, specification maintainability (comparison of…

  19. Automated pharmaceutical tablet coating layer evaluation of optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Markl, Daniel; Hannesschläger, Günther; Sacher, Stephan; Leitner, Michael; Khinast, Johannes G.; Buchsbaum, Andreas

    2015-03-01

    Film coating of pharmaceutical tablets is often applied to influence the drug release behaviour. The coating characteristics such as thickness and uniformity are critical quality parameters, which need to be precisely controlled. Optical coherence tomography (OCT) shows high potential not only for off-line quality control of film-coated tablets but also for in-line monitoring of coating processes. However, an in-line quality control tool must be able to determine coating thickness measurements automatically and in real time. This study proposes an automatic thickness evaluation algorithm for bi-convex tablets, which provides about 1000 thickness measurements within 1 s. Besides the segmentation of the coating layer, optical distortions due to refraction of the beam at the air/coating interface are corrected. Moreover, during in-line monitoring the tablets might be in oblique orientation, which needs to be considered in the algorithm design. Experiments were conducted in which the tablet was rotated to specified angles. Manual and automatic thickness measurements were compared for varying coating thicknesses, angles of rotation, and beam displacements (i.e. lateral displacement between successive depth scans). The automatic thickness determination algorithm provides highly accurate results up to an angle of rotation of 30°. The computation time was reduced to 0.53 s for 700 thickness measurements by introducing feasibility constraints in the algorithm.
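
    A hedged sketch of the core conversion such an algorithm needs per depth scan, under a simplified model I am assuming here (not the paper's algorithm): the OCT axis reports optical path length, so the interface separation is divided by the coating's group refractive index, and oblique incidence is corrected via Snell's law. The refractive index and angles are illustrative.

        import numpy as np

        def coating_thickness(delta_z_optical_um, n_coating=1.5, incidence_deg=0.0):
            """Simplified conversion of the optical separation between the
            air/coating and coating/core interfaces into physical thickness.

            delta_z_optical_um : interface separation in air-equivalent depth units
            n_coating          : group refractive index of the film (illustrative)
            incidence_deg      : beam incidence angle to the local surface normal

            Model: the beam refracts at the surface (Snell's law) and the OCT
            axis measures optical path length, so d = delta_z * cos(theta_t) / n.
            """
            theta_i = np.deg2rad(incidence_deg)
            theta_t = np.arcsin(np.sin(theta_i) / n_coating)
            return delta_z_optical_um * np.cos(theta_t) / n_coating

        print(round(coating_thickness(150.0, n_coating=1.5, incidence_deg=20.0), 1))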

  20. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points by using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process firstly finds in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for a fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4, TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
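
    A hedged OpenCV sketch of the coarse pre-registration stage described above (SIFT matches plus a RANSAC-fitted affine model); the fine-scale Harris/TIN stage is omitted, and the file names are placeholders.

        import cv2
        import numpy as np

        def coarse_register(input_path, reference_path):
            """Coarsely align an input image to a reference image using SIFT
            matches and a RANSAC-estimated affine transform."""
            img = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE)
            ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)

            sift = cv2.SIFT_create()
            kp_img, des_img = sift.detectAndCompute(img, None)
            kp_ref, des_ref = sift.detectAndCompute(ref, None)

            # Lowe ratio test on 2-nearest-neighbour matches
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            raw = matcher.knnMatch(des_img, des_ref, k=2)
            good = [pair[0] for pair in raw
                    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

            src = np.float32([kp_img[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            affine, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)

            aligned = cv2.warpAffine(img, affine, (ref.shape[1], ref.shape[0]))
            return aligned, affine

        # aligned, model = coarse_register("input.tif", "reference.tif")  # placeholders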
