Garrido, Terhilda; Kumar, Sudheen; Lekas, John; Lindberg, Mark; Kadiyala, Dhanyaja; Whippy, Alan; Crawford, Barbara; Weissberg, Jed
2014-01-01
Using electronic health records (EHR) to automate publicly reported quality measures is receiving increasing attention and is one of the promises of EHR implementation. Kaiser Permanente has fully or partly automated six of 13 The Joint Commission measure sets. We describe our experience with automation and the resulting time savings: a reduction by approximately 50% of abstractor time required for one measure set alone (the Surgical Care Improvement Project). However, our experience illustrates the gap between the current and desired states of automated public quality reporting, which has important implications for measure developers, accrediting entities, EHR vendors, public/private payers, and government. PMID:23831833
USDA-ARS?s Scientific Manuscript database
Using next-generation-sequencing technology to assess entire transcriptomes requires high quality starting RNA. Currently, RNA quality is routinely judged using automated microfluidic gel electrophoresis platforms and associated algorithms. Here we report that such automated methods generate false-n...
Automated Formative Feedback and Summative Assessment Using Individualised Spreadsheet Assignments
ERIC Educational Resources Information Center
Blayney, Paul; Freeman, Mark
2004-01-01
This paper reports on the effects of automating formative feedback at the student's discretion and automating summative assessment with individualised spreadsheet assignments. Quality learning outcomes are achieved when students adopt deep approaches to learning (Ramsden, 2003). Learning environments designed to align assessment to learning…
Medical ADP Systems: Automated Medical Records Hold Promise to Improve Patient Care
1991-01-01
automated medical records. The report discusses the potential benefits that automation could make to the quality of patient care and the factors that impede...information systems, but no organization has fully automated one of the most critical types of information, patient medical records. The patient medical record...its review of automated medical records. GAO’s objectives in this study were to identify the (1) benefits of automating patient records and (2) factors
Schaefer, Peter
2011-07-01
The purpose of bioanalysis in the pharmaceutical industry is to provide 'raw' data about the concentration of a drug candidate and its metabolites as input for pharmacokinetic (PK), toxicokinetic, bioavailability/bioequivalence and other studies. Building a seamless workflow from the laboratory to final reports is an ongoing challenge for IT groups and users alike. In such a workflow, PK automation can provide companies with the means to vastly increase the productivity of their scientific staff while improving the quality and consistency of their reports on PK analyses. This report presents the concept and benefits of PK automation and discusses which features of an automated reporting workflow should be translated into software requirements that pharmaceutical companies can use to select or build an efficient and effective PK automation solution that best meets their needs.
AN ULTRAVIOLET-VISIBLE SPECTROPHOTOMETER AUTOMATION SYSTEM. PART III: PROGRAM DOCUMENTATION
The Ultraviolet-Visible Spectrophotometer (UVVIS) automation system accomplishes 'on-line' spectrophotometric quality assurance determinations, report generations, plot generations and data reduction for chlorophyll or color analysis. This system also has the capability to proces...
RAVEN Quality Assurance Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cogliati, Joshua Joseph
2015-09-01
This report discusses the quality assurance activities needed to raise the Quality Level of Risk Analysis in a Virtual Environment (RAVEN) from Quality Level 3 to Quality Level 2. This report also describes the general RAVEN quality assurance activities. For improving the quality, reviews of code changes have been instituted, more parts of testing have been automated, and improved packaging has been created. For upgrading the quality level, requirements have been created and the workflow has been improved.
Web Service for Positional Quality Assessment: the Wps Tier
NASA Astrophysics Data System (ADS)
Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.
2015-08-01
In the field of spatial data, more and more information becomes available every day, but we still have very little information about its quality. We consider the automation of spatial data quality assessment to be a true need for the geomatic sector, and that automation is possible by means of web processing services (WPS) and the application of specific assessment procedures. In this paper we propose and develop a WPS tier centered on the automation of positional quality assessment. An experiment using the NSSDA positional accuracy method is presented. The experiment involves the client uploading two datasets (reference and evaluation data). The processing determines homologous pairs of points (by distance) and calculates the positional accuracy value under the NSSDA standard. The process generates a small report that is sent to the client. From our experiment, we reached some conclusions on the advantages and disadvantages of WPSs when applied to the automation of spatial data accuracy assessments.
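As a rough illustration of the computation described above (not the authors' service code), the sketch below pairs evaluation points with reference points by nearest distance and applies the NSSDA horizontal accuracy statistic, 1.7308 × RMSE_r at 95% confidence, which assumes RMSE_x ≈ RMSE_y. The greedy pairing heuristic and the 5-unit matching threshold are assumptions.

```python
# Minimal sketch of the NSSDA horizontal accuracy assessment; the pairing
# rule and max_dist threshold are illustrative assumptions, not the paper's.
import math

def pair_by_distance(eval_pts, ref_pts, max_dist=5.0):
    """Greedily match each evaluation point to its nearest reference point."""
    pairs = []
    for ex, ey in eval_pts:
        best = min(ref_pts, key=lambda p: math.hypot(p[0] - ex, p[1] - ey))
        if math.hypot(best[0] - ex, best[1] - ey) <= max_dist:
            pairs.append(((ex, ey), best))
    return pairs

def nssda_horizontal(pairs):
    """NSSDA accuracy at 95% confidence: 1.7308 * RMSE_r."""
    sq = [(ex - rx) ** 2 + (ey - ry) ** 2 for (ex, ey), (rx, ry) in pairs]
    rmse_r = math.sqrt(sum(sq) / len(sq))
    return 1.7308 * rmse_r
```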
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, Bonnie; Boddy, Mark; Doyle, Frank
This report presents the results of an expert study to identify research opportunities for Sensors & Automation, a sub-program of the U.S. Department of Energy (DOE) Industrial Technologies Program (ITP). The research opportunities are prioritized by realizable energy savings. The study encompasses the technology areas of industrial controls, information processing, automation, and robotics. These areas have been central areas of focus of many Industries of the Future (IOF) technology roadmaps. This report identifies opportunities for energy savings as a direct result of advances in these areas and also recognizes indirect means of achieving energy savings, such as product quality improvement, productivity improvement, and reduction of recycle.
Toward Machine Understanding of Information Quality.
ERIC Educational Resources Information Center
Tang, Rong; Ng, K. B.; Strzalkowski, Tomek; Kantor, Paul B.
2003-01-01
Reports preliminary results of a study to develop and automate new metrics for assessment of information quality in text documents, particularly in news. Through focus group studies, quality judgment experiments, and textual feature extraction and analysis, nine quality aspects were generated and applied in human assessments. Experiments were…
Pahn, Gregor; Skornitzke, Stephan; Schlemmer, Hans-Peter; Kauczor, Hans-Ulrich; Stiller, Wolfram
2016-01-01
Based on the guidelines from "Report 87: Radiation Dose and Image-quality Assessment in Computed Tomography" of the International Commission on Radiation Units and Measurements (ICRU), a software framework for automated quantitative image quality analysis was developed and its usability for a variety of scientific questions demonstrated. The extendable framework currently implements the calculation of the recommended Fourier image quality (IQ) metrics modulation transfer function (MTF) and noise-power spectrum (NPS), and additional IQ quantities such as noise magnitude, CT number accuracy, uniformity across the field-of-view, contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) of simulated lesions for a commercially available cone-beam phantom. Sample image data were acquired with different scan and reconstruction settings on CT systems from different manufacturers. Spatial resolution is analyzed in terms of edge-spread function, line-spread-function, and MTF. 3D NPS is calculated according to ICRU Report 87, and condensed to 2D and radially averaged 1D representations. Noise magnitude, CT numbers, and uniformity of these quantities are assessed on large samples of ROIs. Low-contrast resolution (CNR, SNR) is quantitatively evaluated as a function of lesion contrast and diameter. Simultaneous automated processing of several image datasets allows for straightforward comparative assessment. The presented framework enables systematic, reproducible, automated and time-efficient quantitative IQ analysis. Consistent application of the ICRU guidelines facilitates standardization of quantitative assessment not only for routine quality assurance, but for a number of research questions, e.g. the comparison of different scanner models or acquisition protocols, and the evaluation of new technology or reconstruction methods. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
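The Fourier metrics named above follow standard definitions. Purely as a minimal sketch (not the authors' framework), the MTF can be derived from a sampled edge-spread function by differentiating to the line-spread function and taking the normalized Fourier magnitude; the Hann window used to damp noise in the LSF tails is an assumed implementation choice.

```python
import numpy as np

def mtf_from_esf(esf, pixel_spacing_mm):
    """Edge-spread function -> line-spread function -> MTF."""
    esf = np.asarray(esf, dtype=float)
    lsf = np.gradient(esf)                    # differentiate ESF to get LSF
    lsf *= np.hanning(lsf.size)               # damp noisy tails (assumed choice)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                             # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_spacing_mm)  # cycles per mm
    return freqs, mtf
```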
Laboratory Automation and Middleware.
Riben, Michael
2015-06-01
The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although not able to implement total laboratory automation, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory. Copyright © 2015 Elsevier Inc. All rights reserved.
Quality specification in haematology: the automated blood cell count.
Buttarello, Mauro
2004-08-02
Quality specifications for automated blood cell counts include topics that go beyond the traditional analytic stage (imprecision, inaccuracy, quality control) and extend to pre- and post-analytic phases. In this review, pre-analytic aspects concerning the choice of anticoagulants, maximum conservation times and differences between storage at room temperature or at 4 degrees C are considered. For the analytic phase, goals for imprecision and bias obtained with various approaches (ratio to biologic variation, state of the art, specific clinical situations) are evaluated. For the post-analytic phase, medical review criteria (algorithm, decision limit and delta check) and the structure of the report (general part and comments), which constitutes the formal act through which a laboratory communicates with clinicians, are considered. K2EDTA is considered the anticoagulant of choice for automated cell counts. Regarding storage, specimens should be analyzed as soon as possible. Storage at 4 degrees C may stabilize specimens from 24 to 72 h when complete blood count (CBC) and differential leucocyte count (DLC) are performed. For precision, analytical goals based on the state of the art are acceptable, while for bias this is satisfactory only for some parameters. In haematology, quality specifications for the pre-analytic and analytic phases are important, but the review criteria and the quality of the report play a central role in assuring a definite clinical value.
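One of the goal-setting approaches mentioned above, the ratio to biologic variation, is commonly formalized (following Fraser) as desirable imprecision CVa ≤ 0.5·CVI and desirable bias ≤ 0.25·√(CVI² + CVG²). A minimal sketch; the haemoglobin CVI/CVG values in the example are illustrative figures from published biologic-variation tables, not from this review.

```python
import math

def analytic_goals(cv_within, cv_between):
    """Desirable goals from biologic variation (Fraser):
    imprecision CVa <= 0.5 * CVI; bias <= 0.25 * sqrt(CVI^2 + CVG^2)."""
    return {
        "max_analytic_cv_pct": 0.5 * cv_within,
        "max_bias_pct": 0.25 * math.sqrt(cv_within ** 2 + cv_between ** 2),
    }

# Illustrative values for haemoglobin (CVI ~ 2.8%, CVG ~ 6.6%)
print(analytic_goals(2.8, 6.6))   # -> CVa <= 1.4%, bias <= ~1.8%
```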
Matheny, Michael E; Normand, Sharon-Lise T; Gross, Thomas P; Marinac-Dabic, Danica; Loyo-Berrios, Nilsa; Vidi, Venkatesan D; Donnelly, Sharon; Resnic, Frederic S
2011-12-14
Automated adverse outcome surveillance tools and methods have potential utility in quality improvement and medical product surveillance activities. Their use for assessing hospital performance on the basis of patient outcomes has received little attention. We compared risk-adjusted sequential probability ratio testing (RA-SPRT) implemented in an automated tool to Massachusetts public reports of 30-day mortality after isolated coronary artery bypass graft surgery. A total of 23,020 isolated adult coronary artery bypass surgery admissions performed in Massachusetts hospitals between January 1, 2002 and September 30, 2007 were retrospectively re-evaluated. The RA-SPRT method was implemented within an automated surveillance tool to identify hospital outliers in yearly increments. We used an overall type I error rate of 0.05, an overall type II error rate of 0.10, and a threshold that signaled if the odds of dying 30 days after surgery were at least twice those expected. Annual hospital outlier status, based on the state-reported classification, was considered the gold standard. An event was defined as at least one occurrence of a higher-than-expected hospital mortality rate during a given year. We examined a total of 83 hospital-year observations. The RA-SPRT method alerted 6 events among three hospitals for 30-day mortality compared with 5 events among two hospitals using the state public reports, yielding a sensitivity of 100% (5/5) and specificity of 98.8% (79/80). The automated RA-SPRT method performed well, detecting all of the true institutional outliers with a small false positive alerting rate. Such a system could provide confidential automated notification to local institutions in advance of public reporting, providing opportunities for earlier quality improvement interventions.
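A minimal sketch of the RA-SPRT logic described above: each patient's risk-adjusted expected mortality probability contributes a log-likelihood-ratio increment comparing the doubled-odds alternative against expectation, and an alert fires when the cumulative statistic crosses the Wald boundary ln((1−β)/α). The floor-at-lower-bound reset rule is an assumption; published RA-SPRT variants differ in how the statistic is reset after crossing the lower boundary.

```python
import math

def ra_sprt(outcomes, expected_probs, odds_ratio=2.0, alpha=0.05, beta=0.10):
    """Return (index of first alert or None, LLR trace) for one hospital's
    sequential case series; outcomes are 0/1, expected_probs risk-adjusted."""
    upper = math.log((1 - beta) / alpha)   # alert boundary
    lower = math.log(beta / (1 - alpha))   # acceptance boundary
    llr, trace = 0.0, []
    for i, (y, p0) in enumerate(zip(outcomes, expected_probs)):
        p1 = odds_ratio * p0 / (1 - p0 + odds_ratio * p0)  # prob under OR = 2
        llr += math.log(p1 / p0) if y else math.log((1 - p1) / (1 - p0))
        llr = max(llr, lower)              # assumed one-sided reset rule
        trace.append(llr)
        if llr >= upper:
            return i, trace
    return None, trace
```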
Itri, Jason N; Jones, Lisa P; Kim, Woojin; Boonn, William W; Kolansky, Ana S; Hilton, Susan; Zafar, Hanna M
2014-04-01
Monitoring complications and diagnostic yield for image-guided procedures is an important component of maintaining high quality patient care promoted by professional societies in radiology and accreditation organizations such as the American College of Radiology (ACR) and Joint Commission. These outcome metrics can be used as part of a comprehensive quality assurance/quality improvement program to reduce variation in clinical practice, provide opportunities to engage in practice quality improvement, and contribute to developing national benchmarks and standards. The purpose of this article is to describe the development and successful implementation of an automated web-based software application to monitor procedural outcomes for US- and CT-guided procedures in an academic radiology department. The open source tools PHP: Hypertext Preprocessor (PHP) and MySQL were used to extract relevant procedural information from the Radiology Information System (RIS), auto-populate the procedure log database, and develop a user interface that generates real-time reports of complication rates and diagnostic yield by site and by operator. Utilizing structured radiology report templates resulted in significantly improved accuracy of information auto-populated from radiology reports, as well as greater compliance with manual data entry. An automated web-based procedure log database is an effective tool to reliably track complication rates and diagnostic yield for US- and CT-guided procedures performed in a radiology department.
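The accuracy gain from structured report templates comes from making key fields machine-readable. Purely as a hedged illustration (the actual system used PHP/MySQL against the RIS, and these field labels and patterns are hypothetical), a template parser to auto-populate one procedure-log row might look like:

```python
import re

# Hypothetical structured-template fields; the real system's templates and
# RIS extraction logic are not described at this level in the abstract.
FIELDS = {
    "modality":     re.compile(r"GUIDANCE:\s*(US|CT)", re.I),
    "complication": re.compile(r"COMPLICATIONS?:\s*(none|[^\n]+)", re.I),
    "adequacy":     re.compile(r"SPECIMEN ADEQUACY:\s*(adequate|nondiagnostic)", re.I),
}

def parse_procedure_report(text):
    """Auto-populate one procedure-log row from a structured report."""
    row = {}
    for name, pattern in FIELDS.items():
        m = pattern.search(text)
        row[name] = m.group(1).strip().lower() if m else None
    return row
```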
QCloud: A cloud-based quality control system for mass spectrometry-based proteomics laboratories
Chiva, Cristina; Olivella, Roger; Borràs, Eva; Espadas, Guadalupe; Pastor, Olga; Solé, Amanda
2018-01-01
The increasing number of biomedical and translational applications in mass spectrometry-based proteomics poses new analytical challenges and raises the need for automated quality control systems. Despite previous efforts to set standard file formats, data processing workflows and key evaluation parameters for quality control, automated quality control systems are not yet widespread among proteomics laboratories, which limits the acquisition of high-quality results, inter-laboratory comparisons and the assessment of variability of instrumental platforms. Here we present QCloud, a cloud-based system to support proteomics laboratories in daily quality assessment using a user-friendly interface, easy setup, automated data processing and archiving, and unbiased instrument evaluation. QCloud supports the most common targeted and untargeted proteomics workflows, it accepts data formats from different vendors and it enables the annotation of acquired data and reporting incidences. A complete version of the QCloud system has successfully been developed and it is now open to the proteomics community (http://qcloud.crg.eu). QCloud system is an open source project, publicly available under a Creative Commons License Attribution-ShareAlike 4.0. PMID:29324744
Report #2005-1-00081, May 4, 2005. We identified the following reportable conditions: We could not assess the adequacy of automated controls. EPA needs to improve financial statement preparation and quality control.
Nonanalytic Laboratory Automation: A Quarter Century of Progress.
Hawker, Charles D
2017-06-01
Clinical laboratory automation has blossomed since the 1989 AACC meeting, at which Dr. Masahide Sasaki first showed a western audience what his laboratory had implemented. Many diagnostics and other vendors are now offering a variety of automated options for laboratories of all sizes. Replacing manual processing and handling procedures with automation was embraced by the laboratory community because of the obvious benefits of labor savings and improvement in turnaround time and quality. Automation was also embraced by the diagnostics vendors, who saw automation as a means of incorporating the analyzers purchased by their customers into larger systems in which the benefits of automation were integrated with the analyzers. This report reviews the options that are available to laboratory customers. These options include so-called task-targeted automation: modules that range from single-function devices that automate single tasks (e.g., decapping or aliquoting) to multifunction workstations that incorporate several of the functions of a laboratory sample processing department. The options also include total laboratory automation systems that use conveyors to link sample processing functions to analyzers and often include postanalytical features such as refrigerated storage and sample retrieval. Most importantly, this report reviews a recommended process for evaluating the need for new automation and for identifying the specific requirements of a laboratory and developing solutions that can meet those requirements. The report also discusses some of the practical considerations facing a laboratory in a new implementation and reviews the concept of machine vision to replace human inspections. © 2017 American Association for Clinical Chemistry.
Haas, Brian J; Salzberg, Steven L; Zhu, Wei; Pertea, Mihaela; Allen, Jonathan E; Orvis, Joshua; White, Owen; Buell, C Robin; Wortman, Jennifer R
2008-01-01
EVidenceModeler (EVM) is presented as an automated eukaryotic gene structure annotation tool that reports eukaryotic gene structures as a weighted consensus of all available evidence. EVM, when combined with the Program to Assemble Spliced Alignments (PASA), yields a comprehensive, configurable annotation system that predicts protein-coding genes and alternatively spliced isoforms. Our experiments on both rice and human genome sequences demonstrate that EVM produces automated gene structure annotation approaching the quality of manual curation. PMID:18190707
Gabard-Durnam, Laurel J.; Mendez Leal, Adriana S.; Wilkinson, Carol L.; Levin, April R.
2018-01-01
Electroencephalography (EEG) recordings collected with developmental populations present particular challenges from a data processing perspective. These EEGs have a high degree of artifact contamination and often short recording lengths. As both sample sizes and EEG channel densities increase, traditional processing approaches like manual data rejection are becoming unsustainable. Moreover, such subjective approaches preclude standardized metrics of data quality, despite the heightened importance of such measures for EEGs with high rates of initial artifact contamination. There is presently a paucity of automated resources for processing these EEG data and no consistent reporting of data quality measures. To address these challenges, we propose the Harvard Automated Processing Pipeline for EEG (HAPPE) as a standardized, automated pipeline compatible with EEG recordings of variable lengths and artifact contamination levels, including high-artifact and short EEG recordings from young children or those with neurodevelopmental disorders. HAPPE processes event-related and resting-state EEG data from raw files through a series of filtering, artifact rejection, and re-referencing steps to processed EEG suitable for time-frequency-domain analyses. HAPPE also includes a post-processing report of data quality metrics to facilitate the evaluation and reporting of data quality in a standardized manner. Here, we describe each processing step in HAPPE, perform an example analysis with EEG files we have made freely available, and show that HAPPE outperforms seven alternative, widely-used processing approaches. HAPPE removes more artifact than all alternative approaches while simultaneously preserving greater or equivalent amounts of EEG signal in almost all instances. We also provide distributions of HAPPE's data quality metrics in an 867-file dataset as a reference distribution and in support of HAPPE's performance across EEG data with variable artifact contamination and recording lengths. HAPPE software is freely available under the terms of the GNU General Public License at https://github.com/lcnhappe/happe. PMID:29535597
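HAPPE itself is implemented in MATLAB on EEGLAB. Purely as a hedged illustration of the filter → artifact rejection → re-reference chain described above, an analogous minimal pipeline in Python using the MNE library might look like the following; HAPPE's actual algorithms (e.g., wavelet-enhanced ICA) differ.

```python
import mne

def happe_like_clean(raw_fname):
    """Rough MNE analogue of a filter -> artifact rejection -> re-reference
    chain; not HAPPE itself. Assumes a .fif recording with EOG channels."""
    raw = mne.io.read_raw_fif(raw_fname, preload=True)
    raw.filter(l_freq=1.0, h_freq=100.0)               # band-pass filter
    ica = mne.preprocessing.ICA(n_components=20, random_state=0)
    ica.fit(raw)
    eog_idx, _ = ica.find_bads_eog(raw)                # flag ocular components
    ica.exclude = eog_idx
    ica.apply(raw)                                     # remove flagged components
    raw.set_eeg_reference("average")                   # average re-reference
    return raw
```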
Warttig, Sheryl; Alderson, Phil; Evans, David Jw; Lewis, Sharon R; Kourbeti, Irene S; Smith, Andrew F
2018-06-25
Sepsis is a life-threatening condition that is usually diagnosed when a patient has a suspected or documented infection, and meets two or more criteria for systemic inflammatory response syndrome (SIRS). The incidence of sepsis is higher among people admitted to critical care settings such as the intensive care unit (ICU) than among people in other settings. If left untreated, sepsis can quickly worsen; severe sepsis has a mortality rate of 40% or higher, depending on definition. Recognition of sepsis can be challenging as it usually requires patient data to be combined from multiple unconnected sources, and interpreted correctly, which can be complex and time-consuming to do. Electronic systems that are designed to connect information sources together, and automatically collate, analyse, and continuously monitor the information, as well as alerting healthcare staff when pre-determined diagnostic thresholds are met, may offer benefits by facilitating earlier recognition of sepsis and faster initiation of treatment, such as antimicrobial therapy, fluid resuscitation, inotropes, and vasopressors if appropriate. However, there is the possibility that electronic, automated systems do not offer benefits, or even cause harm. This might happen if the systems are unable to correctly detect sepsis (meaning that treatment is not started when it should be, or it is started when it shouldn't be), or healthcare staff may not respond to alerts quickly enough, or get 'alarm fatigue', especially if the alarms go off frequently or give too many false alarms. To evaluate whether automated systems for the early detection of sepsis can reduce the time to appropriate treatment (such as initiation of antibiotics, fluids, inotropes, and vasopressors) and improve clinical outcomes in critically ill patients in the ICU. We searched CENTRAL; MEDLINE; Embase; CINAHL; ISI Web of Science; and LILACS, clinicaltrials.gov, and the World Health Organization trials portal. We searched all databases from their date of inception to 18 September 2017, with no restriction on country or language of publication. We included randomized controlled trials (RCTs) that compared automated sepsis-monitoring systems to standard care (such as paper-based systems) in participants of any age admitted to intensive or critical care units for critical illness. We defined an automated system as any process capable of screening patient records or data (one or more systems) automatically at intervals for markers or characteristics that are indicative of sepsis. We defined critical illness as including, but not limited to, postsurgery, trauma, stroke, myocardial infarction, arrhythmia, burns, and hypovolaemic or haemorrhagic shock. We excluded non-randomized studies, quasi-randomized studies, and cross-over studies. We also excluded studies including people already diagnosed with sepsis. We used the standard methodological procedures expected by Cochrane. Our primary outcomes were: time to initiation of antimicrobial therapy; time to initiation of fluid resuscitation; and 30-day mortality. Secondary outcomes included: length of stay in ICU; failed detection of sepsis; and quality of life. We used GRADE to assess the quality of evidence for each outcome. We included three RCTs in this review. It was unclear if the RCTs were three separate studies involving 1199 participants in total, or if they were reports from the same study involving fewer participants.
We decided to treat the studies separately, as we were unable to make contact with the study authors to clarify. All three RCTs are of very low study quality because of issues with unclear randomization methods, allocation concealment and uncertainty of effect size. Some of the studies were reported as abstracts only and contained limited data, which prevented meaningful analysis and assessment of potential biases. The studies included participants who all received automated electronic monitoring during their hospital stay. Participants were randomized to an intervention group (automated alerts sent from the system) or to usual care (no automated alerts sent from the system). Evidence from all three studies reported 'Time to initiation of antimicrobial therapy'. We were unable to pool the data, but the largest study involving 680 participants reported a median time to initiation of antimicrobial therapy of 5.6 hours (interquartile range (IQR) 2.3 to 19.7) in the intervention group (n = not stated) and 7.8 hours (IQR 2.5 to 33.1) in the control group (n = not stated). No studies reported 'Time to initiation of fluid resuscitation' or the adverse event 'Mortality at 30 days'. However, very low-quality evidence was available where mortality was reported at other time points. One study involving 77 participants reported 14-day mortality of 20% in the intervention group and 21% in the control group (numerator and denominator not stated). One study involving 442 participants reported that mortality at 28 days or discharge was 14% in the intervention group and 10% in the control group (numerator and denominator not reported). Sample sizes were not reported adequately for these outcomes and so we could not estimate confidence intervals. Very low-quality evidence from one study involving 442 participants reported 'Length of stay in ICU'. Median length of stay was 3.0 days in the intervention group (IQR 2.0 to 5.0), and 3.0 days (IQR 2.0 to 4.0) in the control group. Very low-quality evidence from one study involving at least 442 participants reported the adverse effect 'Failed detection of sepsis'. Data were only reported for failed detection of sepsis in two participants and it wasn't clear which group(s) this outcome occurred in. No studies reported 'Quality of life'. It is unclear what effect automated systems for monitoring sepsis have on any of the outcomes included in this review. Very low-quality evidence is only available on automated alerts, which are only one component of automated monitoring systems. It is uncertain whether such systems can replace regular, careful review of the patient's condition by experienced healthcare staff.
Proteomics Quality Control: Quality Control Software for MaxQuant Results.
Bielow, Chris; Mastrobuoni, Guido; Kempa, Stefan
2016-03-04
Mass spectrometry-based proteomics coupled to liquid chromatography has matured into an automatized, high-throughput technology, producing data on the scale of multiple gigabytes per instrument per day. Consequently, an automated quality control (QC) and quality analysis (QA) capable of detecting measurement bias, verifying consistency, and avoiding propagation of error is paramount for instrument operators and scientists in charge of downstream analysis. We have developed an R-based QC pipeline called Proteomics Quality Control (PTXQC) for bottom-up LC-MS data generated by the MaxQuant software pipeline. PTXQC creates a QC report containing a comprehensive and powerful set of QC metrics, augmented with automated scoring functions. The automated scores are collated to create an overview heatmap at the beginning of the report, giving valuable guidance also to nonspecialists. Our software supports a wide range of experimental designs, including stable isotope labeling by amino acids in cell culture (SILAC), tandem mass tags (TMT), and label-free data. Furthermore, we introduce new metrics to score MaxQuant's Match-between-runs (MBR) functionality by which peptide identifications can be transferred across Raw files based on accurate retention time and m/z. Last but not least, PTXQC is easy to install and use and represents the first QC software capable of processing MaxQuant result tables. PTXQC is freely available at https://github.com/cbielow/PTXQC .
Byrne, M.D.; Jordan, T.R.; Welle, T.
2013-01-01
Objective The objective of this study was to investigate and improve the use of automated data collection procedures for nursing research and quality assurance. Methods A descriptive, correlational study analyzed 44 orthopedic surgical patients who were part of an evidence-based practice (EBP) project examining post-operative oxygen therapy at a Midwestern hospital. The automation work attempted to replicate a manually-collected data set from the EBP project. Results Automation was successful in replicating data collection for study data elements that were available in the clinical data repository. The automation procedures identified 32 “false negative” patients who met the inclusion criteria described in the EBP project but were not selected during the manual data collection. Automating data collection for certain data elements, such as oxygen saturation, proved challenging because of workflow and practice variations and the reliance on disparate sources for data abstraction. Automation also revealed instances of human error including computational and transcription errors as well as incomplete selection of eligible patients. Conclusion Automated data collection for analysis of nursing-specific phenomenon is potentially superior to manual data collection methods. Creation of automated reports and analysis may require initial up-front investment with collaboration between clinicians, researchers and information technology specialists who can manage the ambiguities and challenges of research and quality assurance work in healthcare. PMID:23650488
A Transparent and Transferable Framework for Tracking Quality Information in Large Datasets
Smith, Derek E.; Metzger, Stefan; Taylor, Jeffrey R.
2014-01-01
The ability to evaluate the validity of data is essential to any investigation, and manual “eyes on” assessments of data quality have dominated in the past. Yet, as the size of collected data continues to increase, so does the effort required to assess their quality. This challenge is of particular concern for networks that automate their data collection, and has resulted in the automation of many quality assurance and quality control analyses. Unfortunately, the interpretation of the resulting data quality flags can become quite challenging with large data sets. We have developed a framework to summarize data quality information and facilitate interpretation by the user. Our framework consists of first compiling data quality information and then presenting it through two separate mechanisms: a quality report and a quality summary. The quality report presents the results of specific quality analyses as they relate to individual observations, while the quality summary takes a spatial or temporal aggregate of each quality analysis and provides a summary of the results. Included in the quality summary is a final quality flag, which further condenses data quality information to assess whether a data product is valid or not. This framework has the added flexibility to allow “eyes on” information on data quality to be incorporated for many data types. Furthermore, this framework can aid problem tracking and resolution, should sensor or system malfunctions arise. PMID:25379884
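A minimal sketch of the report/summary split described above: per-observation pass/fail flags form the quality report, their temporal aggregate forms the quality summary, and a final flag condenses the summary into a single valid/invalid decision. The example table, the 20% failure-rate threshold, and the any-test rule are illustrative assumptions, not the framework's actual parameters.

```python
import pandas as pd

# Hypothetical quality report: one row per observation, 0 = pass, 1 = fail.
flags = pd.DataFrame({
    "range_test": [0, 0, 1, 0],
    "spike_test": [0, 1, 1, 0],
    "null_test":  [0, 0, 0, 0],
})

def quality_summary(flags, alpha=0.2):
    """Aggregate each analysis over the window; the final flag fails the
    product if any test's failure rate exceeds alpha (assumed rule)."""
    rates = flags.mean()                   # temporal aggregate per analysis
    final_flag = int((rates > alpha).any())
    return rates, final_flag

rates, final_flag = quality_summary(flags)
print(rates, final_flag)
```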
ERIC Educational Resources Information Center
Song, Yi; Deane, Paul; Beigman Klebanov, Beata
2017-01-01
This project focuses on laying the foundations for automated analysis of argumentation schemes, supporting identification and classification of the arguments being made in a text, for the purpose of scoring the quality of written analyses of arguments. We developed annotation protocols for 20 argument prompts from a college-level test under the…
Monitoring the effects of highway construction over the Little River and Crane Creek.
DOT National Transportation Integrated Search
2005-09-08
This report summarizes the results of a two-year water quality monitoring project to document the effects of : the construction of the Highway 1 bypass on the water quality of Crane (Crains) Creek and the Little River. : Automated monitoring equipmen...
Practical Considerations for Optic Nerve Estimation in Telemedicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karnowski, Thomas Paul; Aykac, Deniz; Chaum, Edward
The projected increase in diabetes in the United States and worldwide has created a need for broad-based, inexpensive screening for diabetic retinopathy (DR), an eye disease which can lead to vision impairment. A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening. In this work we report on the effect of quality estimation on an optic nerve (ON) detection method with a confidence metric. We report on an improvement of the fusion technique using a data set from an ophthalmologist's practice, and then show the results of the method as a function of image quality on a set of images from an on-line telemedicine network collected in Spring 2009 and another broad-based screening program. We show that the fusion method, combined with quality estimation processing, can improve detection performance and also provide a method for utilizing a physician-in-the-loop for images that may exceed the capabilities of automated processing.
Automated water monitor system field demonstration test report. Volume 2: Technical summary
NASA Technical Reports Server (NTRS)
Brooks, R. L.; Jeffers, E. L.; Perreira, J.; Poel, J. D.; Nibley, D.; Nuss, R. H.
1981-01-01
The NASA Automatic Water Monitor System was installed in a water reclamation facility to evaluate the technical and cost feasibility of producing high quality reclaimed water. Data gathered during this field demonstration test are reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aykac, Deniz; Chaum, Edward; Fox, Karen
A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening for diabetic retinopathy (DR) and other eye diseases. In the process of a routine eye-screening examination, other non-image data is often available which may be useful in automated diagnosis of disease. In this work, we report on the results of combining this non-image data with image data, using the protocol and processing steps of a prototype system for automated disease diagnosis of retina examinations from a telemedicine network. The system includes quality assessments, automated physiology detection, and automated lesion detection to create an archive of known cases. Non-image data such as diabetes onset date and hemoglobin A1c (HgA1c) for each patient examination are included as well, and the system is used to create a content-based image retrieval engine capable of automated diagnosis of disease into 'normal' and 'abnormal' categories. The system achieves a sensitivity and specificity of 91.2% and 71.6% using hold-one-out validation testing.
Automated Attitude Sensor Calibration: Progress and Plans
NASA Technical Reports Server (NTRS)
Sedlak, Joseph; Hashmall, Joseph
2004-01-01
This paper describes ongoing work at NASA/Goddard Space Flight Center to improve the quality of spacecraft attitude sensor calibration and reduce costs by automating parts of the calibration process. The new calibration software can autonomously preview data quality over a given time span, select a subset of the data for processing, perform the requested calibration, and output a report. This level of automation is currently being implemented for two specific applications: inertial reference unit (IRU) calibration and sensor alignment calibration. The IRU calibration utility makes use of a sequential version of the Davenport algorithm. This utility has been successfully tested with simulated and actual flight data. The alignment calibration is still in the early testing stage. Both utilities will be incorporated into the institutional attitude ground support system.
Automated data mining of a proprietary database system for physician quality improvement.
Johnstone, Peter A S; Crenshaw, Tim; Cassels, Diane G; Fox, Timothy H
2008-04-01
Physician practice quality improvement is a subject of intense national debate. This report describes using a software data acquisition program to mine an existing, commonly used proprietary radiation oncology database to assess physician performance. Between 2003 and 2004, a manual analysis was performed of electronic portal image (EPI) review records. Custom software was recently developed to mine the record-and-verify database and the review process of EPI at our institution. In late 2006, a report was developed that allowed for immediate review of physician completeness and speed of EPI review for any prescribed period. The software extracted >46,000 EPIs between 2003 and 2007, providing EPI review status and time to review by each physician. Between 2003 and 2007, the department EPI review improved from 77% to 97% (range, 85.4-100%), with a decrease in the mean time to review from 4.2 days to 2.4 days. The initial intervention in 2003 to 2004 was moderately successful in changing the EPI review patterns; it was not repeated because of the time required to perform it. However, the implementation in 2006 of the automated review tool yielded a profound change in practice. Using the software, the automated chart review required approximately 1.5 h for mining and extracting the data for the 4-year period. This study quantified the EPI review process as it evolved during a 4-year period at our institution and found that automation of data retrieval and review simplified and facilitated physician quality improvement.
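The per-physician completeness and time-to-review statistics described above are a straightforward aggregation once the record-and-verify data are extracted. Purely as a hedged sketch (the real system mined a proprietary database; the `epi_log` schema, column names, and SQLite backend here are hypothetical):

```python
import sqlite3

# Hypothetical schema standing in for the record-and-verify database:
# epi_log(physician TEXT, reviewed INTEGER, acq_date TEXT, review_date TEXT)
QUERY = """
SELECT physician,
       COUNT(*)      AS total_epi,
       AVG(reviewed) AS review_rate,
       AVG(julianday(review_date) - julianday(acq_date)) AS mean_days_to_review
FROM epi_log
WHERE acq_date BETWEEN ? AND ?
GROUP BY physician
"""

def epi_report(db_path, start, end):
    """Real-time report of EPI review completeness and speed by physician."""
    with sqlite3.connect(db_path) as con:
        return con.execute(QUERY, (start, end)).fetchall()
```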
Structured reporting platform improves CAD-RADS assessment.
Szilveszter, Bálint; Kolossváry, Márton; Karády, Júlia; Jermendy, Ádám L; Károlyi, Mihály; Panajotu, Alexisz; Bagyura, Zsolt; Vecsey-Nagy, Milán; Cury, Ricardo C; Leipsic, Jonathon A; Merkely, Béla; Maurovich-Horvat, Pál
2017-11-01
Structured reporting in cardiac imaging is strongly encouraged to improve quality through consistency. The Coronary Artery Disease - Reporting and Data System (CAD-RADS) was recently introduced to facilitate interdisciplinary communication of coronary CT angiography (CTA) results. We aimed to assess the agreement between manual and automated CAD-RADS classification using a structured reporting platform. Five readers prospectively interpreted 500 coronary CT angiographies using a structured reporting platform that automatically calculates the CAD-RADS score based on stenosis and plaque parameters manually entered by the reader. In addition, all readers manually assessed CAD-RADS blinded to the automatically derived results, which was used as the reference standard. We evaluated factors influencing reader performance including CAD-RADS training, clinical load, time of the day and level of expertise. Total agreement between manual and automated classification was 80.2%. Agreement in stenosis categories was 86.7%, whereas the agreement in modifiers was 95.8% for "N", 96.8% for "S", 95.6% for "V" and 99.4% for "G". Agreement for V improved after CAD-RADS training (p = 0.047). Time of the day and clinical load did not influence reader performance (p > 0.05 for both). Less experienced readers had a higher total agreement as compared to more experienced readers (87.0% vs 78.0%, respectively; p = 0.011). Even though automated CAD-RADS classification uses data filled in by the readers, it outperforms manual classification by preventing human errors. Structured reporting platforms with automated calculation of the CAD-RADS score might improve data quality and support standardization of clinical decision making. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
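The core of an automated CAD-RADS calculation is a deterministic mapping from entered stenosis parameters to the 0-5 category. The sketch below uses the published CAD-RADS stenosis cut-points; it is not the platform's actual rule engine, and the modifiers (N, S, V, G) and the 4A/4B split are omitted for brevity.

```python
def cad_rads_category(max_stenosis_pct, total_occlusion=False):
    """Map the maximal per-patient stenosis to the CAD-RADS 0-5 category.
    Published cut-points: 0%, 1-24%, 25-49%, 50-69%, 70-99%, 100%."""
    if total_occlusion or max_stenosis_pct >= 100:
        return "CAD-RADS 5"
    if max_stenosis_pct >= 70:
        return "CAD-RADS 4"
    if max_stenosis_pct >= 50:
        return "CAD-RADS 3"
    if max_stenosis_pct >= 25:
        return "CAD-RADS 2"
    if max_stenosis_pct >= 1:
        return "CAD-RADS 1"
    return "CAD-RADS 0"
```

Deriving the category from the stenosis fields the reader has already entered, rather than asking for it separately, is what lets the platform prevent the human classification errors reported above.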
Driving photomask supplier quality through automation
NASA Astrophysics Data System (ADS)
Russell, Drew; Espenscheid, Andrew
2007-10-01
In 2005, Freescale Semiconductor's newly centralized mask data prep organization (MSO) initiated a project to develop an automated global quality validation system for photomasks delivered to Freescale Semiconductor fabs. The system handles Certificate of Conformance (CofC) quality metric collection, validation, reporting and an alert system for all photomasks shipped to Freescale fabs from all qualified global suppliers. The completed system automatically collects 30+ quality metrics for each photomask shipped. Other quality metrics are generated from the collected data and quality metric conformance is automatically validated to specifications or control limits with failure alerts emailed to fab photomask and mask data prep engineering. A quality data warehouse stores the data for future analysis, which is performed quarterly. The improved access to data provided by the system has improved Freescale engineers' ability to spot trends and opportunities for improvement with our suppliers' processes. This paper will review each phase of the project, current system capabilities and quality system benefits for both our photomask suppliers and Freescale.
de Brouwer, Hans; Stegeman, Gerrit
2011-02-01
To maximize utilization of expensive laboratory instruments and to make most effective use of skilled human resources, the entire chain of data processing, calculation, and reporting that is needed to transform raw NMR data into meaningful results was automated. The LEAN process improvement tools were used to identify non-value-added steps in the existing process. These steps were eliminated using an in-house developed software package, which allowed us to meet the key requirement of improving quality and reliability compared with the existing process while freeing up valuable human resources and increasing productivity. Reliability and quality were improved by the consistent data treatment as performed by the software and the uniform administration of results. Automating a single NMR spectrometer led to a reduction in operator time of 35%, doubling of the annual sample throughput from 1400 to 2800, and reducing the turnaround time from 6 days to less than 2. Copyright © 2011 Society for Laboratory Automation and Screening. Published by Elsevier Inc. All rights reserved.
Measuring Up: Implementing a Dental Quality Measure in the Electronic Health Record Context
Bhardwaj, Aarti; Ramoni, Rachel; Kalenderian, Elsbeth; Neumann, Ana; Hebballi, Nutan B; White, Joel M; McClellan, Lyle; Walji, Muhammad F
2015-01-01
Background Quality improvement requires quality measures that are validly implementable. In this work, we assessed the feasibility and performance of an automated electronic Meaningful Use dental clinical quality measure (percentage of children who received fluoride varnish). Methods We defined how to implement the automated measure queries in a dental electronic health record (EHR). Within records identified through automated query, we manually reviewed a subsample to assess the performance of the query. Results The automated query found 71.0% of patients to have had fluoride varnish compared to 77.6% found using the manual chart review. The automated quality measure performance was 90.5% sensitivity, 90.8% specificity, 96.9% positive predictive value, and 75.2% negative predictive value. Conclusions Our findings support the feasibility of automated dental quality measure queries in the context of sufficient structured data. Information noted only in the free text rather than in structured data would require natural language processing approaches to effectively query. Practical Implications To participate in self-directed quality improvement, dental clinicians must embrace the accountability era. Commitment to quality will require enhanced documentation in order to support near-term automated calculation of quality measures. PMID:26562736
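The performance figures quoted above are the standard confusion-matrix measures of the automated query against manual chart review as the reference standard. Purely to make the definitions concrete (the underlying counts are not given in the abstract), a sketch:

```python
def query_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV of an automated quality-measure
    query, with manual chart review as the reference standard."""
    return {
        "sensitivity": tp / (tp + fn),  # automated query finds true positives
        "specificity": tn / (tn + fp),  # automated query rejects true negatives
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```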
Benefits of an automated GLP final report preparation software solution.
Elvebak, Larry E
2011-07-01
The final product of analytical laboratories performing US FDA-regulated (or GLP) method validation and bioanalysis studies is the final report. Although there are commercial-off-the-shelf (COTS) software/instrument systems available to laboratory managers to automate and manage almost every aspect of the instrumental and sample-handling processes of GLP studies, there are few software systems available to fully manage the GLP final report preparation process. This lack of appropriate COTS tools results in the implementation of rather Byzantine and manual processes to cobble together all the information needed to generate a GLP final report. The manual nature of these processes results in the need for several iterative quality control and quality assurance events to ensure data accuracy and report formatting. The industry is in need of a COTS solution that gives laboratory managers and study directors the ability to manage as many portions as possible of the GLP final report writing process and the ability to generate a GLP final report with the click of a button. This article describes the COTS software features needed to give laboratory managers and study directors such a solution.
Automated image quality assessment for chest CT scans.
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2018-02-01
Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.
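As a hedged sketch of the kind of measurement the method performs (the segmentation step is omitted, and the reference HU values and their use as calibration targets are illustrative rather than the paper's exact parameters): noise is characterized by the HU standard deviation within each automatically segmented homogeneous region, and calibration error by the deviation of the region's mean HU from its nominal value.

```python
import numpy as np

# Nominal CT numbers (HU) for the three homogeneous regions; illustrative.
REFERENCES = {"external_air": -1000.0, "trachea_air": -1000.0, "aorta_blood": 50.0}

def region_stats(hu_values):
    """Noise and mean CT number for one segmented homogeneous region."""
    hu = np.asarray(hu_values, dtype=float)
    return {"mean": hu.mean(), "noise_sd": hu.std(ddof=1)}

def calibration_error(region, stats):
    """Deviation of the measured mean HU from the nominal reference value."""
    return stats["mean"] - REFERENCES[region]
```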
Automated water monitor system field demonstration test report. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Brooks, R. L.; Jeffers, E. L.; Perreira, J.; Poel, J. D.; Nibley, D.; Nuss, R. H.
1981-01-01
A system that performs water quality monitoring on-line and in real time much as it would be done in a spacecraft, was developed and demonstrated. The system has the capability to determine conformance to high effluent quality standards and to increase the potential for reclamation and reuse of water.
Garbage in, Garbage Stays: How ERPs Could Improve Our Data-Quality Issues
ERIC Educational Resources Information Center
Riccardi, Richard I.
2009-01-01
As universities begin to implement business intelligence tools such as end-user reporting, data warehousing, and dashboard indicators, data quality becomes an even greater and more public issue. With automated tools taking nightly snapshots of the database, the faulty data grow exponentially, propagating as another layer of the data warehouse.…
A pattern-based method to automate mask inspection files
NASA Astrophysics Data System (ADS)
Kamal Baharin, Ezni Aznida Binti; Muhsain, Mohamad Fahmi Bin; Ahmad Ibrahim, Muhamad Asraf Bin; Ahmad Noorhani, Ahmad Nurul Ihsan Bin; Sweis, Jason; Lai, Ya-Chieh; Hurat, Philippe
2017-03-01
Mask inspection is a critical step in the mask manufacturing process in order to ensure all dimensions printed are within the needed tolerances. This becomes even more challenging as the device nodes shrink and the complexity of the tapeout increases. Thus, the number of measurement points and their critical dimension (CD) types are increasing to ensure the quality of the mask. In addition to the mask quality, there is a significant amount of manpower needed when the preparation and debugging of this process are not automated. By utilizing a novel pattern search technology with the ability to measure and report match region scan-line (edge) measurements, we can create a flow to find, measure and mark all metrology locations of interest and provide this automated report to the mask shop for inspection. A digital library is created based on the technology product and node which contains the test patterns to be measured. This paper will discuss how these digital libraries will be generated and then utilized. As a time-critical part of the manufacturing process, this can also reduce the data preparation cycle time, minimize the amount of manual/human error in naming and measuring the various locations, reduce the risk of wrong/missing CD locations, and reduce the amount of manpower needed overall. We will also review an example pattern and how the reporting structure to the mask shop can be processed. This entire process can now be fully automated.
Tuong, William; Wang, Audrey S.; Armstrong, April W.
2016-01-01
IMPORTANCE Effective patient education is necessary for treating patients with acne vulgaris. Automated online counseling simulates face-to-face encounters and may be a useful tool to deliver education. OBJECTIVE To compare the effectiveness of a standard educational website with that of an automated-counseling website in improving clinical outcomes and quality of life among adolescents with acne. DESIGN, SETTING, AND PARTICIPANTS Randomized clinical trial conducted between March 27, 2014, and June 27, 2014, including a 12-week follow-up in a local inner-city high school. Ninety-eight students aged at least 13 years with mild to moderate acne were eligible for participation. A per-protocol analysis of the evaluable population was conducted on clinical outcome data. INTERVENTIONS Participants viewed either a standard educational website or an automated-counseling website. MAIN OUTCOMES AND MEASURES The primary outcome was the total acne lesion count. Secondary measures included the Children’s Dermatology Life Quality Index (CDLQI) scores and general skin care behavior. RESULTS Forty-nine participants were randomized to each group. At baseline, the mean (SD) total acne lesion count was not significantly different between the standard-website group and the automated-counseling–website group (21.33 [10.81] vs 25.33 [12.45]; P = .10). Improvement in the mean (SD) acne lesion count was not significantly different between the standard-website group and the automated-counseling–website group (0.20 [9.26] vs 3.90 [12.19]; P = .10). The mean (SD) improvement in CDLQI score for the standard-website group was not significantly different from that of the automated-counseling–website group (0.17 [2.64] vs 0.39 [2.94]; P = .71). After 12 weeks, a greater proportion of participants in the automated-counseling–website group maintained or adopted a recommended anti-acne skin care routine compared with the standard-website group (43% vs 22%; P = .03). CONCLUSIONS AND RELEVANCE Internet-based acne education using automated counseling was not superior to standard-website education in improving acne severity and quality of life. However, a greater proportion of participants who viewed the automated-counseling website reported having maintained or adopted a recommended anti-acne skin care regimen. TRIAL REGISTRATION clinicaltrials.gov Identifier: NCT02031718 PMID:26017816
Bilimoria, Karl Y; Kmiecik, Thomas E; DaRosa, Debra A; Halverson, Amy; Eskandari, Mark K; Bell, Richard H; Soper, Nathaniel J; Wayne, Jeffrey D
2009-04-01
To design a Web-based system to track adverse and near-miss events, to establish an automated method to identify patterns of events, and to assess the adverse event reporting behavior of physicians. A Web-based system was designed to collect physician-reported adverse events, including weekly Morbidity and Mortality (M&M) entries and anonymous adverse/near-miss events. An automated system was set up to help identify event patterns. Adverse event frequency was compared with hospital databases to assess reporting completeness. A metropolitan tertiary care center. Identification of adverse event patterns and completeness of reporting. From September 2005 to August 2007, reports were filed for 15,524 surgical patients, including 957 (6.2%) adverse events and 34 (0.2%) anonymous reports. The automated pattern recognition system helped identify 4 event patterns from M&M reports and 3 patterns from anonymous/near-miss reporting. After multidisciplinary meetings and expert reviews, the patterns were addressed with educational initiatives, correction of systems issues, and/or intensive quality monitoring. Only 25% of complications and 42% of inpatient deaths were reported. A total of 75.2% of adverse events resulting in permanent disability or death were attributed to the nature of the disease. Interventions to improve reporting were largely unsuccessful. We have developed a user-friendly Web-based system to track complications and identify patterns of adverse events. Underreporting of adverse events and attributing complications to the nature of the disease represent a problem in the reporting culture among surgeons at our institution. Similar systems should be used by surgery departments, particularly those affiliated with teaching hospitals, to identify quality improvement opportunities.
USDA-ARS?s Scientific Manuscript database
The large size and relative complexity of many plant genomes make creation, quality control, and dissemination of high-quality gene structure annotations challenging. In response, we have developed MAKER-P, a fast and easy-to-use genome annotation engine for plants. Here, we report the use of MAKER-...
NASA Technical Reports Server (NTRS)
Broderick, Ron
1997-01-01
The ultimate goal of this report was to integrate the powerful tools of artificial intelligence into the traditional process of software development. To maintain the US aerospace competitive advantage, traditional aerospace and software engineers need to more easily incorporate the technology of artificial intelligence into the advanced aerospace systems being designed today. The future goal was to transition artificial intelligence from an emerging technology to a standard technology that is considered early in the life cycle process to develop state-of-the-art aircraft automation systems. This report addressed the future goal in two ways. First, it provided a matrix that identified typical aircraft automation applications conducive to various artificial intelligence methods. The purpose of this matrix was to provide top-level guidance to managers contemplating the possible use of artificial intelligence in the development of aircraft automation. Second, the report provided a methodology to formally evaluate neural networks as part of the traditional process of software development. The matrix was developed by organizing the discipline of artificial intelligence into the following six methods: logical, object representation-based, distributed, uncertainty management, temporal and neurocomputing. Next, a study of existing aircraft automation applications that have been conducive to artificial intelligence implementation resulted in the following five categories: pilot-vehicle interface, system status and diagnosis, situation assessment, automatic flight planning, and aircraft flight control. The resulting matrix provided management guidance to understand artificial intelligence as it applied to aircraft automation. The approach taken to develop a methodology to formally evaluate neural networks as part of the software engineering life cycle was to start with the existing software quality assurance standards and to change these standards to include neural network development. The changes were to include evaluation tools that can be applied to neural networks at each phase of the software engineering life cycle. The result was a formal evaluation approach to increase the product quality of systems that use neural networks for their implementation.
Nursing Home Staffing and Quality under the Nursing Home Reform Act
ERIC Educational Resources Information Center
Zhang, Xinzhi; Grabowski, David C.
2004-01-01
Purpose: We examine whether the Nursing Home Reform Act (NHRA) improved nursing home staffing and quality. Design and Methods: Data from 5,092 nursing homes were linked across the 1987 Medicare/Medicaid Automated Certification System and the 1993 Online Survey, Certification and Reporting system. A dummy-year model was used to examine the effects…
Kukhareva, Polina V; Kawamoto, Kensaku; Shields, David E; Barfuss, Darryl T; Halley, Anne M; Tippetts, Tyler J; Warner, Phillip B; Bray, Bruce E; Staes, Catherine J
2014-01-01
Electronic quality measurement (QM) and clinical decision support (CDS) are closely related but are typically implemented independently, resulting in significant duplication of effort. While it seems intuitive that technical approaches could be re-used across these two related use cases, such reuse is seldom reported in the literature, especially for standards-based approaches. Therefore, we evaluated the feasibility of using a standards-based CDS framework aligned with anticipated EHR certification criteria to implement electronic QM. The CDS-QM framework was used to automate a complex national quality measure (SCIP-VTE-2) at an academic healthcare system which had previously relied on time-consuming manual chart abstractions. Compared with 305 manually-reviewed reference cases, the recall of automated measurement was 100%. The precision was 96.3% (CI:92.6%-98.5%) for ascertaining the denominator and 96.2% (CI:92.3%-98.4%) for the numerator. We therefore validated that a standards-based CDS-QM framework can successfully enable automated QM, and we identified benefits and challenges with this approach. PMID:25954389
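For readers recomputing such figures, a short Python sketch of precision and recall with a Wilson score interval follows; the confusion counts below are illustrative, and the study's exact interval method is not stated.

    # Precision/recall of an automated measure against manually reviewed
    # reference cases; the counts are hypothetical, not the study's.
    from math import sqrt

    def wilson_ci(successes, n, z=1.96):
        """95% Wilson score interval for a proportion (one common choice)."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    tp, fp, fn = 260, 10, 0             # hypothetical confusion counts
    recall = tp / (tp + fn)             # 1.0, i.e. 100% recall
    precision = tp / (tp + fp)          # ~0.963
    low, high = wilson_ci(tp, tp + fp)
    print(f"recall={recall:.3f} precision={precision:.3f} CI=({low:.3f},{high:.3f})")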
Fast, Automated, Photo realistic, 3D Modeling of Building Interiors
2016-09-12
In this project, we developed two algorithmic pipelines for GPS-denied indoor mobile 3D mapping using an ambulatory backpack system. By mounting scanning equipment on a backpack system, a human operator can traverse the interior of a building to produce a high-quality 3D reconstruction. In each of our...
Does Automated Feedback Improve Writing Quality?
ERIC Educational Resources Information Center
Wilson, Joshua; Olinghouse, Natalie G.; Andrada, Gilbert N.
2014-01-01
The current study examines data from students in grades 4-8 who participated in a statewide computer-based benchmark writing assessment that featured automated essay scoring and automated feedback. We examined whether the use of automated feedback was associated with gains in writing quality across revisions to an essay, and with transfer effects…
Technology Transfer Opportunities: Automated Ground-Water Monitoring
Smith, Kirk P.; Granato, Gregory E.
1997-01-01
Introduction A new automated ground-water monitoring system developed by the U.S. Geological Survey (USGS) measures and records values of selected water-quality properties and constituents using protocols approved for manual sampling. Prototypes using the automated process have demonstrated the ability to increase the quantity and quality of data collected and have shown the potential for reducing labor and material costs for ground-water quality data collection. Automation of water-quality monitoring systems in the field, in laboratories, and in industry has increased data density and utility while reducing operating costs. Uses for an automated ground-water monitoring system include (but are not limited to) monitoring ground-water quality for research; monitoring known or potential contaminant sites, such as near landfills, underground storage tanks, or other facilities where potential contaminants are stored; and serving as an early warning system monitoring ground-water quality near public water-supply wells.
Measuring up: Implementing a dental quality measure in the electronic health record context.
Bhardwaj, Aarti; Ramoni, Rachel; Kalenderian, Elsbeth; Neumann, Ana; Hebballi, Nutan B; White, Joel M; McClellan, Lyle; Walji, Muhammad F
2016-01-01
Quality improvement requires using quality measures that can be implemented in a valid manner. Using guidelines set forth by the Meaningful Use portion of the Health Information Technology for Economic and Clinical Health Act, the authors assessed the feasibility and performance of an automated electronic Meaningful Use dental clinical quality measure to determine the percentage of children who received fluoride varnish. The authors defined how to implement the automated measure queries in a dental electronic health record. Within records identified through automated query, the authors manually reviewed a subsample to assess the performance of the query. The automated query results revealed that 71.0% of patients had fluoride varnish compared with the manual chart review results that indicated 77.6% of patients had fluoride varnish. The automated quality measure performance results indicated 90.5% sensitivity, 90.8% specificity, 96.9% positive predictive value, and 75.2% negative predictive value. The authors' findings support the feasibility of using automated dental quality measure queries in the context of sufficient structured data. Information noted only in free text rather than in structured data would require using natural language processing approaches to effectively query electronic health records. To participate in self-directed quality improvement, dental clinicians must embrace the accountability era. Commitment to quality will require enhanced documentation to support near-term automated calculation of quality measures. Copyright © 2016 American Dental Association. Published by Elsevier Inc. All rights reserved.
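The four reported statistics follow directly from a two-by-two table of automated-query results against manual chart review; a minimal Python illustration with hypothetical counts:

    # Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table;
    # the counts are invented for illustration, not the study's data.
    def diagnostic_stats(tp, fp, fn, tn):
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    print(diagnostic_stats(tp=95, fp=3, fn=10, tn=30))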
An ultraviolet-visible spectrophotometer automation system. Part 3: Program documentation
NASA Astrophysics Data System (ADS)
Roth, G. S.; Teuschler, J. M.; Budde, W. L.
1982-07-01
The Ultraviolet-Visible Spectrophotometer (UVVIS) automation system accomplishes 'on-line' spectrophotometric quality assurance determinations, report generation, plot generation, and data reduction for chlorophyll or color analysis. This system also has the capability to process manually entered data for the analysis of chlorophyll or color. For each program of the UVVIS system, this document contains a program description, flowchart, variable dictionary, code listing, and symbol cross-reference table. Also included are descriptions of file structures and of routines common to all automated analyses. The programs are written in Data General extended BASIC, Revision 4.3, under the RDOS operating system, Revision 6.2. The BASIC code has been enhanced for real-time data acquisition, which is accomplished by CALLs to assembly language subroutines. Two other related publications are 'An Ultraviolet-Visible Spectrophotometer Automation System - Part I Functional Specifications' and 'An Ultraviolet-Visible Spectrophotometer Automation System - Part II User's Guide.'
Does bacteriology laboratory automation reduce time to results and increase quality management?
Dauwalder, O; Landrieve, L; Laurent, F; de Montclos, M; Vandenesch, F; Lina, G
2016-03-01
Due to reductions in financial and human resources, many microbiological laboratories have merged to build very large clinical microbiology laboratories, which allow the use of fully automated laboratory instruments. For clinical chemistry and haematology, automation has reduced the time to results and improved the management of laboratory quality. The aim of this review was to examine whether fully automated laboratory instruments for microbiology can reduce time to results and impact quality management. This study focused on solutions that are currently available, including the BD Kiestra™ Work Cell Automation and Total Lab Automation and the Copan WASPLab(®). Copyright © 2015 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Quality Assurance and T&E of Inertial Systems for RLV Mission
NASA Astrophysics Data System (ADS)
Sathiamurthi, S.; Thakur, Nayana; Hari, K.; Peter, Pilmy; Biju, V. S.; Mani, K. S.
2017-12-01
This work describes the quality assurance and Test and Evaluation (T&E) activities carried out for the inertial systems flown successfully in India's first reusable launch vehicle technology demonstrator hypersonic experiment mission. As part of reliability analysis, failure mode effect and criticality analysis and derating analysis were carried out in the initial design phase; the findings were presented to design review forums and the recommendations were implemented. The T&E plan was meticulously worked out and presented to the respective forums for review and implementation. Test data analysis, health parameter plotting, and test report generation were automated, and these automations significantly reduced the time required for these activities and helped to avoid manual errors. Further, the T&E cycle was optimized without compromising on quality aspects. These specific measures helped to achieve zero-defect delivery of inertial systems for the RLV application.
Garvin, Jennifer H; DuVall, Scott L; South, Brett R; Bray, Bruce E; Bolton, Daniel; Heavirland, Julia; Pickard, Steve; Heidenreich, Paul; Shen, Shuying; Weir, Charlene; Samore, Matthew; Goldstein, Mary K
2012-01-01
Left ventricular ejection fraction (EF) is a key component of heart failure quality measures used within the Department of Veteran Affairs (VA). Our goals were to build a natural language processing system to extract the EF from free-text echocardiogram reports to automate measurement reporting and to validate the accuracy of the system using a comparison reference standard developed through human review. This project was a Translational Use Case Project within the VA Consortium for Healthcare Informatics. We created a set of regular expressions and rules to capture the EF using a random sample of 765 echocardiograms from seven VA medical centers. The documents were randomly assigned to two sets: a set of 275 used for training and a second set of 490 used for testing and validation. To establish the reference standard, two independent reviewers annotated all documents in both sets; a third reviewer adjudicated disagreements. System test results for document-level classification of EF of <40% had a sensitivity (recall) of 98.41%, a specificity of 100%, a positive predictive value (precision) of 100%, and an F measure of 99.2%. System test results at the concept level had a sensitivity of 88.9% (95% CI 87.7% to 90.0%), a positive predictive value of 95% (95% CI 94.2% to 95.9%), and an F measure of 91.9% (95% CI 91.2% to 92.7%). An EF value of <40% can be accurately identified in VA echocardiogram reports. An automated information extraction system can be used to accurately extract EF for quality measurement.
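A highly reduced Python sketch of the regular-expression approach follows; the single pattern and the <40% classification below are illustrative stand-ins for the study's much larger rule set.

    # Minimal regex extractor for EF mentions in free-text echo reports.
    # Real systems handle many more phrasings, negation, and qualifiers.
    import re

    EF_PATTERN = re.compile(
        r"(?:ejection\s+fraction|LVEF|EF)\s*(?:is|of|=|:)?\s*"
        r"(\d{1,2})\s*(?:-|to)?\s*(\d{1,2})?\s*%",
        re.IGNORECASE,
    )

    def extract_ef(text):
        """Return EF values (%) found in the text; ranges become midpoints."""
        values = []
        for m in EF_PATTERN.finditer(text):
            low = int(m.group(1))
            high = int(m.group(2)) if m.group(2) else low
            values.append((low + high) / 2)
        return values

    report = "The left ventricular ejection fraction is 35-40%."
    for ef in extract_ef(report):
        print(ef, "EF < 40%" if ef < 40 else "EF >= 40%")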
NASA Astrophysics Data System (ADS)
Gorlach, Igor; Wessel, Oliver
2008-09-01
In the global automotive industry, for decades, vehicle manufacturers have continually increased the level of automation of production systems in order to be competitive. However, there is a new trend to decrease the level of automation, especially in final car assembly, for reasons of economy and flexibility. In this research, the final car assembly lines at three production sites of Volkswagen are analysed in order to determine the best level of automation for each, in terms of manufacturing costs, productivity, quality and flexibility. The case study is based on the methodology proposed by the Fraunhofer Institute. The results of the analysis indicate that fully automated assembly systems are not necessarily the best option in terms of cost, productivity and quality combined, which is attributed to high complexity of final car assembly systems; some de-automation is therefore recommended. On the other hand, the analysis shows that low automation can result in poor product quality due to reasons related to plant location, such as inadequate workers' skills, motivation, etc. Hence, the automation strategy should be formulated on the basis of analysis of all relevant aspects of the manufacturing process, such as costs, quality, productivity and flexibility in relation to the local context. A more balanced combination of automated and manual assembly operations provides better utilisation of equipment, reduces production costs and improves throughput.
Smits, Loek P.; van Wijk, Diederik F.; Duivenvoorden, Raphael; Xu, Dongxiang; Yuan, Chun; Stroes, Erik S.; Nederveen, Aart J.
2016-01-01
Purpose To study the interscan reproducibility of manual versus automated segmentation of carotid artery plaque components, and the agreement between both methods, in high and lower quality MRI scans. Methods 24 patients with 30–70% carotid artery stenosis were scheduled for 3T carotid MRI, followed by a rescan within 1 month. A multicontrast protocol (T1w, T2w, PDw and TOF sequences) was used. After co-registration and delineation of the lumen and outer wall, segmentation of plaque components (lipid-rich necrotic cores (LRNC) and calcifications) was performed both manually and automatically. Scan quality was assessed using a visual quality scale. Results Agreement between the manual and automated segmentation methods for the detection of LRNC (Cohen's kappa (κ) = 0.04) and calcification (κ = 0.41) was poor. In the high-quality scans (visual quality score ≥ 3), the agreement between manual and automated segmentation increased to κ = 0.55 and κ = 0.58 for the detection of LRNC and calcification larger than 1 mm2, respectively. Both manual and automated analysis showed good interscan reproducibility for the quantification of LRNC (intraclass correlation coefficient (ICC) of 0.94 and 0.80, respectively) and calcified plaque area (ICC of 0.95 and 0.77, respectively). Conclusion Agreement between manual and automated segmentation of LRNC and calcifications was poor, despite the good interscan reproducibility of both methods. The agreement between both methods increased to moderate in high-quality scans. These findings indicate that image quality is a critical determinant of the performance of both manual and automated segmentation of carotid artery plaque components. PMID:27930665
A method to establish seismic noise baselines for automated station assessment
McNamara, D.E.; Hutt, C.R.; Gee, L.S.; Benz, H.M.; Buland, R.P.
2009-01-01
We present a method for quantifying station noise baselines and characterizing the spectral shape of out-of-nominal noise sources. Our intent is to automate this method in order to ensure that only the highest-quality data are used in rapid earthquake products at NEIC. In addition, the station noise baselines provide a valuable tool to support the quality control of GSN and ANSS backbone data and metadata. The procedures addressed here are currently in development at the NEIC, and work is underway to understand how quickly changes from nominal can be observed and used within the NEIC processing framework. The spectral methods and software used to compute station baselines and described herein (PQLX) can be useful to both permanent and portable seismic station operators. Applications include: general seismic station and data quality control (QC), evaluation of instrument responses, assessment of near real-time communication system performance, characterization of site cultural noise conditions, and evaluation of sensor vault design, as well as assessment of gross network capabilities (McNamara et al. 2005). Future PQLX development plans include incorporating station baselines for automated QC methods and automating station status report generation and notification based on user-defined QC parameters. The PQLX software is available through the USGS (http://earthquake.usgs.gov/research/software/pqlx.php) and IRIS (http://www.iris.edu/software/pqlx/).
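A minimal Python sketch of deriving percentile noise baselines from a stack of power spectral density (PSD) estimates follows, using synthetic data; the percentile choices and the 12 dB out-of-nominal threshold are arbitrary illustrative assumptions, not the published procedure.

    # Build station noise baselines from hourly PSDs and flag departures.
    import numpy as np

    rng = np.random.default_rng(0)
    periods = np.logspace(-1, 2, 60)          # 0.1 s to 100 s period bins
    psds = -140 + 5 * rng.standard_normal((24 * 30, periods.size))  # dB, 30 days

    p10, p50, p90 = (np.percentile(psds, q, axis=0) for q in (10, 50, 90))

    new_psd = psds[-1] + 15                   # a suspiciously noisy hour
    out = np.abs(new_psd - p50) > 12          # crude out-of-nominal test
    print(f"{out.sum()} of {periods.size} period bins out of nominal")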
The value proposition of structured reporting in interventional radiology.
Durack, Jeremy C
2014-10-01
The purposes of this article are to provide a brief overview of structured radiology reporting and to emphasize the anticipated benefits from a new generation of standardized interventional radiology procedure reports. Radiology reporting standards and tools have evolved to enable automated data integration from multiple institutions using structured templates. In interventional radiology, data aggregated into clinical, research and quality registries from enriched structured reports could firmly establish the interventional radiology value proposition.
Information technology principles for management, reporting, and research.
Gillam, Michael; Rothenhaus, Todd; Smith, Vernon; Kanhouwa, Meera
2004-11-01
Information technology holds the promise to enhance the ability of individuals and organizations to manage emergency departments, improve data sharing and reporting, and facilitate research. The Society for Academic Emergency Medicine (SAEM) Consensus Committee has identified nine principles to outline a path of optimal features and designs for current and future information technology systems. The principles roughly summarized include the following: utilize open database standards with clear data dictionaries, provide administrative access to necessary data, appoint and recognize individuals with emergency department informatics expertise, allow automated alert and proper identification for enrollment of cases into research, provide visual and statistical tools and training to analyze data, embed automated configurable alarm functionality for clinical and nonclinical systems, allow multiexport standard and format configurable reporting, strategically acquire mission-critical equipment that is networked and capable of automated feedback regarding functional status and location, and dedicate resources toward informatics research and development. The SAEM Consensus Committee concludes that the diligent application of these principles will enhance emergency department management, reporting, and research and ultimately improve the quality of delivered health care.
Nakanishi, Rine; Sankaran, Sethuraman; Grady, Leo; Malpeso, Jenifer; Yousfi, Razik; Osawa, Kazuhiro; Ceponiene, Indre; Nazarat, Negin; Rahmani, Sina; Kissel, Kendall; Jayawardena, Eranthi; Dailing, Christopher; Zarins, Christopher; Koo, Bon-Kwon; Min, James K; Taylor, Charles A; Budoff, Matthew J
2018-03-23
Our goal was to evaluate the efficacy of a fully automated method for assessing the image quality (IQ) of coronary computed tomography angiography (CCTA). The machine learning method was trained using 75 CCTA studies by mapping features (noise, contrast, misregistration scores, and an un-interpretability index) to an IQ score based on manual ground truth data. The automated method was validated on a set of 50 CCTA studies and subsequently tested on a new set of 172 CCTA studies against visual IQ scores on a 5-point Likert scale. The area under the curve in the validation set was 0.96. In the 172 CCTA studies, our method yielded a Cohen's kappa statistic for the agreement between automated and visual IQ assessment of 0.67 (p < 0.01). Among the studies graded visually as good to excellent (n = 163), fair (n = 6), and poor (n = 3), an automated IQ score > 50% was assigned to 155, 5, and 2 patients, respectively. Fully automated assessment of the IQ of CCTA data sets by machine learning was reproducible and provided results similar to visual analysis, within the limits of inter-operator variability. • The proposed method enables automated and reproducible image quality assessment. • Machine learning and visual assessments yielded comparable estimates of image quality. • Automated assessment potentially allows for more standardised image quality. • Image quality assessment enables standardization of clinical trial results across different datasets.
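The agreement statistic used above can be recomputed from paired labels; a brief Python illustration with synthetic grades follows.

    # Cohen's kappa between visual and automated image-quality labels;
    # the label vectors are synthetic stand-ins for the 172-study test set.
    from sklearn.metrics import cohen_kappa_score

    visual = [5, 5, 4, 4, 3, 2, 5, 4, 1, 4]      # 5-point Likert grades
    automated = [5, 4, 4, 4, 3, 3, 5, 4, 2, 4]
    print(f"kappa = {cohen_kappa_score(visual, automated):.2f}")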
Clos, Lawrence J; Jofre, M Fransisca; Ellinger, James J; Westler, William M; Markley, John L
2013-06-01
To facilitate the high-throughput acquisition of nuclear magnetic resonance (NMR) experimental data on large sets of samples, we have developed a simple and straightforward automated methodology that capitalizes on recent advances in Bruker BioSpin NMR spectrometer hardware and software. Given the daunting challenge for non-NMR experts to collect quality spectra, our goal was to increase user accessibility, provide customized functionality, and improve the consistency and reliability of resultant data. This methodology, NMRbot, is encoded in a set of scripts written in the Python programming language accessible within the Bruker BioSpin TopSpin™ software. NMRbot improves automated data acquisition and offers novel tools for use in optimizing experimental parameters on the fly. This automated procedure has been successfully implemented for investigations in metabolomics, small-molecule library profiling, and protein-ligand titrations on four Bruker BioSpin NMR spectrometers at the National Magnetic Resonance Facility at Madison. The investigators reported benefits from ease of setup, improved spectral quality, convenient customizations, and overall time savings.
ERIC Educational Resources Information Center
Deane, Paul
2014-01-01
This paper explores automated methods for measuring features of student writing and determining their relationship to writing quality and other features of literacy, such as reading test scores. In particular, it uses the "e-rater"™ automatic essay scoring system to measure "product" features (measurable traits of the final…
Technology Transfer Opportunities: Automated Ground-Water Monitoring, A Proven Technology
Smith, Kirk P.; Granato, Gregory E.
1998-01-01
Introduction The U.S. Geological Survey (USGS) has developed and tested an automated ground-water monitoring system that measures and records values of selected water-quality properties and constituents using protocols approved for manual sampling. Prototypes using the automated process have demonstrated the ability to increase the quantity and quality of data collected and have shown the potential for reducing labor and material costs for ground-water quality data collection. Automated ground-water monitoring systems can be used to monitor known or potential contaminant sites, such as near landfills, underground storage tanks, or other facilities where potential contaminants are stored, to serve as early warning systems monitoring ground-water quality near public water-supply wells, and for ground-water quality research.
Ehrenfeld, Jesse M; McEvoy, Matthew D; Furman, William R; Snyder, Dylan; Sandberg, Warren S
2014-01-01
Anesthesiology residencies are developing trainee assessment tools to evaluate 25 milestones that map to the six core competencies. The effort will be facilitated by development of automated methods to capture, assess, and report trainee performance to program directors, the Accreditation Council for Graduate Medical Education and the trainees themselves. The authors leveraged a perioperative information management system to develop an automated, near-real-time performance capture and feedback tool that provides objective data on clinical performance and requires minimal administrative effort. Before development, the authors surveyed trainees about satisfaction with clinical performance feedback and about preferences for future feedback. Resident performance on 24,154 completed cases has been incorporated into the authors' automated dashboard, and trainees now have access to their own performance data. Eighty percent (48 of 60) of the residents responded to the feedback survey. Overall, residents "agreed/strongly agreed" that they desire frequent updates on their clinical performance on defined quality metrics and that they desired to see how they compared with the residency as a whole. Before deployment of the new tool, they "disagreed" that they were receiving feedback in a timely manner. Survey results were used to guide the format of the feedback tool that has been implemented. The authors demonstrate the implementation of a system that provides near-real-time feedback concerning resident performance on an extensible series of quality metrics, and which is responsive to requests arising from resident feedback about desired reporting mechanisms.
NASA Technical Reports Server (NTRS)
Chapman, K. B.; Cox, C. M.; Thomas, C. W.; Cuevas, O. O.; Beckman, R. M.
1994-01-01
The Flight Dynamics Facility (FDF) at the NASA Goddard Space Flight Center (GSFC) generates numerous products for NASA-supported spacecraft, including the Tracking and Data Relay Satellites (TDRS's), the Hubble Space Telescope (HST), the Extreme Ultraviolet Explorer (EUVE), and the space shuttle. These products include orbit determination data, acquisition data, event scheduling data, and attitude data. In most cases, product generation involves repetitive execution of many programs. The increasing number of missions supported by the FDF has necessitated the use of automated systems to schedule, execute, and quality assure these products. This automation allows the delivery of accurate products in a timely and cost-efficient manner. To be effective, these systems must automate as many repetitive operations as possible and must be flexible enough to meet changing support requirements. The FDF Orbit Determination Task (ODT) has implemented several systems that automate product generation and quality assurance (QA). These systems include the Orbit Production Automation System (OPAS), the New Enhanced Operations Log (NEOLOG), and the Quality Assurance Automation Software (QA Tool). Implementation of these systems has resulted in a significant reduction in required manpower, elimination of shift work and most weekend support, and improved support quality, while incurring minimal development cost. This paper will present an overview of the concepts used and experiences gained from the implementation of these automation systems.
Test/score/report: Simulation techniques for automating the test process
NASA Technical Reports Server (NTRS)
Hageman, Barbara H.; Sigman, Clayton B.; Koslosky, John T.
1994-01-01
A Test/Score/Report capability is currently being developed for the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) system which will automate testing of the Goddard Space Flight Center (GSFC) Payload Operations Control Center (POCC) and Mission Operations Center (MOC) software in three areas: telemetry decommutation, spacecraft command processing, and spacecraft memory load and dump processing. Automated computer control of the acceptance test process is one of the primary goals of a test team. With the proper simulation tools and user interface, the tasks of acceptance testing, regression testing, and repeating specific test procedures of a ground data system become simpler. Ideally, complete automation would mean plugging the operational deliverable into the simulator, pressing the start button, executing the test procedure, accumulating and analyzing the data, scoring the results, and reporting the results, along with a go/no-go recommendation, to the test team. In practice, this may not be possible because of inadequate test tools, schedule pressures, limited resources, etc. Most tests are accomplished using a certain degree of automation together with test procedures that are labor intensive. This paper discusses some simulation techniques that can improve the automation of the test process. The TASS system tests the POCC/MOC software and provides a score based on the test results. The TASS system displays statistics on the success of the POCC/MOC system processing in each of the three areas, as well as event messages pertaining to the Test/Score/Report processing. The TASS system also provides formatted reports documenting each step performed during the tests and the results of each step. A prototype of the Test/Score/Report capability is available and is currently being used to test some POCC/MOC software deliveries. When this capability is fully operational, it should greatly reduce the time necessary to test a POCC/MOC software delivery, as well as improve the quality of the test process.
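A schematic Python reduction of that loop (run the steps, accumulate results, score the run, and report a go/no-go recommendation) is shown below; the step names echo the three test areas above, while the pass threshold and results are invented.

    # Execute test steps, score the run, and emit a report with a verdict.
    def run_acceptance_test(steps, threshold=0.95):
        results = [(name, check()) for name, check in steps]
        score = sum(ok for _, ok in results) / len(results)
        lines = [f"{name}: {'PASS' if ok else 'FAIL'}" for name, ok in results]
        verdict = "GO" if score >= threshold else "NO-GO"
        return "\n".join(lines) + f"\nscore={score:.0%} -> {verdict}"

    steps = [("telemetry decommutation", lambda: True),
             ("command processing", lambda: True),
             ("memory load and dump", lambda: False)]
    print(run_acceptance_test(steps))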
López-Tarjuelo, Juan; Bouché-Babiloni, Ana; Santos-Serra, Agustín; Morillo-Macías, Virginia; Calvo, Felipe A; Kubyshin, Yuri; Ferrer-Albiach, Carlos
2014-11-01
Industrial companies use failure mode and effect analysis (FMEA) to improve quality. Our objective was to describe an FMEA and subsequent interventions for an automated intraoperative electron radiotherapy (IOERT) procedure with computed tomography simulation, pre-planning, and a fixed conventional linear accelerator. A process map, an FMEA, and a fault tree analysis are reported. The equipment considered was the radiance treatment planning system (TPS), the Elekta Precise linac, and TN-502RDM-H metal-oxide-semiconductor-field-effect transistor in vivo dosimeters. Computerized order-entry and treatment-automation were also analyzed. Fifty-seven potential modes and effects were identified and classified into 'treatment cancellation' and 'delivering an unintended dose'. They were graded from 'inconvenience' or 'suboptimal treatment' to 'total cancellation' or 'potentially wrong' or 'very wrong administered dose', although these latter effects were never experienced. Risk priority numbers (RPNs) ranged from 3 to 324 and totaled 4804. After interventions such as double checking, interlocking, automation, and structural changes the final total RPN was reduced to 1320. FMEA is crucial for prioritizing risk-reduction interventions. In a semi-surgical procedure like IOERT double checking has the potential to reduce risk and improve quality. Interlocks and automation should also be implemented to increase the safety of the procedure. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
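Risk priority numbers in FMEA are conventionally the product of severity, occurrence, and detectability ratings; the Python snippet below illustrates the arithmetic and the ranking step with invented failure modes and 1-10 scales, not the IOERT study's actual scores.

    # Compute and rank RPNs (severity x occurrence x detectability).
    failure_modes = {
        "wrong patient plan loaded": (9, 3, 4),
        "in vivo dosimeter misread": (6, 4, 3),
        "treatment cancellation": (4, 5, 2),
    }

    rpns = {mode: s * o * d for mode, (s, o, d) in failure_modes.items()}
    for mode, rpn in sorted(rpns.items(), key=lambda kv: -kv[1]):
        print(f"RPN {rpn:4d}  {mode}")      # highest-risk modes first
    print("total RPN:", sum(rpns.values()))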
Quality Work, Quality Control in Technical Services.
ERIC Educational Resources Information Center
Horny, Karen L.
1985-01-01
Quality in library technical services is explored in light of changes produced by automation. Highlights include a definition of quality; new opportunities and shifting priorities; cataloging (fullness of records, heading consistency, accountability, local standards, automated checking); need for new skills (management, staff); and boons of…
Wang, Jin; Patel, Vimal; Burns, Daniel; Laycock, John; Pandya, Kinnari; Tsoi, Jennifer; DeSilva, Binodh; Ma, Mark; Lee, Jean
2013-07-01
Regulated bioanalytical laboratories that run ligand-binding assays in support of biotherapeutics development face ever-increasing demand to support more projects with increased efficiency. Laboratory automation is a tool that has the potential to improve both quality and efficiency in a bioanalytical laboratory. The success of laboratory automation requires thoughtful evaluation of program needs and fit-for-purpose strategies, followed by pragmatic implementation plans and continuous user support. In this article, we present the development of fit-for-purpose automation of total walk-away and flexible modular modes. We shared the sustaining experience of vendor collaboration and team work to educate, promote and track the use of automation. The implementation of laboratory automation improves assay performance, data quality, process efficiency and method transfer to CRO in a regulated bioanalytical laboratory environment.
ERIC Educational Resources Information Center
Howden, Norman
1987-01-01
Reports the results of a literature review and a survey of catalogers which were conducted to study the problem of the decline in quantity and quality of applications for entry-level cataloging jobs. Factors studied included: competition between types of library professionals, automation, library education, the women's movement, and library…
Legaz-García, María Del Carmen; Dentler, Kathrin; Fernández-Breis, Jesualdo Tomás; Cornet, Ronald
2017-01-01
ArchMS is a framework that represents clinical information and knowledge using ontologies in OWL, which facilitates semantic interoperability and thereby the exploitation and secondary use of clinical data. However, it does not yet support the automated assessment of quality of care. CLIF is a stepwise method to formalize quality indicators. The method has been implemented in the CLIF tool which supports its users in generating computable queries based on a patient data model which can be based on archetypes. To enable the automated computation of quality indicators using ontologies and archetypes, we tested whether ArchMS and the CLIF tool can be integrated. We successfully automated the process of generating SPARQL queries from quality indicators that have been formalized with CLIF and integrated them into ArchMS. Hence, ontologies and archetypes can be combined for the execution of formalized quality indicators.
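A toy Python/rdflib illustration of executing a formalized indicator as a SPARQL query over ontology-backed patient data follows; the vocabulary below is invented and far simpler than the archetype-based models that ArchMS and CLIF operate on.

    # Count numerator patients with a SPARQL query over a small RDF graph.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/")
    g = Graph()
    g.add((EX.p1, RDF.type, EX.Patient))
    g.add((EX.p1, EX.receivedIndicatedCare, Literal(True)))
    g.add((EX.p2, RDF.type, EX.Patient))

    query = """
    PREFIX ex: <http://example.org/>
    SELECT (COUNT(?p) AS ?numerator) WHERE {
      ?p a ex:Patient ; ex:receivedIndicatedCare true .
    }
    """
    for row in g.query(query):
        print("numerator:", row.numerator)    # 1 of 2 patients qualifies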
Implementation of and experiences with new automation
Mahmud, Ifte; Kim, David
2000-01-01
In an environment where cost, timeliness, and quality drive the business, it is essential to look to technology for answers to these challenges. In the Novartis Pharmaceutical Quality Assurance Department, automation and robotics have become just the tools to meet these challenges. Although automation is a relatively new concept in our department, we have fully embraced it within just a few years. As our company went through a merger, there was a significant reduction in the workforce within the Quality Assurance Department through voluntary and involuntary separations. However, the workload remained constant or in some cases actually increased. So even with a reduction in laboratory personnel, we were challenged internally and from headquarters in Basle to improve productivity while maintaining integrity in quality testing. Benchmark studies indicated the Suffern site to be the manufacturing site of choice above other facilities. This is attributed to the Suffern facility employees' commitment to reduce cycle time, improve efficiency, and maintain a high level of regulatory compliance. One of the stronger contributing factors was automation technology in the laboratories, and this technology will continue to help the site's status in the future. The Automation Group was originally formed about 2 years ago to meet the demands of high quality assurance testing throughput and to bring our testing group up to standard with the industry. Automation began with only two people in the group, and now we have three people who are the next generation of automation scientists. Even with such a small staff, we have made great strides in laboratory automation as we have worked extensively with each piece of equipment brought in. The implementation process of each project was often difficult because the second generation automation group came from the laboratory without much automation experience. However, with the involvement of the users from the ‘get-go’, we were able to successfully bring in many automation technologies. Our first experience with automation was SFA/SDAS, then Zymark TPWII, followed by Zymark Multi-dose. The future of product testing lies in automation, and we shall continue to explore the possibilities of improving the testing methodologies so that chemists will be less burdened with repetitive and mundane daily tasks and more focused on bringing quality into our products. PMID:18924695
Automated daily quality control analysis for mammography in a multi-unit imaging center.
Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli
2018-01-01
Background The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.
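A brief Python sketch of wavelet multiresolution decomposition for image quality analysis follows, using PyWavelets on synthetic data; the wavelet, decomposition level, and energy summary are illustrative assumptions rather than the published algorithm.

    # Decompose an image and summarize detail-band energy per scale; a drop
    # in fine-scale energy could indicate degraded detail visibility.
    import numpy as np
    import pywt

    rng = np.random.default_rng(1)
    image = rng.normal(size=(256, 256))       # stand-in for a phantom image

    coeffs = pywt.wavedec2(image, wavelet="db2", level=3)
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        energy = sum(float(np.sum(c ** 2)) for c in (cH, cV, cD))
        print(f"scale {lvl} (coarsest first): detail energy = {energy:.1f}")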
Hadjiiski, Lubomir; Liu, Jordan; Chan, Heang-Ping; Zhou, Chuan; Wei, Jun; Chughtai, Aamer; Kuriakose, Jean; Agarwal, Prachi; Kazerooni, Ella
2016-01-01
The detection of stenotic plaques strongly depends on the quality of the coronary arterial tree imaged with coronary CT angiography (cCTA). However, it is time consuming for the radiologist to select the best-quality vessels from the multiple-phase cCTA for interpretation in clinical practice. We are developing an automated method for selection of the best-quality vessels from coronary arterial trees in multiple-phase cCTA to facilitate radiologist's reading or computerized analysis. Our automated method consists of vessel segmentation, vessel registration, corresponding vessel branch matching, vessel quality measure (VQM) estimation, and automatic selection of best branches based on VQM. For every branch, the VQM was calculated as the average radial gradient. An observer preference study was conducted to visually compare the quality of the selected vessels. 167 corresponding branch pairs were evaluated by two radiologists. The agreement between the first radiologist and the automated selection was 76% with kappa of 0.49. The agreement between the second radiologist and the automated selection was also 76% with kappa of 0.45. The agreement between the two radiologists was 81% with kappa of 0.57. The observer preference study demonstrated the feasibility of the proposed automated method for the selection of the best-quality vessels from multiple cCTA phases.
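One plausible Python reading of an average radial gradient VQM, sampling the image gradient along rays perpendicular to the centerline of a synthetic vessel, is sketched below; the paper's exact definition may differ.

    # Average radial gradient around a vessel centerline at a given radius.
    import numpy as np

    def average_radial_gradient(img, centerline, radius, n_rays=16):
        gy, gx = np.gradient(img.astype(float))
        angles = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
        samples = []
        for r, c in centerline:
            for a in angles:
                rr = int(round(r + radius * np.sin(a)))
                cc = int(round(c + radius * np.cos(a)))
                if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]:
                    # project the gradient onto the outward radial direction
                    samples.append(gy[rr, cc] * np.sin(a) + gx[rr, cc] * np.cos(a))
        return float(np.mean(samples))

    img = np.zeros((64, 64))
    img[28:36, :] = 200.0                      # bright synthetic "vessel"
    centerline = [(32, c) for c in range(10, 54)]
    print(average_radial_gradient(img, centerline, radius=4))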
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxstrand, Johanna Helene; Ahmad Al Rashdan; Le Blanc, Katya Lee
The goal of the Automated Work Packages (AWP) project is to demonstrate how to enhance work quality, cost management, and nuclear safety through the use of advanced technology. The work described in this report is part of the digital architecture for a highly automated plant project of the technical program plan for advanced instrumentation, information, and control (II&C) systems technologies. This report addresses the DOE Milestone M2LW-15IN0603112: Describe the outcomes of field evaluations/demonstrations of the AWP prototype system and plant surveillance and communication framework requirements at host utilities. A brief background to the need for AWP research is provided, then two human factors field evaluation studies are described. These studies focus on the user experience of conducting a task (in this case a preventive maintenance and a surveillance test) while using an AWP system. The remaining part of the report describes an II&C effort to provide real-time status updates to the technician by wireless transfer of equipment indications and a dynamic user interface.
Automating annotation of information-giving for analysis of clinical conversation.
Mayfield, Elijah; Laws, M Barton; Wilson, Ira B; Penstein Rosé, Carolyn
2014-02-01
Coding of clinical communication for fine-grained features such as speech acts has produced a substantial literature. However, annotation by humans is laborious and expensive, limiting application of these methods. We aimed to show that through machine learning, computers could code certain categories of speech acts with sufficient reliability to make useful distinctions among clinical encounters. The data were transcripts of 415 routine outpatient visits of HIV patients which had previously been coded for speech acts using the Generalized Medical Interaction Analysis System (GMIAS); 50 had also been coded for larger scale features using the Comprehensive Analysis of the Structure of Encounters System (CASES). We aggregated selected speech acts into information-giving and requesting, then trained the machine to automatically annotate using logistic regression classification. We evaluated reliability by per-speech act accuracy. We used multiple regression to predict patient reports of communication quality from post-visit surveys using the patient and provider information-giving to information-requesting ratio (briefly, information-giving ratio) and patient gender. Automated coding produces moderate reliability with human coding (accuracy 71.2%, κ=0.57), with high correlation between machine and human prediction of the information-giving ratio (r=0.96). The regression significantly predicted four of five patient-reported measures of communication quality (r=0.263-0.344). The information-giving ratio is a useful and intuitive measure for predicting patient perception of provider-patient communication quality. These predictions can be made with automated annotation, which is a practical option for studying large collections of clinical encounters with objectivity, consistency, and low cost, providing greater opportunity for training and reflection for care providers.
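A minimal Python stand-in for this pipeline follows, with a bag-of-words logistic regression labeling utterances as information-giving versus other and an information-giving ratio computed per encounter; the training data are toy examples, not the GMIAS-coded corpus.

    # Train a tiny utterance classifier, then compute the ratio per visit.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    utterances = ["your viral load is undetectable",
                  "the new dose is 300 mg daily",
                  "how are you sleeping",
                  "any side effects since your last visit"]
    labels = [1, 1, 0, 0]          # 1 = information-giving, 0 = requesting

    vec = CountVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(utterances), labels)

    visit = ["your labs look stable", "do you take it with food"]
    pred = clf.predict(vec.transform(visit))
    giving = int(pred.sum())
    ratio = giving / max(len(pred) - giving, 1)   # information-giving ratio
    print(pred, f"ratio={ratio:.2f}")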
Lexical Diversity in Writing and Speaking Task Performances
ERIC Educational Resources Information Center
Yu, Guoxing
2010-01-01
In the rating scales of major international language tests, as well as in automated evaluation systems (e.g. e-rater), a positive relationship is often claimed between lexical diversity, holistic quality of written or spoken discourses, and language proficiency of candidates. This article reports an "a posteriori" validation study that analysed a…
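For concreteness, the simplest lexical-diversity measure of the kind such studies analyse is the type-token ratio (distinct words over total words); production systems use length-corrected variants, so this bare Python version is only illustrative.

    def type_token_ratio(text):
        tokens = text.lower().split()
        return len(set(tokens)) / len(tokens) if tokens else 0.0

    print(type_token_ratio("the cat sat on the mat"))   # 5 types / 6 tokens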
A Validity-Based Approach to Quality Control and Assurance of Automated Scoring
ERIC Educational Resources Information Center
Bejar, Isaac I.
2011-01-01
Automated scoring of constructed responses is already operational in several testing programmes. However, as the methodology matures and the demand for the utilisation of constructed responses increases, the volume of automated scoring is likely to increase at a fast pace. Quality assurance and control of the scoring process will likely be more…
Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S
2016-01-01
High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI yielding irreproducible results, from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed, by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.
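A skeleton of the SVM approach in Python with synthetic feature vectors follows; the in-house global and region-of-interest quality features used by the study are not public, so the features and labels below are simulated.

    # Learn a pass/fail quality label from per-volume image features.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 4))                   # simulated quality features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = usable, 0 = artifact

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print(f"held-out accuracy = {clf.score(X_te, y_te):.2f}")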
DOE Office of Scientific and Technical Information (OSTI.GOV)
Covington, E; Younge, K; Chen, X
Purpose: To evaluate the effectiveness of an automated plan check tool to improve first-time plan quality as well as standardize and document performance of physics plan checks. Methods: The Plan Checker Tool (PCT) uses the Eclipse Scripting API to check and compare data from the treatment planning system (TPS) and treatment management system (TMS). PCT was created to improve first-time plan quality, reduce patient delays, increase efficiency of our electronic workflow, and to standardize and partially automate plan checks in the TPS. A framework was developed which can be configured with different reference values and types of checks. One example is the prescribed dose check, where PCT flags the user when the planned dose and the prescribed dose disagree. PCT includes a comprehensive checklist of automated and manual checks that are documented when performed by the user. A PDF report is created and automatically uploaded into the TMS. Prior to and during PCT development, errors caught during plan checks and also patient delays were tracked in order to prioritize which checks should be automated. The most common and significant errors were determined. Results: Nineteen of 33 checklist items were automated, with data extracted with the PCT. These include checks for prescription, reference point, and machine scheduling errors, which are three of the top six causes of patient delays related to physics and dosimetry. Since the clinical roll-out, no delays have been due to errors that are automatically flagged by the PCT. Development continues to automate the remaining checks. Conclusion: With PCT, 57% of the physics plan checklist has been partially or fully automated. Treatment delays have declined since release of the PCT for clinical use. By tracking delays and errors, we have been able to measure the effectiveness of automating checks and are using this information to prioritize future development. This project was supported in part by P01CA059827.
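The general shape of such a check framework (compare each planning-system value with its management-system counterpart and flag disagreement) can be sketched in a few lines of Python; the check list and record fields below are invented, and the real tool is built on the Eclipse Scripting API.

    # Flag any plan parameter that disagrees between the TPS and the TMS.
    def check_equal(name, tps_value, tms_value):
        ok = tps_value == tms_value
        return f"{'PASS' if ok else 'FLAG'}: {name} (TPS={tps_value}, TMS={tms_value})"

    tps = {"prescribed_dose_cGy": 6000, "fractions": 30, "machine": "TB1"}
    tms = {"prescribed_dose_cGy": 6000, "fractions": 30, "machine": "TB2"}

    report = [check_equal(k, tps[k], tms[k]) for k in tps]
    print("\n".join(report))    # the real PCT uploads a PDF report to the TMS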
Fidalgo, Bruno M R; Crabb, David P; Lawrenson, John G
2015-05-01
To evaluate methodological and reporting quality of diagnostic accuracy studies of perimetry in glaucoma and to determine whether there had been any improvement since the publication of the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines. A systematic review of English language articles published between 1993 and 2013 reporting the diagnostic accuracy of perimetry in glaucoma. Articles were appraised for methodological quality using the 14-item Quality assessment tool for diagnostic accuracy studies (QUADAS) and evaluated for quality of reporting by applying the STARD checklist. Fifty-eight articles were appraised. Overall methodological quality of these studies was moderate with a median number of QUADAS items rated as 'yes' equal to nine (out of a maximum of 14) (IQR 7-10). The studies were often poorly reported; median score of STARD items fully reported was 11 out of 25 (IQR 10-14). A comparison of the studies published in 10-year periods before and after the publication of the STARD checklist in 2003 found quality of reporting had not substantially improved. Methodological and reporting quality of diagnostic accuracy studies of perimetry is sub-optimal and appears not to have improved substantially following the development of the STARD reporting guidance. This observation is consistent with previous studies in ophthalmology and in other medical specialities. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.
NASA Technical Reports Server (NTRS)
Modesitt, Kenneth L.
1987-01-01
Progress is reported on the development of SCOTTY, an expert knowledge-based system to automate the analysis procedure following test firings of the Space Shuttle Main Engine (SSME). The integration of a large-scale relational data base system, a computer graphics interface for experts and end-user engineers, potential extension of the system to flight engines, application of the system for training of newly-hired engineers, technology transfer to other engines, and the essential qualities of good software engineering practices for building expert knowledge-based systems are among the topics discussed.
In-situ Frequency Dependent Dielectric Sensing of Cure
NASA Technical Reports Server (NTRS)
Kranbuehl, David E.
1996-01-01
With the expanding use of polymeric materials as composite matrices, adhesives, coatings and films, the need to develop low cost, automated fabrication processes that produce consistently high quality parts is critical. Essential to the development of reliable, automated, intelligent processing is the ability to continuously monitor the changing state of the polymeric resin in-situ in the fabrication tool. This final report discusses work done on developing dielectric sensing to monitor polymeric material cure, and on the fundamental understanding of the underlying science for the use of frequency dependent dielectric sensors to monitor the cure process.
Computer Automated Ultrasonic Inspection System
1985-02-06
[Table-of-contents fragment; only section titles are recoverable: 3.1.4 Statistical Analysis Capability; 3.2 Nondestructive Evaluation Terminal Hardware; 3.3 Nondestructive Evaluation Terminal Vendor; 3.4.2.6 Create a Hold Tape; 3.4.3 System Status; 3.4.4 Statistical Analysis (3.4.4.1 Data Extraction; 3.4.4.2 Report and Display Generation); 3.4.5 Quality Assurance Reports; 3.4.6 Nondestructive Inspection.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purdie, Thomas G., E-mail: Tom.Purdie@rmp.uhn.on.ca; Department of Radiation Oncology, University of Toronto, Toronto, Ontario; Techna Institute, University Health Network, Toronto, Ontario
Purpose: To demonstrate the large-scale clinical implementation and performance of an automated treatment planning methodology for tangential breast intensity modulated radiation therapy (IMRT). Methods and Materials: Automated planning was used to prospectively plan tangential breast IMRT treatment for 1661 patients between June 2009 and November 2012. The automated planning method emulates the manual steps performed by the user during treatment planning, including anatomical segmentation, beam placement, optimization, dose calculation, and plan documentation. The user specifies clinical requirements of the plan to be generated through a user interface embedded in the planning system. The automated method uses heuristic algorithms to define and simplify the technical aspects of the treatment planning process. Results: Automated planning was used in 1661 of 1708 patients receiving tangential breast IMRT during the time interval studied. Therefore, automated planning was applicable in greater than 97% of cases. The time for treatment planning using the automated process is routinely 5 to 6 minutes on standard commercially available planning hardware. We have shown a consistent reduction in plan rejections from plan reviews through the standard quality control process or weekly quality review multidisciplinary breast rounds as we have automated the planning process for tangential breast IMRT. Clinical plan acceptance increased from 97.3% using our previous semiautomated inverse method to 98.9% using the fully automated method. Conclusions: Automation has become the routine standard method for treatment planning of tangential breast IMRT at our institution and is clinically feasible on a large scale. The method has wide clinical applicability and can add tremendous efficiency, standardization, and quality to the current treatment planning process. The use of automated methods can allow centers to more rapidly adopt IMRT and enhance access to the documented improvements in care for breast cancer patients, using technologies that are widely available and already in clinical use.
Purdie, Thomas G; Dinniwell, Robert E; Fyles, Anthony; Sharpe, Michael B
2014-11-01
To demonstrate the large-scale clinical implementation and performance of an automated treatment planning methodology for tangential breast intensity modulated radiation therapy (IMRT). Automated planning was used to prospectively plan tangential breast IMRT treatment for 1661 patients between June 2009 and November 2012. The automated planning method emulates the manual steps performed by the user during treatment planning, including anatomical segmentation, beam placement, optimization, dose calculation, and plan documentation. The user specifies clinical requirements of the plan to be generated through a user interface embedded in the planning system. The automated method uses heuristic algorithms to define and simplify the technical aspects of the treatment planning process. Automated planning was used in 1661 of 1708 patients receiving tangential breast IMRT during the time interval studied. Therefore, automated planning was applicable in greater than 97% of cases. The time for treatment planning using the automated process is routinely 5 to 6 minutes on standard commercially available planning hardware. We have shown a consistent reduction in plan rejections from plan reviews through the standard quality control process or weekly quality review multidisciplinary breast rounds as we have automated the planning process for tangential breast IMRT. Clinical plan acceptance increased from 97.3% using our previous semiautomated inverse method to 98.9% using the fully automated method. Automation has become the routine standard method for treatment planning of tangential breast IMRT at our institution and is clinically feasible on a large scale. The method has wide clinical applicability and can add tremendous efficiency, standardization, and quality to the current treatment planning process. The use of automated methods can allow centers to more rapidly adopt IMRT and enhance access to the documented improvements in care for breast cancer patients, using technologies that are widely available and already in clinical use. Copyright © 2014 Elsevier Inc. All rights reserved.
Joslin, John; Gilligan, James; Anderson, Paul; Garcia, Catherine; Sharif, Orzala; Hampton, Janice; Cohen, Steven; King, Miranda; Zhou, Bin; Jiang, Shumei; Trussell, Christopher; Dunn, Robert; Fathman, John W; Snead, Jennifer L; Boitano, Anthony E; Nguyen, Tommy; Conner, Michael; Cooke, Mike; Harris, Jennifer; Ainscow, Ed; Zhou, Yingyao; Shaw, Chris; Sipes, Dan; Mainquist, James; Lesley, Scott
2018-05-01
The goal of high-throughput screening is to enable screening of compound libraries in an automated manner to identify quality starting points for optimization. This often involves screening a large diversity of compounds in an assay that preserves a connection to the disease pathology. Phenotypic screening is a powerful tool for drug identification, in that assays can be run without prior understanding of the target and with primary cells that closely mimic the therapeutic setting. Advanced automation and high-content imaging have enabled many complex assays, but these are still relatively slow and low throughput. To address this limitation, we have developed an automated workflow that is dedicated to processing complex phenotypic assays for flow cytometry. The system can achieve a throughput of 50,000 wells per day, resulting in a fully automated platform that enables robust phenotypic drug discovery. Over the past 5 years, this screening system has been used for a variety of drug discovery programs, across many disease areas, with many molecules advancing quickly into preclinical development and into the clinic. This report will highlight a diversity of approaches that automated flow cytometry has enabled for phenotypic drug discovery.
NASA Technical Reports Server (NTRS)
Morgan, E. L.; Young, R. C.; Smith, M. D.; Eagleson, K. W.
1986-01-01
The objective of this study was to evaluate proposed design characteristics and applications of automated biomonitoring devices for real-time toxicity detection in water quality control on-board permanent space stations. Downlink transmissions of automated biomonitoring data to Earth-receiving stations were simulated using satellite data transmissions from remote Earth-based stations.
System for Computer Automated Typesetting (SCAT) of Computer Authored Texts.
1980-07-01
[Extraction fragment from the report's appendices: Appendix D, Code Sets; Appendix E, Sample of Programmed Instruction Demonstrating the Use of Typography (TAEG Report No. 8, Present Weather Symbols, NWS-AG-A-090, November 1979). The sample demonstrates the quality of typography available; a discussion of typesetting in general is beyond the scope of the report.]
Song, Ting; Li, Nan; Zarepisheh, Masoud; Li, Yongbao; Gautier, Quentin; Zhou, Linghong; Mell, Loren; Jiang, Steve; Cerviño, Laura
2016-01-01
Intensity-modulated radiation therapy (IMRT) currently plays an important role in radiotherapy, but its treatment plan quality can vary significantly among institutions and planners. Treatment plan quality control (QC) is a necessary component for individual clinics to ensure that patients receive treatments with high therapeutic gain ratios. The voxel-weighting factor-based plan re-optimization mechanism has been proven able to explore a larger Pareto surface (solution domain) and therefore increase the possibility of finding an optimal treatment plan. In this study, we incorporated additional modules into an in-house developed voxel weighting factor-based re-optimization algorithm, which was enhanced as a highly automated and accurate IMRT plan QC tool (TPS-QC tool). After importing an under-assessment plan, the TPS-QC tool was able to generate a QC report within 2 minutes. This QC report contains the plan quality determination as well as information supporting the determination. Finally, the IMRT plan quality can be controlled by approving quality-passed plans and replacing quality-failed plans using the TPS-QC tool. The feasibility and accuracy of the proposed TPS-QC tool were evaluated using 25 clinically approved cervical cancer patient IMRT plans and 5 manually created poor-quality IMRT plans. The results showed high consistency between the QC report quality determinations and the actual plan quality. In the 25 clinically approved cases that the TPS-QC tool identified as passed, a greater difference could be observed for dosimetric endpoints for organs at risk (OAR) than for planning target volume (PTV), implying that better dose sparing could be achieved in OAR than in PTV. In addition, the dose-volume histogram (DVH) curves of the TPS-QC tool re-optimized plans satisfied the dosimetric criteria more frequently than did the under-assessment plans. Moreover, the criteria for unsatisfied dosimetric endpoints in the 5 poor-quality plans could typically be satisfied when the TPS-QC tool generated re-optimized plans without sacrificing other dosimetric endpoints. Beyond its feasibility and accuracy, the proposed TPS-QC tool is also user-friendly and easy to operate, both of which are necessary characteristics for clinical use.
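A hedged sketch of the QC-report step: compare a plan's extracted dosimetric endpoints against criteria and emit a determination. The endpoint names and limits below are illustrative, not the paper's actual criteria:

```python
# Illustrative QC-report step: compare extracted dosimetric endpoints against
# criteria and emit a determination. Names and limits are not the paper's.
criteria = {"rectum_D50": 50.0, "bladder_D50": 65.0, "PTV_D95_min": 45.0}
plan = {"rectum_D50": 42.3, "bladder_D50": 60.1, "PTV_D95_min": 46.2}

def qc_report(plan_endpoints, criteria):
    lines, passed = [], True
    for name, limit in criteria.items():
        value = plan_endpoints[name]
        # endpoints suffixed "_min" are lower bounds; all others upper bounds
        ok = value >= limit if name.endswith("_min") else value <= limit
        passed = passed and ok
        lines.append(f"{name}: {value:.1f} Gy (limit {limit:.1f} Gy) "
                     + ("OK" if ok else "FAIL"))
    lines.append("PLAN QUALITY DETERMINATION: " + ("PASS" if passed else "FAIL"))
    return "\n".join(lines)

print(qc_report(plan, criteria))
```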
The verification testing was conducted at the Cl facility in North Las Vegas, NV, on July 17 and 18, 2001. During this period, engine emissions, fuel consumption, and fuel quality were evaluated with contaminated and cleaned fuel.
To facilitate this verification, JCH repre...
Automated Detection of Surgical Adverse Events from Retrospective Clinical Data
ERIC Educational Resources Information Center
Hu, Zhen
2017-01-01
The detection of surgical adverse events has become increasingly important with the growing demand for quality improvement and public health surveillance in surgery. Event reporting is one of the key steps in determining the impact of postoperative complications from a variety of perspectives and is an integral component of improving…
The effect of JPEG compression on automated detection of microaneurysms in retinal images
NASA Astrophysics Data System (ADS)
Cree, M. J.; Jelinek, H. F.
2008-02-01
As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts introduced are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images of various qualities of JPEG compression and the ability to predict the presence of diabetic retinopathy based on the detected presence of microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. The negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes, and this may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.
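The evaluation loop implied by this abstract can be sketched as follows; the microaneurysm detector below is a crude placeholder and the data are synthetic, so only the recompress-detect-score structure of such an experiment is meaningful:

```python
# Structure of a recompress-detect-score experiment; the "detector" is a crude
# placeholder and the data are synthetic, so only the loop shape is meaningful.
import io
import numpy as np
from PIL import Image
from sklearn.metrics import roc_auc_score

def recompress(img: Image.Image, quality: int) -> Image.Image:
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)

def count_microaneurysms(img: Image.Image) -> int:
    # placeholder: count unusually dark pixels in the green channel
    g = np.asarray(img)[:, :, 1].astype(float)
    return int((g < g.mean() - 3 * g.std()).sum())

images = [Image.new("RGB", (64, 64), (120, 60, 40))]  # stand-in retinal images
labels = [1]                                          # 1 = retinopathy present
for q in (100, 75, 50, 25):
    scores = [count_microaneurysms(recompress(im, q)) for im in images]
    # with a real labelled dataset: print(q, roc_auc_score(labels, scores))
```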
Zeng, G; Murphy, J; Annis, S-L; Wu, X; Wang, Y; McGowan, T; Macpherson, M
2012-07-01
To report a quality control program in prostate radiation therapy at our center that includes a semi-automated planning process to generate high quality plans and in-house software to track plan quality in the subsequent clinical application. Arc planning in Eclipse v10.0 was performed for both intact prostate and post-prostatectomy treatments. The planning focuses on DVH requirements and dose distributions being able to tolerate daily setup variations. A modified structure set is used to standardize the optimization, including short rectum and bladder in the fields to effectively tighten dose to target and a rectum expansion with 1cm cropped from PTV to block dose and shape posterior isodose lines. Structure, plan and optimization templates are used to streamline plan generation. DVH files are exported from Eclipse to a quality tracking software with GUI written in Matlab that can report the dose-volume data either for an individual patient or over a patient population. For 100 intact prostate patients treated with 78Gy, rectal D50, D25, D15 and D5 are 30.1±6.2Gy, 50.6±7.9Gy, 65.9±6.0Gy and 76.6±1.4Gy respectively, well below the limits of 50Gy, 65Gy, 75Gy and 78Gy. For prostate bed with prescription of 66Gy, rectal D50 is 35.9±6.9Gy. In both sites, PTV is covered by 95% prescription and the hotspots are less than 5%. The semi-automated planning method can efficiently create high quality plans while the tracking software can monitor the feedback from clinical application. It is a comprehensive and robust quality control program in radiation therapy. © 2012 American Association of Physicists in Medicine.
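A small sketch of the dose-volume tracking described here, assuming cumulative DVHs exported from the TPS. The Dx computation is standard, but the DVH arrays are synthetic; only the 50 Gy rectal D50 limit comes from the abstract:

```python
# Sketch of computing Dx (dose to the hottest x% of a structure) from a
# cumulative DVH and aggregating over a cohort; DVH arrays here are synthetic.
import numpy as np

def d_at_volume(dose_gy: np.ndarray, volume_pct: np.ndarray, x: float) -> float:
    """Interpolate the dose received by at least x% of the structure volume."""
    # cumulative DVH: volume_pct decreases as dose increases, so reverse both
    return float(np.interp(x, volume_pct[::-1], dose_gy[::-1]))

dose = np.linspace(0.0, 80.0, 161)                                # Gy axis
cohort_dvhs = [100.0 * np.exp(-dose / 25.0) for _ in range(100)]  # fake rectal DVHs

d50 = [d_at_volume(dose, v, 50.0) for v in cohort_dvhs]
print(f"rectal D50 = {np.mean(d50):.1f} ± {np.std(d50):.1f} Gy (limit 50 Gy)")
```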
Quality of dry chemistry testing.
Nakamura, H; Tatsumi, N
1999-01-01
Since the development of qualitative test paper for urine in the 1950s, several kinds of dry-state reagents and their automated analyzers have been developed. The term "dry chemistry" has been in use since the report on the development of quantitative test paper for serum bilirubin with a reflectometer at the end of the 1960s, and dry chemistry became known worldwide after the presentation on the development of multilayer film reagents for serum biochemical analytes by Eastman Kodak Co. at the 10th IFCC Meeting at the end of the 1970s. We report the test menu, results in external quality assessment, merits and demerits, and the future possibilities of dry chemistry.
1986-06-01
[Cover-page fragment: TRIMIS (Tri-Service Medical Information Systems) Program Office, 5401 Westbard Avenue, Bethesda, Maryland 20816; NDC Federal Systems, Inc., 1300 Piccard Drive, Rockville, Maryland.]
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-28
... Children (WIC) Forms: FNS-698, FNS-699, and FNS-700; The Integrity Profile (TIP) AGENCY: Food and Nutrition..., including the validity of the methodology and assumptions used; (c) ways to enhance the quality, utility and... Marianas, and the Virgin Islands. The reporting burden consists of three automated forms, the FNS-698, FNS...
Mischnik, Alexander; Mieth, Markus; Busch, Cornelius J; Hofer, Stefan; Zimmermann, Stefan
2012-08-01
Automation of plate streaking is ongoing in clinical microbiological laboratories, but its evaluation for routine use remains largely open. In the present study, the recovery of microorganisms from polyurethane (PU) swab samples plated by the Previ Isola system is compared to manually plated control viscose swab samples from wounds according to the CLSI procedure M40-A (quality control of microbiological transport systems). One hundred twelve paired samples (224 swabs) were analyzed. In 80/112 samples (71%), concordant culture results were obtained with the two methods. In 32/112 samples (29%), CFU recovery of microorganisms from the two methods was discordant. In 24 (75%) of the 32 paired samples with a discordant result, Previ Isola plated PU swabs were superior. In 8 (25%) of the 32 paired samples with a discordant result, control viscose swabs were superior. The quality of colony growth on culture media for further investigations was superior with Previ Isola inoculated plates compared to manual plating techniques. Gram stain results were concordant between the two methods in 62/112 samples (55%). In 50/112 samples (45%), the results of Gram staining were discordant between the two methods. In 34 (68%) of the 50 paired samples with discordant results, Gram staining of PU swabs was superior to that of control viscose swabs. In 16 (32%) of the 50 paired samples, Gram staining of control viscose swabs was superior to that of PU swabs. We report the first clinical evaluation of Previ Isola automated specimen inoculation for wound swab samples. This study suggests that use of an automated specimen inoculation system has good results with regard to CFU recovery, quality of Gram staining, and accuracy of diagnosis.
Development and Evaluation of a Measure of Library Automation.
ERIC Educational Resources Information Center
Pungitore, Verna L.
1986-01-01
Construct validity and reliability estimates indicate that a study designed to measure the utilization of automation in public and academic libraries was successful in tentatively identifying and measuring three subdimensions of level of automation: quality of hardware, method of software development, and number of automation specialists. Questionnaire…
Selecting automation for the clinical chemistry laboratory.
Melanson, Stacy E F; Lindeman, Neal I; Jarolim, Petr
2007-07-01
Laboratory automation proposes to improve the quality and efficiency of laboratory operations, and may provide a solution to the quality demands and staff shortages faced by today's clinical laboratories. Several vendors offer automation systems in the United States, with both subtle and obvious differences. Arriving at a decision to automate, and the ensuing evaluation of available products, can be time-consuming and challenging. Although considerable discussion concerning the decision to automate has been published, relatively little attention has been paid to the process of evaluating and selecting automation systems. To outline a process for evaluating and selecting automation systems as a reference for laboratories contemplating laboratory automation. Our Clinical Chemistry Laboratory staff recently evaluated all major laboratory automation systems in the United States, with their respective chemistry and immunochemistry analyzers. Our experience is described and organized according to the selection process, the important considerations in clinical chemistry automation, decisions and implementation, and we give conclusions pertaining to this experience. Including the formation of a committee, workflow analysis, submitting a request for proposal, site visits, and making a final decision, the process of selecting chemistry automation took approximately 14 months. We outline important considerations in automation design, preanalytical processing, analyzer selection, postanalytical storage, and data management. Selecting clinical chemistry laboratory automation is a complex, time-consuming process. Laboratories considering laboratory automation may benefit from the concise overview and narrative and tabular suggestions provided.
Parnia, Sam; Nasir, Asad; Ahn, Anna; Malik, Hanan; Yang, Jie; Zhu, Jiawen; Dorazi, Francis; Richman, Paul
2014-04-01
A major hurdle limiting the ability to improve the quality of resuscitation has been the lack of a noninvasive real-time detection system capable of monitoring the quality of cerebral and other organ perfusion, as well as oxygen delivery during cardiopulmonary resuscitation. Here, we report on a novel system of cerebral perfusion targeted resuscitation. An observational study evaluating the role of cerebral oximetry (Equanox; Nonin, Plymouth, MI, and Invos; Covidien, Mansfield, MA) as a real-time marker of cerebral perfusion and oxygen delivery together with the impact of an automated mechanical chest compression system (Life Stat; Michigan Instruments, Grand Rapids, MI) on oxygen delivery and return of spontaneous circulation following in-hospital cardiac arrest. Tertiary medical center. In-hospital cardiac arrest patients (n = 34). Cerebral oximetry provided real-time information regarding the quality of perfusion and oxygen delivery. The use of automated mechanical chest compression device (n = 12) was associated with higher regional cerebral oxygen saturation compared with manual chest compression device (n = 22) (53.1% ± 23.4% vs 24% ± 25%, p = 0.002). There was a significant difference in mean regional cerebral oxygen saturation (median % ± interquartile range) in patients who achieved return of spontaneous circulation (n = 15) compared with those without return of spontaneous circulation (n = 19) (47.4% ± 21.4% vs 23% ± 18.42%, p < 0.001). After controlling for patients achieving return of spontaneous circulation or not, significantly higher mean regional cerebral oxygen saturation levels during cardiopulmonary resuscitation were observed in patients who were resuscitated using automated mechanical chest compression device (p < 0.001). The integration of cerebral oximetry into cardiac arrest resuscitation provides a novel noninvasive method to determine the quality of cerebral perfusion and oxygen delivery to the brain. The use of automated mechanical chest compression device during in-hospital cardiac arrest may lead to improved oxygen delivery and organ perfusion.
NASA Astrophysics Data System (ADS)
Barufaldi, Bruno; Lau, Kristen C.; Schiabel, Homero; Maidment, D. A.
2015-03-01
Routine performance of basic test procedures and dose measurements are essential for assuring high quality of mammograms. International guidelines recommend that breast care providers ascertain that mammography systems produce a constant high quality image, using as low a radiation dose as is reasonably achievable. The main purpose of this research is to develop a framework to monitor radiation dose and image quality in a mixed breast screening and diagnostic imaging environment using an automated tracking system. This study presents a module of this framework, consisting of a computerized system to measure the image quality of the American College of Radiology mammography accreditation phantom. The methods developed combine correlation approaches, matched filters, and data mining techniques. These methods have been used to analyze radiological images of the accreditation phantom. The classification of structures of interest is based upon reports produced by four trained readers. As previously reported, human observers demonstrate great variation in their analysis due to the subjectivity of human visual inspection. The software tool was trained with three sets of 60 phantom images in order to generate decision trees using the software WEKA (Waikato Environment for Knowledge Analysis). When tested with 240 images during the classification step, the tool correctly classified 88%, 99%, and 98%, of fibers, speck groups and masses, respectively. The variation between the computer classification and human reading was comparable to the variation between human readers. This computerized system not only automates the quality control procedure in mammography, but also decreases the subjectivity in the expert evaluation of the phantom images.
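To make the two stages named above concrete, here is a hedged sketch: a matched-filter (normalized cross-correlation) score for a phantom structure feeding a decision-tree classifier trained on reader labels, in the spirit of the WEKA trees used in the study. The template, features, and labels are synthetic stand-ins:

```python
# Two-stage sketch: a matched-filter (normalized cross-correlation) score for a
# phantom structure, fed to a decision tree trained on reader labels. Template,
# features, and labels are synthetic stand-ins for the study's data.
import numpy as np
from skimage.feature import match_template
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
template = np.zeros((9, 9))
template[4, :] = 1.0                                   # crude "fiber" template

def fiber_score(roi: np.ndarray) -> float:
    return float(match_template(roi, template).max())  # peak cross-correlation

rois = rng.normal(size=(60, 32, 32))                   # placeholder phantom ROIs
X = np.array([[fiber_score(r), r.std()] for r in rois])
y = rng.integers(0, 2, size=60)                        # reader: visible or not
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("training accuracy:", tree.score(X, y))
```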
Transforming administrative data into real-time information in the Department of Surgery.
Beaulieu, Peter A; Higgins, John H; Dacey, Lawrence J; Nugent, William C; DeFoe, Gordon R; Likosky, Donald S
2010-10-01
Cardiothoracic surgical programmes face increasingly complex procedures performed on ever more challenging patients. Public and private stakeholders are demanding these programmes report process-level and clinical outcomes as a mechanism for enabling quality assurance and informed clinical decision-making. Increasingly these measures are being tied to reimbursement and institutional accreditation. The authors developed a system for linking administrative and clinical registries, in real-time, to track performance in satisfying the needs of the patients and stakeholders, as well as helping to drive continuous quality improvement. A relational surgical database was developed to link prospectively collected clinical data to administrative data sources at Dartmouth-Hitchcock Medical Center. Institutional performance was displayed over time using process control charts, and compared with both internal and regional benchmarks. Quarterly reports have been generated and automated for five surgical cohorts. Data are displayed externally on our dedicated website, and internally in the cardiothoracic surgical office suites, operating room theatre and nursing units. Monthly discussions are held with the clinical staff and have resulted in the development of quality-improvement projects. The delivery of clinical care in isolation of data and information is no longer prudent or acceptable. The present study suggests that an automated and real-time computer system may provide rich sources of data that may be used to drive improvements in the quality of care. Current and future work will be focused on identifying opportunities to integrate these data into the fabric of the delivery of care to drive process improvement.
A semi-automated tool for treatment plan-quality evaluation and clinical trial quality assurance
NASA Astrophysics Data System (ADS)
Wang, Jiazhou; Chen, Wenzhou; Studenski, Matthew; Cui, Yunfeng; Lee, Andrew J.; Xiao, Ying
2013-07-01
The goal of this work is to develop a plan-quality evaluation program for clinical routine and multi-institutional clinical trials so that the overall evaluation efficiency is improved. In multi-institutional clinical trials evaluating the plan quality is a time-consuming and labor-intensive process. In this note, we present a semi-automated plan-quality evaluation program which combines MIMVista, Java/MATLAB, and extensible markup language (XML). More specifically, MIMVista is used for data visualization; Java and its powerful function library are implemented for calculating dosimetry parameters; and to improve the clarity of the index definitions, XML is applied. The accuracy and the efficiency of the program were evaluated by comparing the results of the program with the manually recorded results in two RTOG trials. A slight difference of about 0.2% in volume or 0.6 Gy in dose between the semi-automated program and manual recording was observed. According to the criteria of indices, there are minimal differences between the two methods. The evaluation time is reduced from 10-20 min to 2 min by applying the semi-automated plan-quality evaluation program.
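A sketch of how XML-defined index definitions might drive the evaluation, as the note describes; the schema, index names, and plan values below are hypothetical:

```python
# Hypothetical XML-driven index definitions: each index names a structure, a
# dosimetric quantity, and a criterion; the evaluator checks extracted values.
import xml.etree.ElementTree as ET

INDICES_XML = """
<indices>
  <index name="PTV_V100" structure="PTV" metric="V100%" min="95"/>
  <index name="Cord_Dmax" structure="SpinalCord" metric="Dmax" max="45"/>
</indices>
"""

plan_values = {"PTV_V100": 96.2, "Cord_Dmax": 43.1}  # extracted from the plan

for idx in ET.fromstring(INDICES_XML):
    name = idx.get("name")
    value = plan_values[name]
    lo, hi = idx.get("min"), idx.get("max")
    ok = (lo is None or value >= float(lo)) and (hi is None or value <= float(hi))
    print(name, value, "OK" if ok else "OUT OF RANGE")
```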
Havel, Christof; Schreiber, Wolfgang; Trimmel, Helmut; Malzer, Reinhard; Haugk, Moritz; Richling, Nina; Riedmüller, Eva; Sterz, Fritz; Herkner, Harald
2010-01-01
Automated verbal and visual feedback improves quality of resuscitation in out-of-hospital cardiac arrest and was proven to increase short-term survival. Quality of resuscitation may be hampered in more difficult situations like emergency transportation. Currently there is no evidence if feedback devices can improve resuscitation quality during different modes of transportation. To assess the effect of real time automated feedback on the quality of resuscitation in an emergency transportation setting. Randomised cross-over trial. Medical University of Vienna, Vienna Municipal Ambulance Service and Helicopter Emergency Medical Service Unit (Christophorus Flugrettungsverein) in September 2007. European Resuscitation Council (ERC) certified health care professionals performing CPR in a flying helicopter and in a moving ambulance vehicle on a manikin with human-like chest properties. CPR sessions, with real time automated feedback as the intervention and standard CPR without feedback as control. Quality of chest compression during resuscitation. Feedback resulted in less deviation from the ideal compression rate of 100 min(-1) (9 ± 9 min(-1), p < 0.0001), with this effect becoming steadily larger over time. Applied work was less in the feedback group compared to controls (373 ± 448 cm × compression; p < 0.001). Feedback did not influence ideal compression depth significantly. There was some indication of a learning effect of the feedback device. Real time automated feedback improves certain aspects of CPR quality in flying helicopters and moving ambulance vehicles. The effect of feedback guidance was most pronounced for chest compression rate. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
Hergeth, Sebastian; Lorenz, Lutz; Vilimek, Roman; Krems, Josef F
2016-05-01
The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving. © 2016, Human Factors and Ergonomics Society.
NASA Astrophysics Data System (ADS)
Sturtevant, C.; Hackley, S.; Lee, R.; Holling, G.; Bonarrigo, S.
2017-12-01
Quality assurance and control (QA/QC) is one of the most important yet challenging aspects of producing research-quality data. Data quality issues are multi-faceted, including sensor malfunctions, unmet theoretical assumptions, and measurement interference from humans or the natural environment. Tower networks such as Ameriflux, ICOS, and NEON continue to grow in size and sophistication, yet tools for robust, efficient, scalable QA/QC have lagged. Quality control remains a largely manual process heavily relying on visual inspection of data. In addition, notes of measurement interference are often recorded on paper without an explicit pathway to data flagging. As such, an increase in network size requires a near-proportional increase in personnel devoted to QA/QC, quickly stressing the human resources available. We present a scalable QA/QC framework in development for NEON that combines the efficiency and standardization of automated checks with the power and flexibility of human review. This framework includes fast-response monitoring of sensor health, a mobile application for electronically recording maintenance activities, traditional point-based automated quality flagging, and continuous monitoring of quality outcomes and longer-term holistic evaluations. This framework maintains the traceability of quality information along the entirety of the data generation pipeline, and explicitly links field reports of measurement interference to quality flagging. Preliminary results show that data quality can be effectively monitored and managed for a multitude of sites with a small group of QA/QC staff. Several components of this framework are open-source, including an R-Shiny application for efficiently monitoring, synthesizing, and investigating data quality issues.
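The "traditional point-based automated quality flagging" mentioned above can be illustrated with simple range and step (spike) tests; the thresholds and bit-flag convention below are assumptions for the sketch:

```python
# Simple point-based flagging: range and step (spike) tests over a sensor
# series, returning a bit-flag per point. Thresholds are illustrative.
import numpy as np

def flag_points(x: np.ndarray, valid=(-40.0, 60.0), max_step=5.0) -> np.ndarray:
    flags = np.zeros(x.size, dtype=int)            # 0 = pass
    flags[(x < valid[0]) | (x > valid[1])] |= 1    # bit 1: range test
    step = np.abs(np.diff(x, prepend=x[0]))
    flags[step > max_step] |= 2                    # bit 2: step/spike test
    return flags

temps = np.array([12.1, 12.3, 30.0, 12.4, -55.0])  # °C, with a spike and dropout
print(flag_points(temps))                          # -> [0 0 2 2 3]
```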
Roy, Somak; Durso, Mary Beth; Wald, Abigail; Nikiforov, Yuri E; Nikiforova, Marina N
2014-01-01
A wide repertoire of bioinformatics applications exist for next-generation sequencing data analysis; however, certain requirements of the clinical molecular laboratory limit their use: i) comprehensive report generation, ii) compatibility with existing laboratory information systems and computer operating system, iii) knowledgebase development, iv) quality management, and v) data security. SeqReporter is a web-based application developed using ASP.NET framework version 4.0. The client-side was designed using HTML5, CSS3, and Javascript. The server-side processing (VB.NET) relied on interaction with a customized SQL server 2008 R2 database. Overall, 104 cases (1062 variant calls) were analyzed by SeqReporter. Each variant call was classified into one of five report levels: i) known clinical significance, ii) uncertain clinical significance, iii) pending pathologists' review, iv) synonymous and deep intronic, and v) platform and panel-specific sequence errors. SeqReporter correctly annotated and classified 99.9% (859 of 860) of sequence variants, including 68.7% synonymous single-nucleotide variants, 28.3% nonsynonymous single-nucleotide variants, 1.7% insertions, and 1.3% deletions. One variant of potential clinical significance was re-classified after pathologist review. Laboratory information system-compatible clinical reports were generated automatically. SeqReporter also facilitated quality management activities. SeqReporter is an example of a customized and well-designed informatics solution to optimize and automate the downstream analysis of clinical next-generation sequencing data. We propose it as a model that may envisage the development of a comprehensive clinical informatics solution. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
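The five report levels can be sketched as a simple rule cascade; the decision logic, gene/variant sets, and field names below are simplified stand-ins for SeqReporter's knowledgebase-driven classification:

```python
# Simplified stand-in for the five report levels; SeqReporter's actual rules
# are knowledgebase-driven, so the sets and fields here are illustrative only.
KNOWN_SIGNIFICANT = {("BRAF", "V600E"), ("KRAS", "G12D")}    # hypothetical
PLATFORM_ERRORS = {("TP53", "recurrent_artifact_1")}         # hypothetical

def report_level(gene, change, effect, deep_intronic=False):
    if (gene, change) in PLATFORM_ERRORS:
        return 5   # platform- and panel-specific sequence error
    if effect == "synonymous" or deep_intronic:
        return 4   # synonymous and deep intronic
    if (gene, change) in KNOWN_SIGNIFICANT:
        return 1   # known clinical significance
    if effect in ("missense", "insertion", "deletion"):
        return 2   # uncertain clinical significance
    return 3       # pending pathologists' review

print(report_level("BRAF", "V600E", "missense"))  # -> 1
```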
Elements of EAF automation processes
NASA Astrophysics Data System (ADS)
Ioana, A.; Constantin, N.; Dragna, E. C.
2017-01-01
Our article presents elements of Electric Arc Furnace (EAF) automation. We present and analyze in detail two automation schemes: the scheme of the electrical EAF automation system and the scheme of the thermal EAF automation system. The results of applying these automation schemes include: a substantial reduction in the specific consumption of electrical energy by the Electric Arc Furnace, increased productivity of the Electric Arc Furnace, improved quality of the steel produced, and increased durability of the structural elements of the Electric Arc Furnace.
MilxXplore: a web-based system to explore large imaging datasets.
Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J
2013-01-01
As large-scale medical imaging studies are becoming more common, there is an increasing reliance on automated software to extract quantitative information from these images. As the size of the cohorts keeps increasing with large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. MilxXplore is an open source visualization platform, which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user-friendly, collaborative and efficient way. Compared to existing software solutions that often provide an overview of the results at the subject's level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time, and comparing the results against the rest of the population. MilxXplore is fast, flexible and allows remote quality checks of processed imaging data, facilitating data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important to share and publish results of imaging analysis.
Building "e-rater"® Scoring Models Using Machine Learning Methods. Research Report. ETS RR-16-04
ERIC Educational Resources Information Center
Chen, Jing; Fife, James H.; Bejar, Isaac I.; Rupp, André A.
2016-01-01
The "e-rater"® automated scoring engine used at Educational Testing Service (ETS) scores the writing quality of essays. In the current practice, e-rater scores are generated via a multiple linear regression (MLR) model as a linear combination of various features evaluated for each essay and human scores as the outcome variable. This…
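The current-practice model described here is a multiple linear regression from essay features to human scores, which can be sketched as follows; the feature values and scores are synthetic, and the 1-6 holistic scale is an assumption for illustration:

```python
# Synthetic illustration of an MLR scoring model: a linear combination of essay
# features fit to human scores. Features and the 1-6 scale are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
features = rng.normal(size=(500, 8))      # e.g., grammar, usage, organization
true_w = rng.normal(size=8)
human_scores = np.clip(np.round(3.5 + features @ true_w * 0.5
                                + rng.normal(size=500) * 0.3), 1, 6)

mlr = LinearRegression().fit(features, human_scores)
print("predicted score for first essay:",
      round(float(mlr.predict(features[:1])[0]), 2))
```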
Automation of testing modules of controller ELSY-TMK
NASA Astrophysics Data System (ADS)
Dolotov, A. E.; Dolotova, R. G.; Petuhov, D. V.; Potapova, A. P.
2017-01-01
Modern means for the automation of various processes make it possible to maintain high quality standards for released products and to raise labour efficiency. This paper presents data on the automation of the test process for the ELSY-TMK controller [1]. The ELSY-TMK programmable logic controller is an effective modular platform for the construction of automation systems for small and medium-sized industrial production. The controller's modern, functional communication standard and open environment provide a powerful tool for a wide spectrum of industrial automation applications. The algorithm allows controller modules to be tested, by operating the switching system and external devices, faster and at a higher level of quality than a human could achieve without such means.
Novel, simple and fast automated synthesis of 18F-choline in a single Synthera module
NASA Astrophysics Data System (ADS)
Litman, Y.; Pace, P.; Silva, L.; Hormigo, C.; Caro, R.; Gutierrez, H.; Bastianello, M.; Casale, G.
2012-12-01
The aim of this work is to develop a method to produce 18F-fluorocholine in a single Synthera module with high yield, quality and reproducibility. We give special importance to the details of the drying and distillation procedures. After 5 syntheses we report a decay-corrected yield of (27 ± 2)% (mean ± S.D.). The radiochemical purity was > 95%, and the other quality control parameters were within the specifications. The 18F-fluorocholine product was administered to 17 humans with no observed side-effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, K.W.
1994-12-01
This is one of a series of topical reports dealing with the strategic, technical, and market development of home automation. Particular emphasis is placed upon identifying those aspects of home automation that will impact the gas industry and gas products. Communication standards, market drivers, key organizations, technical implementation, product opportunities, and market growth projections will all be addressed in this or subsequent reports. These reports will also discuss how the gas industry and gas-fired equipment can use home automation technology to benefit the consumer.
NASA Astrophysics Data System (ADS)
Golobokov, M.; Danilevich, S.
2018-04-01
In order to assess calibration reliability and to automate such assessment, procedures for data collection and a simulation study of the thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing the existing calibration techniques and developing new, efficient ones has been suggested and tested. A type of software has been studied that automatically generates instrument calibration reports, monitors their proper configuration, processes measurement results, and assesses instrument validity. The use of such software reduces the man-hours spent on finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.
Comparing clinical automated, medical record, and hybrid data sources for diabetes quality measures.
Kerr, Eve A; Smith, Dylan M; Hogan, Mary M; Krein, Sarah L; Pogach, Leonard; Hofer, Timothy P; Hayward, Rodney A
2002-10-01
Little is known about the relative reliability of medical record and clinical automated data, sources commonly used to assess diabetes quality of care. The agreement between diabetes quality measures constructed from clinical automated versus medical record data sources was compared, and the performance of hybrid measures derived from a combination of the two data sources was examined. Medical records were abstracted for 1,032 patients with diabetes who received care from 21 facilities in 4 Veterans Integrated Service Networks. Automated data were obtained from a central Veterans Health Administration diabetes registry containing information on laboratory tests and medication use. Success rates were higher for process measures derived from medical record data than from automated data, but no substantial differences among data sources were found for the intermediate outcome measures. Agreement for measures derived from the medical record compared with automated data was moderate for process measures but high for intermediate outcome measures. Hybrid measures yielded success rates similar to those of medical record-based measures but would have required about 50% fewer chart reviews. Agreement between medical record and automated data was generally high. Yet even in an integrated health care system with sophisticated information technology, automated data tended to underestimate the success rate in technical process measures for diabetes care and yielded different quartile performance rankings for facilities. Applying hybrid methodology yielded results consistent with the medical record but required less data to come from medical record reviews.
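The hybrid approach can be sketched as "automated source first, chart review only where the automated data cannot resolve the measure", which is how roughly half the chart reviews could be avoided; the data structures below are toy stand-ins:

```python
# Toy sketch of a hybrid measure: use the automated source where it resolves
# the measure and fall back to chart review only for the remainder.
def hybrid_measure(automated, chart):
    """Inputs map patient -> True/False, or None when not ascertainable."""
    successes, reviews = 0, 0
    for patient, value in automated.items():
        if value is None:          # only now is a chart review needed
            reviews += 1
            value = chart[patient]
        successes += bool(value)
    return successes / len(automated), reviews

automated = {"a": True, "b": None, "c": False, "d": None}
chart = {"b": True, "d": False}
rate, reviews = hybrid_measure(automated, chart)
print(f"success rate {rate:.2f} using {reviews} chart reviews instead of 4")
```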
Colometer: a real-time quality feedback system for screening colonoscopy.
Filip, Dobromir; Gao, Xuexin; Angulo-Rodríguez, Leticia; Mintchev, Martin P; Devlin, Shane M; Rostom, Alaa; Rosen, Wayne; Andrews, Christopher N
2012-08-28
To investigate the performance of a new software-based colonoscopy quality assessment system. The software-based system employs a novel image processing algorithm which detects the levels of image clarity, withdrawal velocity, and level of the bowel preparation in a real-time fashion from live video signal. Threshold levels of image blurriness and the withdrawal velocity below which the visualization could be considered adequate have initially been determined arbitrarily by review of sample colonoscopy videos by two experienced endoscopists. Subsequently, an overall colonoscopy quality rating was computed based on the percentage of the withdrawal time with adequate visualization (scored 1-5; 1, when the percentage was 1%-20%; 2, when the percentage was 21%-40%, etc.). In order to test the proposed velocity and blurriness thresholds, screening colonoscopy withdrawal videos from a specialized ambulatory colon cancer screening center were collected, automatically processed and rated. Quality ratings on the withdrawal were compared to the insertion in the same patients. Then, 3 experienced endoscopists reviewed the collected videos in a blinded fashion and rated the overall quality of each withdrawal (scored 1-5; 1, poor; 3, average; 5, excellent) based on 3 major aspects: image quality, colon preparation, and withdrawal velocity. The automated quality ratings were compared to the averaged endoscopist quality ratings using the Spearman correlation coefficient. Fourteen screening colonoscopies were assessed. Adenomatous polyps were detected in 4/14 (29%) of the collected colonoscopy video samples. As a proof of concept, the Colometer software rated colonoscope withdrawal as having better visualization than the insertion in the 10 videos which did not have any polyps (average percent time with adequate visualization: 79% ± 5% for withdrawal and 50% ± 14% for insertion, P < 0.01). Withdrawal times during which no polyps were removed ranged from 4-12 min. The median quality rating from the automated system and the reviewers was 3.45 [interquartile range (IQR), 3.1-3.68] and 3.00 (IQR, 2.33-3.67) respectively for all colonoscopy video samples. The automated rating revealed a strong correlation with the reviewer's rating (ρ = 0.65, P = 0.01). There was good correlation of the automated overall quality rating and the mean endoscopist withdrawal speed rating (Spearman r = 0.59, P = 0.03). There was no correlation of automated overall quality rating with mean endoscopists image quality rating (Spearman r = 0.41, P = 0.15). The results from a novel automated real-time colonoscopy quality feedback system strongly agreed with the endoscopists' quality assessments. Further study is required to validate this approach.
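The rating rule stated in the abstract (1 for 1-20% adequate visualization, 2 for 21-40%, and so on) can be sketched directly; the blurriness and velocity thresholds and the per-frame data below are placeholders, not the Colometer's calibrated values:

```python
# Direct sketch of the stated rating rule; blur/velocity thresholds and the
# per-frame data are placeholders, not the Colometer's calibrated values.
import math

def withdrawal_rating(blurriness, velocity, blur_max=0.6, vel_max=1.0):
    adequate = [b <= blur_max and v <= vel_max
                for b, v in zip(blurriness, velocity)]
    pct = 100.0 * sum(adequate) / len(adequate)
    return max(1, min(5, math.ceil(pct / 20))), pct  # 1: 1-20% ... 5: 81-100%

score, pct = withdrawal_rating([0.2, 0.9, 0.4, 0.3], [0.5, 0.4, 1.8, 0.6])
print(f"{pct:.0f}% of frames adequate -> rating {score}")
```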
Automated measurement of cell motility and proliferation
Bahnson, Alfred; Athanassiou, Charalambos; Koebler, Douglas; Qian, Lei; Shun, Tongying; Shields, Donna; Yu, Hui; Wang, Hong; Goff, Julie; Cheng, Tao; Houck, Raymond; Cowsert, Lex
2005-01-01
Background Time-lapse microscopic imaging provides a powerful approach for following changes in cell phenotype over time. Visible responses of whole cells can yield insight into functional changes that underlie physiological processes in health and disease. For example, features of cell motility accompany molecular changes that are central to the immune response, to carcinogenesis and metastasis, to wound healing and tissue regeneration, and to the myriad developmental processes that generate an organism. Previously reported image processing methods for motility analysis required custom viewing devices and manual interactions that may introduce bias, that slow throughput, and that constrain the scope of experiments in terms of the number of treatment variables, time period of observation, replication and statistical options. Here we describe a fully automated system in which images are acquired 24/7 from 384 well plates and are automatically processed to yield high-content motility and morphological data. Results We have applied this technology to study the effects of different extracellular matrix compounds on human osteoblast-like cell lines to explore functional changes that may underlie processes involved in bone formation and maintenance. We show dose-response and kinetic data for induction of increased motility by laminin and collagen type I without significant effects on growth rate. Differential motility response was evident within 4 hours of plating cells; long-term responses differed depending upon cell type and surface coating. Average velocities were increased approximately 0.1 um/min by ten-fold increases in laminin coating concentration in some cases. Comparison with manual tracking demonstrated the accuracy of the automated method and highlighted the comparative imprecision of human tracking for analysis of cell motility data. Quality statistics are reported that associate with stage noise, interference by non-cell objects, and uncertainty in the outlining and positioning of cells by automated image analysis. Exponential growth, as monitored by total cell area, did not linearly correlate with absolute cell number, but proved valuable for selection of reliable tracking data and for disclosing between-experiment variations in cell growth. Conclusion These results demonstrate the applicability of a system that uses fully automated image acquisition and analysis to study cell motility and growth. Cellular motility response is determined in an unbiased and comparatively high throughput manner. Abundant ancillary data provide opportunities for uniform filtering according to criteria that select for biological relevance and for providing insight into features of system performance. Data quality measures have been developed that can serve as a basis for the design and quality control of experiments that are facilitated by automation and the 384 well plate format. This system is applicable to large-scale studies such as drug screening and research into effects of complex combinations of factors and matrices on cell phenotype. PMID:15831094
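The velocity read-out behind such motility analysis can be sketched as the mean frame-to-frame centroid displacement over time; the track coordinates and 15-minute imaging interval below are made up:

```python
# Sketch of the velocity read-out: mean speed from an automatically tracked
# centroid path. The coordinates and 15-minute frame interval are made up.
import numpy as np

def mean_speed_um_per_min(track_xy_um: np.ndarray, interval_min: float) -> float:
    steps = np.linalg.norm(np.diff(track_xy_um, axis=0), axis=1)
    return float(steps.sum() / (interval_min * (len(track_xy_um) - 1)))

track = np.array([[0.0, 0.0], [1.2, 0.5], [2.0, 1.4], [2.6, 2.5]])  # um
print(f"{mean_speed_um_per_min(track, 15.0):.3f} um/min")  # ~0.1 um/min scale
```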
NASA Astrophysics Data System (ADS)
Srivastava, Vishal; Dalal, Devjyoti; Kumar, Anuj; Prakash, Surya; Dalal, Krishna
2018-06-01
Moisture content is an important feature of fruits and vegetables. Since about 80% of an apple's content is water, decreasing the moisture content will degrade the quality of apples (Golden Delicious). The computational and texture features of the apples were extracted from optical coherence tomography (OCT) images. A support vector machine with a Gaussian kernel model was used to perform automated classification. Our proposed method opens up the possibility of fully automated quantitative analysis, based on the morphological features of apples, for evaluating the quality of wax-coated apples during storage in vivo. Our results demonstrate that the analysis of the computational and texture features of OCT images may be a good non-destructive method for the assessment of the quality of apples.
Upgrades to the NOAA/NESDIS automated Cloud-Motion Vector system
NASA Technical Reports Server (NTRS)
Nieman, Steve; Menzel, W. Paul; Hayden, Christopher M.; Wanzong, Steve; Velden, Christopher S.
1993-01-01
The latest version of the automated cloud motion vector software has yielded significant improvements in the quality of the GOES cloud-drift winds produced operationally by NESDIS. Cloud motion vectors resulting from the automated system are now equal to or superior in quality to those which, a few years ago, had the benefit of manual quality control. The single most important factor in this improvement has been the upgraded auto-editor. Improved tracer selection procedures eliminate targets in difficult regions and allow a higher target density, and therefore enhanced coverage, in areas of interest. The incorporation of the H2O-intercept height assignment method allows an adequate representation of the heights of semi-transparent clouds in the absence of a CO2-absorption channel. Finally, GOES-8 water-vapor motion winds resulting from the automated system are superior to any done previously by NESDIS and should now be considered an operational product.
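The operational tracer-tracking code is far more elaborate, but its core step, locating a cloud target from one image in the next, can be sketched as block matching. The following is a simplified illustration under assumed inputs (two co-registered grayscale image arrays); it is not the NESDIS algorithm.

```python
import numpy as np

def track_tracer(img0, img1, y, x, half=8, search=5):
    """Displacement of a (2*half)^2 tracer patch from img0 to img1, found by
    minimizing the sum of squared differences over a small search box."""
    tmpl = img0[y - half:y + half, x - half:x + half].astype(float)
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img1[y + dy - half:y + dy + half,
                        x + dx - half:x + dx + half].astype(float)
            score = np.sum((cand - tmpl) ** 2)
            if score < best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx   # pixel offset; scale by resolution/time for a wind
```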
Improvement of Computer Software Quality through Software Automated Tools.
1986-08-31
The requirement for increased emphasis on software quality assurance has led to the creation of various methods of verification and validation. Experience...result was a vast array of methods, systems, languages and automated tools to assist in the process. Given that the primary role of quality assurance is...Unfortunately, there is no single method, tool or technique that can ensure accurate, reliable and cost-effective software. Therefore, government and industry
1987-06-01
commercial products. Typical cutout at a plumbline location where an automated monitoring system has been installed. The sensor used with the...This report provides a description of commercially available sensors, instruments, and ADP equipment that may be selected to fully automate...automated. The automated plumbline monitoring system includes up to twelve sensors, repeaters, a system controller, and a printer. The system may
Van Eaton, Erik G; Devlin, Allison B; Devine, Emily Beth; Flum, David R; Tarczy-Hornoch, Peter
2014-01-01
Delivering more appropriate, safer, and highly effective health care is the goal of a learning health care system. The Agency for Healthcare Research and Quality (AHRQ) funded enhanced registry projects: (1) to create and analyze valid data for comparative effectiveness research (CER); and (2) to enhance the ability to monitor and advance clinical quality improvement (QI). This case report describes barriers and solutions from one state-wide enhanced registry project. The Comparative Effectiveness Research and Translation Network (CERTAIN) deployed the commercially available Amalga Unified Intelligence System™ (Amalga) as a central data repository to enhance an existing QI registry (the Automation Project). An eight-step implementation process included hospital recruitment, technical electronic health record (EHR) review, hospital-specific interface planning, data ingestion, and validation. Data ownership and security protocols were established, along with formal methods to separate data management for QI purposes and research purposes. Sustainability would come from lowered chart review costs and the hospital's desire to invest in the infrastructure after trying it. CERTAIN approached 19 hospitals in Washington State operating within 12 unaffiliated health care systems for the Automation Project. Five of the 19 completed all implementation steps. Four hospitals did not participate due to lack of perceived institutional value. Ten hospitals did not participate because their information technology (IT) departments were oversubscribed (e.g., too busy with Meaningful Use upgrades). One organization representing 22 additional hospitals expressed interest, but was unable to overcome data governance barriers in time. Questions about data use for QI versus research were resolved in a widely adopted project framework. Hospitals restricted data delivery to a subset of patients, introducing substantial technical challenges. Overcoming challenges of idiosyncratic EHR implementations required each hospital to devote more IT resources than were predicted. Cost savings did not meet projections because of the increased IT resource requirements and a different source of lowered chart review costs. CERTAIN succeeded in recruiting unaffiliated hospitals into the Automation Project to create an enhanced registry to achieve AHRQ goals. This case report describes several distinct barriers to central data aggregation for QI and CER across unaffiliated hospitals: (1) competition for limited on-site IT expertise, (2) concerns about data use for QI versus research, (3) restrictions on data automation to a defined subset of patients, and (4) unpredictable resource needs because of idiosyncrasies among unaffiliated hospitals in how EHR data are coded, stored, and made available for transmission-even between hospitals using the same vendor's EHR. Therefore, even a fully optimized automation infrastructure would still not achieve complete automation. The Automation Project was unable to align sufficiently with internal hospital objectives, so it could not show a compelling case for sustainability.
Achieving and Sustaining Automated Health Data Linkages for Learning Systems: Barriers and Solutions
Van Eaton, Erik G.; Devlin, Allison B.; Devine, Emily Beth; Flum, David R.; Tarczy-Hornoch, Peter
2014-01-01
Introduction: Delivering more appropriate, safer, and highly effective health care is the goal of a learning health care system. The Agency for Healthcare Research and Quality (AHRQ) funded enhanced registry projects: (1) to create and analyze valid data for comparative effectiveness research (CER); and (2) to enhance the ability to monitor and advance clinical quality improvement (QI). This case report describes barriers and solutions from one state-wide enhanced registry project. Methods: The Comparative Effectiveness Research and Translation Network (CERTAIN) deployed the commercially available Amalga Unified Intelligence System™ (Amalga) as a central data repository to enhance an existing QI registry (the Automation Project). An eight-step implementation process included hospital recruitment, technical electronic health record (EHR) review, hospital-specific interface planning, data ingestion, and validation. Data ownership and security protocols were established, along with formal methods to separate data management for QI purposes and research purposes. Sustainability would come from lowered chart review costs and the hospital’s desire to invest in the infrastructure after trying it. Findings: CERTAIN approached 19 hospitals in Washington State operating within 12 unaffiliated health care systems for the Automation Project. Five of the 19 completed all implementation steps. Four hospitals did not participate due to lack of perceived institutional value. Ten hospitals did not participate because their information technology (IT) departments were oversubscribed (e.g., too busy with Meaningful Use upgrades). One organization representing 22 additional hospitals expressed interest, but was unable to overcome data governance barriers in time. Questions about data use for QI versus research were resolved in a widely adopted project framework. Hospitals restricted data delivery to a subset of patients, introducing substantial technical challenges. Overcoming challenges of idiosyncratic EHR implementations required each hospital to devote more IT resources than were predicted. Cost savings did not meet projections because of the increased IT resource requirements and a different source of lowered chart review costs. Discussion: CERTAIN succeeded in recruiting unaffiliated hospitals into the Automation Project to create an enhanced registry to achieve AHRQ goals. This case report describes several distinct barriers to central data aggregation for QI and CER across unaffiliated hospitals: (1) competition for limited on-site IT expertise, (2) concerns about data use for QI versus research, (3) restrictions on data automation to a defined subset of patients, and (4) unpredictable resource needs because of idiosyncrasies among unaffiliated hospitals in how EHR data are coded, stored, and made available for transmission—even between hospitals using the same vendor’s EHR. Therefore, even a fully optimized automation infrastructure would still not achieve complete automation. The Automation Project was unable to align sufficiently with internal hospital objectives, so it could not show a compelling case for sustainability. PMID:25848606
Fisch, Clifford B.; Fisch, Martin L.
1979-01-01
The Stanley S. Lamm Institute for Developmental Disabilities of The Long Island College Hospital, in conjunction with Micro-Med Systems has developed a low cost micro-computer based information system (ADDOP TRS) which monitors quality of care in outpatient settings rendering services to the developmentally disabled population. The process of conversion from paper record keeping systems to direct key-to-disk data capture at the point of service delivery is described. Data elements of the information system including identifying patient information, coded and English-grammar entry procedures for tracking elements of service as well as their delivery status are described. Project evaluation criteria are defined including improved quality of care, improved productivity for clerical and professional staff and enhanced decision making capability. These criteria are achieved in a cost effective manner as a function of more efficient information flow. Administrative applications including staff/budgeting procedures, submissions for third party reimbursement and case reporting to utilization review committees are considered.
The Implementation of an Automated Assessment Feedback and Quality Assurance System for ICT Courses
ERIC Educational Resources Information Center
Debuse, J.; Lawley, M.; Shibl, R.
2007-01-01
Providing detailed, constructive and helpful feedback is an important contribution to effective student learning. Quality assurance is also required to ensure consistency across all students and reduce error rates. However, with increasing workloads and student numbers these goals are becoming more difficult to achieve. An automated feedback…
Automated quality control in a file-based broadcasting workflow
NASA Astrophysics Data System (ADS)
Zhang, Lina
2014-04-01
Benefiting from the development of information and internet technologies, television broadcasting is transforming from inefficient tape-based production and distribution to integrated file-based workflows. However, no matter how many changes have taken place, successful broadcasting still depends on the ability to deliver a consistent, high-quality signal to the audience. After the transition from tape to file, traditional methods of manual quality control (QC) become inadequate, subjective, and inefficient. Based on China Central Television's fully file-based workflow at its new site, this paper introduces an automated quality control test system for accurate detection of hidden problems in media content. It discusses the system framework and workflow control when automated QC is added, puts forward a QC criterion, and presents QC software that follows this criterion. It also reports experiments on QC speed using parallel processing and distributed computing. The performance of the test system shows that the adoption of automated QC can make production effective and efficient, and help the station achieve a competitive advantage in the media market.
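The paper's QC software is proprietary; the fragment below only illustrates the parallel-processing idea it mentions, distributing per-file QC checks across worker processes. The file names and the check body are placeholders, not the described system.

```python
from concurrent.futures import ProcessPoolExecutor

def qc_check(path):
    """Run one file's QC detectors (e.g., black frames, silence, dropouts);
    placeholder body returning (path, issues)."""
    issues = []
    # ... decode the media file, run detectors, append findings to `issues` ...
    return path, issues

if __name__ == "__main__":
    files = ["news_0415.mxf", "doc_0415.mxf"]      # hypothetical media files
    with ProcessPoolExecutor() as pool:
        for path, issues in pool.map(qc_check, files):
            print(path, "OK" if not issues else issues)
```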
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, J; Christianson, O; Samei, E
Purpose: Flood-field uniformity evaluation is an essential element in the assessment of nuclear medicine (NM) gamma cameras. It serves as the central element of the quality control (QC) program, acquired and analyzed on a daily basis prior to clinical imaging. Uniformity images are traditionally analyzed using pixel value-based metrics, which often fail to capture subtle structure and patterns caused by changes in gamma camera performance, requiring additional visual inspection that is subjective and time demanding. The goal of this project was to develop and implement a robust QC metrology for NM that is effective in identifying non-uniformity issues, reporting issues in a timely manner for efficient correction prior to clinical involvement, all incorporated into an automated effortless workflow, and to characterize the program over a two year period. Methods: A new quantitative uniformity analysis metric was developed based on 2D noise power spectrum metrology and confirmed based on expert observer visual analysis. The metric, termed Structured Noise Index (SNI), was then integrated into an automated program to analyze, archive, and report on daily NM QC uniformity images. The effectiveness of the program was evaluated over a period of 2 years. Results: The SNI metric successfully identified visually apparent non-uniformities overlooked by the pixel value-based analysis methods. Implementation of the program has resulted in non-uniformity identification in about 12% of daily flood images. In addition, due to the vigilance of staff response, the percentage of days exceeding the trigger value shows a decline over time. Conclusion: The SNI provides a robust quantification of the NM performance of gamma camera uniformity. It operates seamlessly across a fleet of multiple camera models. The automated process provides effective workflow within the NM spectra between physicist, technologist, and clinical engineer. The reliability of this process has made it the preferred platform for NM uniformity analysis.
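The published SNI is defined on the 2D noise power spectrum (NPS) of the flood image. As a rough illustration of that starting point, one can estimate the NPS from a detrended flood image via the 2D FFT; the detrending and the SNI weighting itself are more careful in the paper, so treat this as an assumption-laden sketch.

```python
import numpy as np

def noise_power_spectrum(flood):
    """2D NPS of a flood-field image: subtract the mean response and take the
    squared magnitude of the centered 2D FFT (crude detrending)."""
    flood = np.asarray(flood, dtype=float)
    residual = flood - flood.mean()
    return np.abs(np.fft.fftshift(np.fft.fft2(residual))) ** 2 / residual.size

flood = np.random.poisson(1000, size=(256, 256))   # synthetic uniform flood
nps = noise_power_spectrum(flood)
# structured non-uniformity would appear as excess low-frequency power
print(nps.shape)
```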
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Wei, Jun; Kazerooni, Ella A.
2016-03-01
We are developing an automated method to identify the best-quality segment among the corresponding segments in multiple-phase cCTA. The coronary artery trees are automatically extracted from different cCTA phases using our multi-scale vessel segmentation and tracking method. An automated registration method is then used to align the multiple-phase artery trees. The corresponding coronary artery segments are identified in the registered vessel trees and are straightened by curved planar reformation (CPR). Four features are extracted from each segment in each phase as quality indicators in the original CT volume and the straightened CPR volume. Each quality indicator is used as a voting classifier to vote on the corresponding segments. A newly designed weighted voting ensemble (WVE) classifier is finally used to determine the best-quality coronary segment. An observer preference study was conducted with three readers who visually rated the quality of the vessels on a 1-to-6 ranking scale. Six and 10 cCTA cases were used as the training and test sets in this preliminary study. For the 10 test cases, the agreement between automatically identified best-quality (AI-BQ) segments and the radiologist's top 2 rankings is 79.7%, and between AI-BQ and the other two readers, 74.8% and 83.7%, respectively. The results demonstrated that the performance of our automated method was comparable to that of experienced readers for identification of the best-quality coronary segments.
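The exact weighting scheme of the WVE classifier is specific to the paper, but the voting mechanism it describes can be sketched as follows: each quality indicator casts a weighted vote for the phase in which its value is best. The indicator names, weights, and scores below are invented for illustration.

```python
import numpy as np

def best_quality_phase(scores, weights):
    """scores: (n_phases, n_indicators), higher = better quality;
    weights: (n_indicators,) voting weights. Returns the winning phase index."""
    scores = np.asarray(scores, dtype=float)
    votes = np.zeros(scores.shape[0])
    for j, w in enumerate(weights):
        votes[np.argmax(scores[:, j])] += w   # indicator j votes for its top phase
    return int(np.argmax(votes))

# 3 phases x 4 indicators (e.g., contrast, sharpness, continuity, noise)
scores = [[0.6, 0.7, 0.5, 0.4],
          [0.9, 0.6, 0.8, 0.7],
          [0.5, 0.8, 0.6, 0.6]]
print(best_quality_phase(scores, weights=[0.4, 0.2, 0.25, 0.15]))   # -> 1
```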
Linking quality indicators to clinical trials: an automated approach
Coiera, Enrico; Choong, Miew Keen; Tsafnat, Guy; Hibbert, Peter; Runciman, William B.
2017-01-01
Objective: Quality improvement of health care requires robust measurable indicators to track performance. However, identifying which indicators are supported by strong clinical evidence, typically from clinical trials, is often laborious. This study tests a novel method for automatically linking indicators to clinical trial registrations. Design: A set of 522 quality of care indicators for 22 common conditions drawn from the CareTrack study was automatically mapped to outcome measures reported in 13 971 trials from ClinicalTrials.gov. Intervention: Text mining methods extracted phrases mentioning indicators and outcome phrases, and these were compared using the Levenshtein edit distance ratio to measure similarity. Main Outcome Measure: Number of care indicators that mapped to outcome measures in clinical trials. Results: While only 13% of the 522 CareTrack indicators were thought to have Level I or II evidence behind them, 353 (68%) could be directly linked to randomized controlled trials. Within these 522, 50 of 70 (71%) Level I and II evidence-based indicators, and 268 of 370 (72%) Level V (consensus-based) indicators, could be linked to evidence. Of the indicators known to have evidence behind them, only 5.7% (4 of 70) were mentioned in the trial reports but were missed by our method. Conclusions: We automatically linked indicators to clinical trial registrations with high precision. Whilst the majority of quality indicators studied could be directly linked to research evidence, a small portion could not, and these require closer scrutiny. It is feasible to support the process of indicator development using automated methods to identify research evidence. PMID:28651340
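The similarity measure named here, the Levenshtein edit distance ratio, is easy to reproduce. The sketch below implements the classic dynamic-programming distance and one common normalization; the paper's exact preprocessing and matching threshold are not shown, and the example phrases are invented.

```python
def levenshtein(a, b):
    """Edit distance via the standard dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def edit_ratio(a, b):
    """Similarity in [0, 1]; 1.0 means identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

print(round(edit_ratio("hba1c measured every 6 months",
                       "change in hba1c measured every 6 months"), 2))
```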
JPRS Report, Science & Technology, USSR: Life Sciences.
1989-02-01
Cytokinins in Transgenic Nicotiana Tabacum Plants [V. M. Zakharyev, A. Sh. Tashpulatov, et al.; DOKLADY AKADEMII NAUK SSSR, Vol 301 No 3, Jul 88]...and flowers. Objective evaluation of the quality of plant materials is impossible without the use of modern immunodiagnosis methods which...deriving diagnostic antisera. The diagnostics are automated. Pure viral antigens for potatoes and a number of other crops have been developed on the
Automated data processing and radioassays.
Samols, E; Barrows, G H
1978-04-01
Radioassays include (1) radioimmunoassays, (2) competitive protein-binding assays based on competition for limited antibody or specific binding protein, (3) immunoradiometric assays, based on competition for excess labeled antibody, and (4) radioreceptor assays. Most mathematical models describing the relationship between labeled ligand binding and unlabeled ligand concentration have been based on the law of mass action or the isotope dilution principle. These models provide useful data reduction programs, but are theoretically unsatisfactory because competitive radioassay usually is not based on classical dilution principles, labeled and unlabeled ligand do not have to be identical, antibodies (or receptors) are frequently heterogeneous, equilibrium usually is not reached, and there is probably steric and cooperative influence on binding. An alternative, more flexible mathematical model, based on the probability of binding collisions being restricted by the surface area of reactive divalent sites on antibody and on univalent antigen, has been derived. Application of these models to automated data reduction allows standard curves to be fitted by a mathematical expression, and unknown values are calculated from binding data. The virtues and pitfalls of point-to-point data reduction, linear transformations, and curvilinear fitting approaches are presented. A third-order polynomial using the square root of concentration closely approximates the mathematical model based on probability, and in our experience this method provides the most acceptable results with all varieties of radioassays. With this curvilinear system, linear point connection should be used between the zero standard and the beginning of significant dose response, and also towards saturation. The importance is stressed of limiting the range of reported automated assay results to that portion of the standard curve that delivers optimal sensitivity. Published methods for automated data reduction of Scatchard plots for radioreceptor assay are limited by calculation of a single mean K value. The quality of the input data is generally the limiting factor in achieving good precision with automated, as with manual, data reduction. The major advantages of computerized curve fitting include: (1) handling large amounts of data rapidly and without computational error; (2) providing useful quality-control data; (3) indicating within-batch variance of the test results; and (4) providing ongoing quality-control charts and between-assay variance.
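The recommendation above, a third-order polynomial in the square root of concentration, is straightforward to apply to a standard curve. The sketch below fits such a curve and reads an unknown dose off it by numerical inversion; the standards and responses are invented for illustration.

```python
import numpy as np

# Hypothetical standards: concentration (ng/ml) vs. normalized binding (B/B0)
conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
response = np.array([1.00, 0.92, 0.84, 0.72, 0.52, 0.36, 0.22])

coef = np.polyfit(np.sqrt(conc), response, 3)   # cubic in sqrt(concentration)

def dose_from_response(r, grid=np.linspace(0.0, 20.0, 2001)):
    """Numerically invert the fitted curve to convert a response into a dose."""
    fitted = np.polyval(coef, np.sqrt(grid))
    return grid[np.argmin(np.abs(fitted - r))]

print(round(dose_from_response(0.60), 2))       # dose of an unknown sample
```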
Botsis, Taxiarchis; Foster, Matthew; Arya, Nina; Kreimeyer, Kory; Pandey, Abhishek; Arya, Deepa
2017-04-26
To evaluate the feasibility of automated dose and adverse event information retrieval in supporting the identification of safety patterns. We extracted all rabbit Anti-Thymocyte Globulin (rATG) reports submitted to the United States Food and Drug Administration Adverse Event Reporting System (FAERS) from the product's initial licensure on April 16, 1984 through February 8, 2016. We processed the narratives using the Medication Extraction (MedEx) and the Event-based Text-mining of Health Electronic Records (ETHER) systems and retrieved the appropriate medication, clinical, and temporal information. When necessary, the extracted information was manually curated. This process resulted in a high-quality dataset that was analyzed with the Pattern-based and Advanced Network Analyzer for Clinical Evaluation and Assessment (PANACEA) to explore the association of rATG dosing with post-transplant lymphoproliferative disorder (PTLD). Although manual curation was necessary to improve the data quality, MedEx and ETHER supported the extraction of the appropriate information. We created a final dataset of 1,380 cases with complete information for rATG dosing and date of administration. Analysis in PANACEA found that PTLD was associated with cumulative doses of rATG >8 mg/kg, even in periods where most of the submissions to FAERS reported low doses of rATG. We demonstrated the feasibility of investigating a dose-related safety pattern for a particular product in FAERS using a set of automated tools.
Continuous integration and quality control for scientific software
NASA Astrophysics Data System (ADS)
Neidhardt, A.; Ettl, M.; Brisken, W.; Dassing, R.
2013-08-01
Modern software has to be stable, portable, fast and reliable. This is becoming more and more important for scientific software as well. But it requires a sophisticated way to inspect, check and evaluate the quality of source code with a suitable, automated infrastructure. A centralized server with a software repository and a version control system is one essential part, used to manage the code base and to control the different development versions. While each project can be compiled separately, the whole code base can also be compiled with one central “Makefile”. This is used to create automated, nightly builds. Additionally, all sources are inspected automatically with static code analysis and inspection tools, which check for well-known error situations, memory and resource leaks, performance issues, and style issues. In combination with an automatic documentation generator it is possible to create the developer documentation directly from the code and the inline comments. All reports and generated information are presented as HTML pages on a web server. Because this environment has increased the stability and quality of the software of the Geodetic Observatory Wettzell tremendously, it is now also available to scientific communities. One regular customer is already the developer group of the DiFX software correlator project.
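The Wettzell infrastructure is project-specific, but a nightly job of this kind can be approximated with a short driver script. The sketch below chains a build, a static-analysis pass, and documentation generation using common tools (make, cppcheck, doxygen) and writes an HTML status table; the paths, step list, and report location are assumptions, not the observatory's setup.

```python
import subprocess
from datetime import date

LOG = f"nightly_{date.today().isoformat()}.html"

def run(step, cmd):
    """Run one build step, capturing its output for the HTML report."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return step, result.returncode, result.stdout + result.stderr

steps = [
    ("build", ["make", "-C", "src"]),
    ("static analysis", ["cppcheck", "--enable=all", "src/"]),
    ("docs", ["doxygen", "Doxyfile"]),
]
rows = "".join(f"<tr><td>{s}</td><td>{'OK' if rc == 0 else 'FAIL'}</td>"
               f"<td><pre>{out}</pre></td></tr>"
               for s, rc, out in (run(*step) for step in steps))
with open(LOG, "w") as f:
    f.write(f"<table>{rows}</table>")   # served from the project web server
```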
Flip the tip: an automated, high quality, cost-effective patch clamp screen.
Lepple-Wienhues, Albrecht; Ferlinz, Klaus; Seeger, Achim; Schäfer, Arvid
2003-01-01
The race to create an automated patch clamp has begun. Here, we present a novel technology to produce true gigaseals and whole-cell preparations at a high rate. Suspended cells are flushed toward the tip of glass micropipettes. Seal formation, whole-cell break-in, and pipette/liquid handling are fully automated. Extremely stable seals and access resistance guarantee high recording quality. Data obtained from different cell types sealed inside pipettes show long-term stability, voltage clamp and seal quality, as well as block by compounds in the pM range. A flexible array of independent electrode positions minimizes consumable consumption at maximal throughput. Pulled micropipettes provide a proven gigaseal substrate with an ultra-clean and smooth surface at low cost.
Database Performance Monitoring for the Photovoltaic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Katherine A.
The Database Performance Monitoring (DPM) software (copyright in process) is being developed at Sandia National Laboratories to perform quality control analysis on time series data. The software loads time-indexed databases (currently CSV format), performs a series of quality control tests defined by the user, and creates reports which include summary statistics, tables, and graphics. DPM can be set up to run on an automated schedule defined by the user. For example, the software can be run once per day to analyze data collected on the previous day. HTML-formatted reports can be sent via email or hosted on a website. To compare the performance of several databases, summary statistics and graphics can be gathered in a dashboard view which links to detailed reporting information for each database. The software can be customized for specific applications.
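DPM itself is Sandia's software; the sketch below only mirrors the workflow described, loading a time-indexed CSV, applying user-defined range and missing-data tests, and emitting an HTML summary. The file name and bounds are placeholders, and numeric columns are assumed.

```python
import pandas as pd

def run_qc(csv_path, lower, upper):
    """Load a time-indexed CSV and summarize basic quality-control tests."""
    df = pd.read_csv(csv_path, index_col=0, parse_dates=True)
    return pd.DataFrame({
        "n_missing": df.isna().sum(),
        "n_below_bound": (df < lower).sum(),
        "n_above_bound": (df > upper).sum(),
        "mean": df.mean(),
    })

# report = run_qc("pv_site_A.csv", lower=0.0, upper=1500.0)  # e.g., W/m^2 bounds
# report.to_html("qc_summary.html")   # host on a website or attach to e-mail
```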
Automation and the Federal Library Community: Report on a Survey.
ERIC Educational Resources Information Center
Henderson, Madeline; Geddes, Susan
A survey of the status of the federal library community and its involvement with automation was undertaken; the results are summarized in this report. The study sought to define which library operations were susceptible to automation, to describe potentially useful automation techniques and to establish criteria for decisions about automation.…
Quantitative Analysis and Stability of the Rodenticide TETS ...
The determination of the rodenticide tetramethylenedisulfotetramine (TETS) in drinking water is reported, using automated sample preparation via solid phase extraction and detection by isotope dilution gas chromatography-mass spectrometry. The method was characterized over twenty-two analytical batches with quality control samples. Accuracies for the low and high concentration quality control pools were 100 and 101%, respectively. The minimum reporting level (MRL) for TETS in this method is 0.50 µg/L. Five drinking waters representing a range of water quality parameters and disinfection practices were fortified with TETS at ten times the MRL and analyzed over a 28 day period to determine the stability of TETS in these waters. The amount of TETS measured in these samples averaged 100 ± 6% of the amount fortified, suggesting that tap water samples may be held for up to 28 days prior to analysis.
Archuleta, Christy-Ann M.; Gonzales, Sophia L.; Maltby, David R.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the Texas Commission on Environmental Quality, developed computer scripts and applications to automate the delineation of watershed boundaries and compute watershed characteristics for more than 3,000 surface-water-quality monitoring stations in Texas that were active during 2010. Microsoft Visual Basic applications were developed using ArcGIS ArcObjects to format the source input data required to delineate watershed boundaries. Several automated scripts and tools were developed or used to calculate watershed characteristics using Python, Microsoft Visual Basic, and the RivEX tool. Automated methods were augmented by the use of manual methods, including those done using ArcMap software. Watershed boundaries delineated for the monitoring stations are limited to the extent of the Subbasin boundaries in the USGS Watershed Boundary Dataset, which may not include the total watershed boundary from the monitoring station to the headwaters.
Automated Cognitive Health Assessment Using Smart Home Monitoring of Complex Tasks
Dawadi, Prafulla N.; Cook, Diane J.; Schmitter-Edgecombe, Maureen
2014-01-01
One of the many services that intelligent systems can provide is the automated assessment of resident well-being. We hypothesize that the functional health of individuals, or ability of individuals to perform activities independently without assistance, can be estimated by tracking their activities using smart home technologies. In this paper, we introduce a machine learning-based method for assessing activity quality in smart homes. To validate our approach we quantify activity quality for 179 volunteer participants who performed a complex, interweaved set of activities in our smart home apartment. We observed a statistically significant correlation (r=0.79) between automated assessment of task quality and direct observation scores. Using machine learning techniques to predict the cognitive health of the participants based on task quality is accomplished with an AUC value of 0.64. We believe that this capability is an important step in understanding everyday functional health of individuals in their home environments. PMID:25530925
Automated Cognitive Health Assessment Using Smart Home Monitoring of Complex Tasks.
Dawadi, Prafulla N; Cook, Diane J; Schmitter-Edgecombe, Maureen
2013-11-01
One of the many services that intelligent systems can provide is the automated assessment of resident well-being. We hypothesize that the functional health of individuals, or ability of individuals to perform activities independently without assistance, can be estimated by tracking their activities using smart home technologies. In this paper, we introduce a machine learning-based method for assessing activity quality in smart homes. To validate our approach we quantify activity quality for 179 volunteer participants who performed a complex, interweaved set of activities in our smart home apartment. We observed a statistically significant correlation (r=0.79) between automated assessment of task quality and direct observation scores. Using machine learning techniques to predict the cognitive health of the participants based on task quality is accomplished with an AUC value of 0.64. We believe that this capability is an important step in understanding everyday functional health of individuals in their home environments.
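As a rough illustration of the final evaluation step in the two records above, predicting a binary cognitive-health label from activity-quality features and scoring with AUC, the following uses synthetic data in place of the study's smart-home features; every name and value here is a stand-in.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(179, 6))                    # stand-in activity features
y = (X[:, 0] + rng.normal(scale=2.0, size=179) > 0).astype(int)  # stand-in label

probs = cross_val_predict(RandomForestClassifier(random_state=0),
                          X, y, cv=5, method="predict_proba")[:, 1]
print("AUC:", round(roc_auc_score(y, probs), 2))
```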
The role of optical flow in automated quality assessment of full-motion video
NASA Astrophysics Data System (ADS)
Harguess, Josh; Shafer, Scott; Marez, Diego
2017-09-01
In real-world video data, such as full-motion video (FMV) taken from unmanned vehicles, surveillance systems, and other sources, various corruptions of the raw data are inevitable. These can be due to the image acquisition process, noise, distortion, and compression artifacts, among other sources of error. However, we desire methods to analyze the quality of the video to determine whether the underlying content of the corrupted video can be analyzed by humans or machines, and to what extent. Previous approaches have shown that motion estimation, or optical flow, can be an important cue in automating this video quality assessment. However, there are many different optical flow algorithms in the literature, each with its own advantages and disadvantages. We examine the effect of the choice of optical flow algorithm (including baseline and state-of-the-art) on motion-based automated video quality assessment algorithms.
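To make the role of the optical-flow choice concrete, the sketch below extracts simple Farneback flow statistics per frame pair with OpenCV; statistics of this kind are the sort of motion features a quality model could consume. The video path and the choice of summary statistics are illustrative, not the paper's pipeline.

```python
import cv2
import numpy as np

def flow_features(video_path):
    """Mean/std of Farneback optical-flow magnitude for each frame pair."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    feats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        feats.append((mag.mean(), mag.std()))
        prev = gray
    cap.release()
    return np.array(feats)

# feats = flow_features("fmv_clip.mp4")   # hypothetical input clip
```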
MilxXplore: a web-based system to explore large imaging datasets
Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J
2013-01-01
Objective: As large-scale medical imaging studies are becoming more common, there is an increasing reliance on automated software to extract quantitative information from these images. As the size of the cohorts keeps increasing with large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. Materials and methods: MilxXplore is an open source visualization platform, which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user friendly, collaborative and efficient way. Discussion: Compared to existing software solutions that often provide an overview of the results at the subject's level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time, and comparing the results against the rest of the population. Conclusions: MilxXplore is fast, flexible and allows remote quality checks of processed imaging data, facilitating data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important to share and publish results of imaging analysis. PMID:23775173
Automated Segmentability Index for Layer Segmentation of Macular SD-OCT Images.
Lee, Kyungmoo; Buitendijk, Gabriëlle H S; Bogunovic, Hrvoje; Springelkamp, Henriët; Hofman, Albert; Wahle, Andreas; Sonka, Milan; Vingerling, Johannes R; Klaver, Caroline C W; Abràmoff, Michael D
2016-03-01
To automatically identify which spectral-domain optical coherence tomography (SD-OCT) scans will provide reliable automated layer segmentations for more accurate layer thickness analyses in population studies. Six hundred ninety macular SD-OCT image volumes (6.0 × 6.0 × 2.3 mm³) were obtained from one eye each of 690 subjects (74.6 ± 9.7 [mean ± SD] years; 37.8% male) randomly selected from the population-based Rotterdam Study. The dataset consisted of 420 OCT volumes with successful automated retinal nerve fiber layer (RNFL) segmentations obtained from our previously reported graph-based segmentation method and 270 volumes with failed segmentations. To evaluate the reliability of the layer segmentations, we have developed a new metric, the segmentability index SI, which is obtained from a random forest regressor based on 12 features using OCT voxel intensities, edge-based costs, and on-surface costs. The SI was compared with well-known quality indices, the quality index (QI) and the maximum tissue contrast index (mTCI), using receiver operating characteristic (ROC) analysis. The 95% confidence interval (CI) and the area under the curve (AUC) are 0.621 to 0.805 with AUC 0.713 for the QI, 0.673 to 0.838 with AUC 0.756 for the mTCI, and 0.784 to 0.920 with AUC 0.852 for the SI. The SI AUC is significantly larger than either the QI or mTCI AUC (P < 0.01). The segmentability index SI is well suited to identify SD-OCT scans for which successful automated intraretinal layer segmentations can be expected. Interpreting the quantification of SD-OCT images requires the underlying segmentation to be reliable, but standard SD-OCT quality metrics do not predict which segmentations are reliable and which are not. The segmentability index SI presented in this study does allow reliable segmentations to be identified, which is important for more accurate layer thickness analyses in research and population studies.
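The segmentability index is produced by a random forest regressor over 12 per-scan features. As a schematic stand-in (synthetic features and labels, not the study's data), the pattern looks like this, with AUC used to judge how well the index separates reliable from unreliable segmentations.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(690, 12))     # stand-ins for intensity/edge/surface costs
ok = (X[:, :3].sum(axis=1) + rng.normal(size=690) > 0).astype(int)

reg = RandomForestRegressor(random_state=0).fit(X[:400], ok[:400])
si = reg.predict(X[400:])          # continuous segmentability index per scan
print("AUC:", round(roc_auc_score(ok[400:], si), 2))
```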
Timeliness “at a glance”: assessing the turnaround time through the six sigma metrics.
Ialongo, Cristiano; Bernardini, Sergio
2016-01-01
Almost thirty years of systematic analysis have proven the turnaround time to be a fundamental dimension for the clinical laboratory. Several indicators are available to date to assess and report quality with respect to timeliness, but they sometimes lack communicative immediacy and accuracy. Six sigma is a paradigm developed within the industrial domain for assessing quality and addressing goals and issues. The sigma level computed through the Z-score method is a simple and straightforward tool which delivers quality on a universal dimensionless scale and can handle non-normal data. Herein we report our preliminary experience in using the sigma level to assess the change in urgent (STAT) test turnaround time due to the implementation of total automation. We found that the Z-score method is a valuable and easy-to-use method for assessing and communicating the quality level of laboratory timeliness, providing a good correspondence with the actual change in efficiency that was retrospectively observed.
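One simple form of the Z-score method treats the allowable turnaround time as an upper specification limit and asks how many standard deviations it sits above the mean TAT. The sketch below illustrates that calculation on synthetic before/after data; the 60-minute limit and the normality assumption are illustrative (percentile-based variants of the method handle non-normal data).

```python
import numpy as np

def sigma_level(tat_minutes, limit_minutes):
    """Z-score sigma level: (upper limit - mean TAT) / SD of TAT."""
    tat = np.asarray(tat_minutes, dtype=float)
    return (limit_minutes - tat.mean()) / tat.std(ddof=1)

before = np.random.default_rng(3).normal(52, 14, 500)   # pre-automation TATs
after = np.random.default_rng(4).normal(40, 8, 500)     # post-automation TATs
print(round(sigma_level(before, 60), 2), round(sigma_level(after, 60), 2))
```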
Li, Qi; Melton, Kristin; Lingren, Todd; Kirkendall, Eric S; Hall, Eric; Zhai, Haijun; Ni, Yizhao; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre
2014-01-01
Although electronic health records (EHRs) have the potential to provide a foundation for quality and safety algorithms, few studies have measured their impact on automated adverse event (AE) and medical error (ME) detection within the neonatal intensive care unit (NICU) environment. This paper presents two phenotyping AE and ME detection algorithms (ie, IV infiltrations, narcotic medication oversedation and dosing errors) and describes manual annotation of airway management and medication/fluid AEs from NICU EHRs. From 753 NICU patient EHRs from 2011, we developed two automatic AE/ME detection algorithms, and manually annotated 11 classes of AEs in 3263 clinical notes. Performance of the automatic AE/ME detection algorithms was compared to trigger tool and voluntary incident reporting results. AEs in clinical notes were double annotated and consensus achieved under neonatologist supervision. Sensitivity, positive predictive value (PPV), and specificity are reported. Twelve severe IV infiltrates were detected. The algorithm identified one more infiltrate than the trigger tool and eight more than incident reporting. One narcotic oversedation was detected demonstrating 100% agreement with the trigger tool. Additionally, 17 narcotic medication MEs were detected, an increase of 16 cases over voluntary incident reporting. Automated AE/ME detection algorithms provide higher sensitivity and PPV than currently used trigger tools or voluntary incident-reporting systems, including identification of potential dosing and frequency errors that current methods are unequipped to detect.
JPRS Report. Science & Technology, USSR: Engineering & Equipment.
1988-12-19
Science & Technology, USSR: Engineering & Equipment, JPRS-UEQ-88-006, Contents, 19 December 1988: Nuclear Energy Fuel... [PROMYSHLENNOST, No 4, Apr 88]; Determining the Demand for Automated Foundry Equipment [A.A. Panov; MEKHANIZATSIYA I AVTOMATIZATSIYA PROIZVODSTVA, Apr 88]
1989-10-01
weight based on how powerful the corresponding feature is for object recognition and discrimination. For example, consider an arbitrary weight, denoted...quality of the segmentation, how powerful the features and spatial constraints in the knowledge base are (as far as object recognition is concerned)...that are powerful for object recognition and discrimination. At this point, this selection is performed heuristically through trial-and-error. As a
Botzer, Assaf; Meyer, Joachim; Parmet, Yisrael
2016-09-01
Binary cues help operators perform binary categorization tasks, such as monitoring for system failures. They may also allow them to attend to other tasks they concurrently perform. If the time saved by using cues is allocated to other concurrent tasks, users' overall effort may remain unchanged. In 2 experiments, participants performed a simulated quality control task, together with a tracking task. In half the experimental blocks cues were available, and participants could use them in their decisions about the quality of products (intact or faulty). In Experiment 1, the difficulty of tracking was constant, while in Experiment 2, tracking difficulty differed in the 2 halves of the experiment. In both experiments, participants reported on the NASA Task Load Index that cues improved their performance and reduced their frustration. Consequently, their overall score on mental workload (MWL) was lower with cues. They also reported, however, that cues did not reduce their effort. We conclude that cues and other forms of automation may support task performance and reduce overall MWL, but this will not necessarily mean that users will work less hard. Thus, effort and overall MWL should be evaluated separately, if one wants to obtain a full picture of the effects of automation.
1981-06-30
manpower needs as to quantity, quality and timing; all the internal functions of the personnel service are tapped to help meet these ends. Manpower...Program ACOS - Automated Computation of Service ACQ - Acquisition ACSAC - Assistant Chief of Staff for Automation and Communications ACT - Automated...ARSTAF - Army Staff ARSTAFF - Army Staff ARTEP - Army Training and Evaluation Program ASI - Additional Skill Identifier ASVAB - Armed Services
Automated Assessment of the Quality of Depression Websites
Tang, Thanh Tin; Hawking, David; Christensen, Helen
2005-01-01
Background: Since health information on the World Wide Web is of variable quality, methods are needed to assist consumers to identify health websites containing evidence-based information. Manual assessment tools may assist consumers to evaluate the quality of sites. However, these tools are poorly validated and often impractical. There is a need to develop better consumer tools, and in particular to explore the potential of automated procedures for evaluating the quality of health information on the web. Objective: This study (1) describes the development of an automated quality assessment procedure (AQA) designed to automatically rank depression websites according to their evidence-based quality; (2) evaluates the validity of the AQA relative to human-rated evidence-based quality scores; and (3) compares the validity of Google PageRank and the AQA as indicators of evidence-based quality. Method: The AQA was developed using a quality feedback technique and a set of training websites previously rated manually according to their concordance with statements in the Oxford University Centre for Evidence-Based Mental Health’s guidelines for treating depression. The validation phase involved 30 websites compiled from the DMOZ, Yahoo! and LookSmart Depression Directories by randomly selecting six sites from each of the Google PageRank bands of 0, 1-2, 3-4, 5-6 and 7-8. Evidence-based ratings from two independent raters (based on concordance with the Oxford guidelines) were then compared with scores derived from the automated AQA and Google algorithms. There was no overlap in the websites used in the training and validation phases of the study. Results: The correlation between the AQA score and the evidence-based ratings was high and significant (r=0.85, P<.001). Addition of a quadratic component improved the fit, the combined linear and quadratic model explaining 82 percent of the variance. The correlation between Google PageRank and the evidence-based score was lower than that for the AQA. When sites with zero PageRanks were included the association was weak and non-significant (r=0.23, P=.22). When sites with zero PageRanks were excluded, the correlation was moderate (r=0.61, P=.002). Conclusions: Depression websites of different evidence-based quality can be differentiated using an automated system. If replicable, generalizable to other health conditions and deployed in a consumer-friendly form, the automated procedure described here could represent an important advance for consumers of Internet medical information. PMID:16403723
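The reported gain from adding a quadratic component can be reproduced in miniature: fit linear and quadratic models of the evidence-based rating on the AQA score and compare explained variance. The data below are synthetic stand-ins for the 30 validation sites, not the study's ratings.

```python
import numpy as np

rng = np.random.default_rng(5)
aqa = rng.uniform(0.0, 1.0, 30)                       # stand-in AQA scores
rating = 2 * aqa + 3 * aqa**2 + rng.normal(scale=0.3, size=30)

def r_squared(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

lin = np.polyval(np.polyfit(aqa, rating, 1), aqa)
quad = np.polyval(np.polyfit(aqa, rating, 2), aqa)
print("linear R^2:", round(r_squared(rating, lin), 2))
print("linear+quadratic R^2:", round(r_squared(rating, quad), 2))
```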
Automation: how much is too much?
Hancock, P A
2014-01-01
The headlong rush to automate continues apace. The dominant question still remains whether we can automate, not whether we should automate. However, it is this latter question that is featured and considered explicitly here. The suggestion offered is that unlimited automation of all technical functions will eventually prove anathema to the fundamental quality of human life. Examples of tasks, pursuits and pastimes that should potentially be excused from the automation imperative are discussed. This deliberation leads us back to the question of balance in the cooperation, coordination and potential conflict between humans and the machines they create.
Systematic review of studies of staffing and quality in nursing homes.
Bostick, Jane E; Rantz, Marilyn J; Flesner, Marcia K; Riggs, C Jo
2006-07-01
To evaluate a range of staffing measures and data sources for long-term use in public reporting of staffing as a quality measure in nursing homes. Eighty-seven research articles and government documents published from 1975 to 2003 were reviewed and summarized. Relevant content was extracted and organized around 3 themes: staffing measures, quality measures, and risk adjustment variables. Data sources for staffing information were also identified. There is a proven association between higher total staffing levels (especially licensed staff) and improved quality of care. Studies also indicate a significant relationship between high turnover and poor resident outcomes. Functional ability, pressure ulcers, and weight loss are the most sensitive quality indicators linked to staffing. The best national data sources for staffing and quality include the Minimum Data Set (MDS) and On-line Survey and Certification Automated Records (OSCAR). However, the accuracy of this self-reported information requires further reliability and validity testing. A nationwide instrument needs to be developed to accurately measure staff turnover. Large-scale studies using payroll data to measure staff retention and its impact on resident outcomes are recommended. Future research should use the most nurse-sensitive quality indicators such as pressure ulcers, functional status, and weight loss.
Precision and Disclosure in Text and Voice Interviews on Smartphones
Antoun, Christopher; Ehlen, Patrick; Fail, Stefanie; Hupp, Andrew L.; Johnston, Michael; Vickers, Lucas; Yan, H. Yanna; Zhang, Chan
2015-01-01
As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. 10 interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data—fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information—than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey. PMID:26060991
Precision and Disclosure in Text and Voice Interviews on Smartphones.
Schober, Michael F; Conrad, Frederick G; Antoun, Christopher; Ehlen, Patrick; Fail, Stefanie; Hupp, Andrew L; Johnston, Michael; Vickers, Lucas; Yan, H Yanna; Zhang, Chan
2015-01-01
As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. 10 interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data (fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information) than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey.
Deans, Katherine J; Minneci, Peter C; Nacion, Kristine M; Leonhart, Karen; Cooper, Jennifer N; Scholle, Sarah Hudson; Kelleher, Kelly J
2018-02-22
Preventive quality measures for the foster care population are largely untested. The objective of the study is to identify healthcare quality measures for young children and adolescents in foster care and to test whether the data required to calculate these measures can be feasibly extracted and interpreted within an electronic health records or within the Statewide Automated Child Welfare Information System. The AAP Recommendations for Preventive Pediatric Health Care served as the guideline for determining quality measures. Quality measures related to well child visits, developmental screenings, immunizations, trauma-related care, BMI measurements, sexually transmitted infections and depression were defined. Retrospective chart reviews were performed on a cohort of children in foster care from a single large pediatric institution and related county. Data available in the Ohio Statewide Automated Child Welfare Information System was compared to the same population studied in the electronic health record review. Quality measures were calculated as observed (received) to expected (recommended) ratios (O/E ratios) to describe the actual quantity of recommended health care that was received by individual children. Electronic health records and the Statewide Automated Child Welfare Information System data frequently lacked important information on foster care youth essential for calculating the measures. Although electronic health records were rich in encounter specific clinical data, they often lacked custodial information such as the dates of entry into and exit from foster care. In contrast, Statewide Automated Child Welfare Information System included robust data on custodial arrangements, but lacked detailed medical information. Despite these limitations, several quality measures were devised that attempted to accommodate these limitations. In this feasibility testing, neither the electronic health records at a single institution nor the county level Statewide Automated Child Welfare Information System was able to independently serve as a reliable source of data for health care quality measures for foster care youth. However, the ability to leverage both sources by matching them at an individual level may provide the complement of data necessary to assess the quality of healthcare.
ERIC Educational Resources Information Center
Rupp, André A.
2018-01-01
This article discusses critical methodological design decisions for collecting, interpreting, and synthesizing empirical evidence during the design, deployment, and operational quality-control phases for automated scoring systems. The discussion is inspired by work on operational large-scale systems for automated essay scoring but many of the…
Classification Trees for Quality Control Processes in Automated Constructed Response Scoring.
ERIC Educational Resources Information Center
Williamson, David M.; Hone, Anne S.; Miller, Susan; Bejar, Isaac I.
As the automated scoring of constructed responses reaches operational status, the issue of monitoring the scoring process becomes a primary concern, particularly when the goal is to have automated scoring operate completely unassisted by humans. Using a vignette from the Architectural Registration Examination and data for 326 cases with both human…
Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica
2013-01-01
Background: Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC ICD-9 codes, and evaluated whether natural language processing (NLP) by the Automated Retrieval Console (ARC) for document classification improves HCC identification. Methods: We identified a cohort of patients with ICD-9 codes for HCC during 2005–2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared to manual classification. PPV, sensitivity, and specificity of ARC were calculated. Results: 1138 patients with HCC were identified by ICD-9 codes. Based on manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had a PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had a PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. Conclusion: A combined approach of ICD-9 codes and NLP of pathology and radiology reports improves HCC case identification in automated data. PMID:23929403
Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica
2016-02-01
Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC International Classification of Diseases, 9th Revision (ICD-9) codes, and evaluated whether natural language processing by the Automated Retrieval Console (ARC) for document classification improves HCC identification. We identified a cohort of patients with ICD-9 codes for HCC during 2005-2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared with manual classification. PPV, sensitivity, and specificity of ARC were calculated. A total of 1138 patients with HCC were identified by ICD-9 codes. On the basis of manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. A combined approach of ICD-9 codes and natural language processing of pathology and radiology reports improves HCC case identification in automated data.
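The validation metrics used in both of these reports reduce to the standard 2x2 confusion-table formulas. The counts below are invented to approximately reproduce the reported ICD-9 algorithm values (PPV 0.67, sensitivity 0.95); they are not taken from the paper.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Positive predictive value, sensitivity, and specificity."""
    return {
        "PPV": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

print(diagnostic_metrics(tp=735, fp=365, fn=38, tn=4862))
```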
Feasibility of Developing a Protocol for Automated Protist Analysis
2010-03-01
Acquisition Directorate, Research & Development Center, Report No. CG-D-02-11: Feasibility of Developing a Protocol for Automated Protist Analysis. Technical report, March 2010; available through the National Technical Information Service, Springfield, VA 22161.
2015-05-01
Director, Operational Test and Evaluation. Department of Defense (DOD) Automated Biometric Identification System (ABIS) Version 1.2 Initial Operational Test and Evaluation Report, May 2015. This report covers the initial operational test and evaluation of the DOD Automated Biometric Identification System (ABIS) Version 1.2.
Ball, Oliver; Robinson, Sarah; Bure, Kim; Brindley, David A; Mccall, David
2018-04-01
Phacilitate held a Special Interest Group workshop event in Edinburgh, UK, in May 2017. The event brought together leading stakeholders in the cell therapy bioprocessing field to identify present and future challenges and propose potential solutions to automation in cell therapy bioprocessing. Here, we review and summarize discussions from the event. Deep biological understanding of a product, its mechanism of action and indication pathogenesis underpin many factors relating to bioprocessing and automation. To fully exploit the opportunities of bioprocess automation, therapeutics developers must closely consider whether an automation strategy is applicable, how to design an 'automatable' bioprocess and how to implement process modifications with minimal disruption. Major decisions around bioprocess automation strategy should involve all relevant stakeholders; communication between technical and business strategy decision-makers is of particular importance. Developers should leverage automation to implement in-process testing, in turn applicable to process optimization, quality assurance (QA)/quality control (QC), batch failure control, adaptive manufacturing and regulatory demands, but a lack of precedent and technical opportunities can complicate such efforts. Sparse standardization across product characterization, hardware components and software platforms is perceived to complicate efforts to implement automation. The use of advanced algorithmic approaches such as machine learning may have application to bioprocess and supply chain optimization. Automation can substantially de-risk the wider supply chain, including tracking and traceability, cryopreservation and thawing and logistics. The regulatory implications of automation are currently unclear because few hardware options exist and novel solutions require case-by-case validation, but automation can present attractive regulatory incentives. Copyright © 2018 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
A Program to Improve the Triangulated Surface Mesh Quality Along Aircraft Component Intersections
NASA Technical Reports Server (NTRS)
Cliff, Susan E.
2005-01-01
A computer program has been developed for improving the quality of unstructured triangulated surface meshes in the vicinity of component intersections. The method relies solely on point removal and edge swapping for improving the triangulations. It can be applied to any lifting surface component such as a wing, canard or horizontal tail component intersected with a fuselage, or it can be applied to a pylon that is intersected with a wing, fuselage or nacelle. The lifting surfaces or pylon are assumed to be aligned in the axial direction with closed trailing edges. The method currently maintains salient edges only at leading and trailing edges of the wing or pylon component. This method should work well for any shape of fuselage that is free of salient edges at the intersection. The method has been successfully demonstrated on a total of 125 different test cases that include both blunt and sharp wing leading edges. The code is targeted for use in the automated environment of numerical optimization where geometric perturbations to individual components can be critical to the aerodynamic performance of a vehicle. Histograms of triangle aspect ratios are reported to assess the quality of the triangles attached to the intersection curves before and after application of the program. Large improvements to the quality of the triangulations were obtained for the 125 test cases; the quality was sufficient for use with an automated tetrahedral mesh generation program that is used as part of an aerodynamic shape optimization method.
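For readers unfamiliar with the aspect-ratio histograms mentioned above, a minimal Python sketch follows. The aspect-ratio definition (circumradius over twice the inradius, equal to 1 for an equilateral triangle) and the bin edges are illustrative choices, not necessarily those used in the NASA program.

```python
import numpy as np

def aspect_ratio(p0, p1, p2):
    """Circumradius-to-inradius aspect ratio of a triangle given as numpy
    points; equals 1.0 for an equilateral triangle and grows as the triangle
    degenerates. One of several common definitions, chosen for illustration."""
    a = np.linalg.norm(p1 - p2)
    b = np.linalg.norm(p0 - p2)
    c = np.linalg.norm(p0 - p1)
    s = 0.5 * (a + b + c)                                         # semiperimeter
    area = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 1e-300))  # Heron's formula
    return (a * b * c / (4.0 * area)) / (2.0 * area / s)          # R / (2 * r)

def quality_histogram(points, triangles, bins=(1.0, 1.5, 2.0, 3.0, 5.0, 10.0, np.inf)):
    """Histogram of aspect ratios over a triangulation given as index triples."""
    ratios = [aspect_ratio(points[i], points[j], points[k]) for i, j, k in triangles]
    return np.histogram(ratios, bins=bins)
```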
Crisp, Dimity; Griffiths, Kathleen; Mackinnon, Andrew; Bennett, Kylie; Christensen, Helen
2014-04-30
Internet-based interventions are increasingly recognized as effective for the treatment and prevention of depression; however, there is a paucity of research investigating potential secondary benefits. From a consumer perspective, improvements in indicators of wellbeing such as perceived quality of life may represent the most important outcomes for evaluating the effectiveness of an intervention. This study investigated the 'secondary' benefits for self-esteem, empowerment, quality of life and perceived social support of two 12-week online depression interventions when delivered alone and in combination. Participants comprised 298 adults displaying elevated psychological distress. Participants were randomised to receive: an Internet Support Group (ISG); an automated Internet psycho-educational training program for depression; a combination of these conditions; or a control website. Analyses were performed on an intent-to-treat basis. Following the automated training program immediate improvements were shown in participants' self-esteem and empowerment relative to control participants. Improvements in perceived quality of life were reported 6-months following the completion of the intervention when combined with an ISG. These findings provide initial evidence for the effectiveness of this online intervention for improving individual wellbeing beyond the primary aim of the treatment. However, further research is required to investigate the mechanisms underlying improvement in these secondary outcomes. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Wilson, Joshua
2017-01-01
The present study examined growth in writing quality associated with feedback provided by an automated essay evaluation system called PEG Writing. Equal numbers of students with disabilities (SWD) and typically-developing students (TD) matched on prior writing achievement were sampled (n = 1196 total). Data from a subsample of students (n = 655)…
The Automation of Nowcast Model Assessment Processes
2016-09-01
that will automate real-time WRE-N model simulations, collect and quality-control check weather observations for assimilation and verification, and...domains centered near White Sands Missile Range, New Mexico, where the Meteorological Sensor Array (MSA) will be located. The MSA will provide...observations and performing quality-control checks for the pre-forecast data assimilation period. 2. Run the WRE-N model to generate model forecast data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fittipaldi, J.J.; Sliwinski, B.J.
1991-06-01
Army environmental planning and compliance activities continue to grow in magnitude and complexity, straining the resources of installation environmental offices. New efficiencies must be found to meet the increasing demands of planning and compliance imperatives. This study examined how office automation/information technology (OA/IT) may boost productivity in U.S. Army Training and Doctrine Command (TRADOC) installation environmental offices between now and the year 2000. A survey of four TRADOC installation environmental offices revealed that the workload often exceeds the capacity of staff. Computer literacy among personnel varies widely, limiting the benefits available from OA/IT now in use. Since environmental personnel are primarily gatherers and processors of information, better implementation of OA/IT could substantially improve work quality and productivity. Advanced technologies expected to reach the consumer market during the 1990s will dramatically increase the potential productivity of environmental office personnel. Multitasking operating environments will allow simultaneous automation of communications, document processing, and engineering software. Increased processor power and parallel processing techniques will spur simplification of the user interface and greater software capabilities in general. The authors conclude that full implementation of this report's OA/IT recommendations could double TRADOC environmental office productivity by the year 2000.
NASA Astrophysics Data System (ADS)
Christianson, D. S.; Beekwilder, N.; Chan, S.; Cheah, Y. W.; Chu, H.; Dengel, S.; O'Brien, F.; Pastorello, G.; Sandesh, M.; Torn, M. S.; Agarwal, D.
2017-12-01
AmeriFlux is a network of scientists who independently collect eddy covariance and related environmental observations at over 250 locations across the Americas. As part of the AmeriFlux Management Project, the AmeriFlux Data Team manages standardization, collection, quality assurance / quality control (QA/QC), and distribution of data submitted by network members. To generate data products that are timely, QA/QC'd, and repeatable, and have traceable provenance, we developed a semi-automated data processing pipeline. The new pipeline consists of semi-automated format and data QA/QC checks. Results are communicated via on-line reports as well as an issue-tracking system. Data processing time has been reduced from 2-3 days to a few hours of manual review time, resulting in faster data availability from the time of data submission. The pipeline is scalable to the network level and has the following key features. (1) On-line results of the format QA/QC checks are available immediately for data provider review. This enables data providers to correct and resubmit data quickly. (2) The format QA/QC assessment includes an automated attempt to fix minor format errors. Data submissions that are formatted in the new AmeriFlux FP-In standard can be queued for the data QA/QC assessment, often with minimal delay. (3) Automated data QA/QC checks identify and communicate potentially erroneous data via online, graphical quick views that highlight observations with unexpected values, incorrect units, time drifts, invalid multivariate correlations, and/or radiation shadows. (4) Progress through the pipeline is integrated with an issue-tracking system that facilitates communications between data providers and the data processing team in an organized and searchable fashion. Through development of these and other features of the pipeline, we present solutions to challenges that include optimizing automated with manual processing, bridging legacy data management infrastructure with various software tools, and working across interdisciplinary and international science cultures. Additionally, we discuss results from community member feedback that helped refine QA/QC communications for efficient data submission and revision.
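A minimal sketch of an automated format check in the spirit of the pipeline described above; the half-hourly timestamp convention and the FP-In column names used here are simplifying assumptions for illustration, not the full AmeriFlux specification.

```python
import pandas as pd

def format_qaqc(csv_path, required=("TIMESTAMP_START", "TIMESTAMP_END")):
    """Minimal format QA/QC check: verify required columns exist and that
    start timestamps advance in contiguous half-hour steps. Column names
    follow FP-In conventions but are illustrative here."""
    issues = []
    df = pd.read_csv(csv_path, dtype=str)
    for col in required:
        if col not in df.columns:
            issues.append(f"missing required column {col}")
    if not issues:
        start = pd.to_datetime(df["TIMESTAMP_START"], format="%Y%m%d%H%M")
        gaps = start.diff().dropna() != pd.Timedelta(minutes=30)
        if gaps.any():
            issues.append(f"{int(gaps.sum())} non-contiguous timestamp steps")
    return issues  # an empty list means the file passes this check
```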
30 CFR 227.601 - What are a State's responsibilities if it performs automated verification?
Code of Federal Regulations, 2010 CFR
2010-07-01
§ 227.601 What are a State's responsibilities if it performs automated verification? To perform automated verification of production reports or royalty reports, you must: (a) Verify...
Automated Quality of Care Evaluation Support System (AQCESS): AQCESS Functional Description
1985-12-04
to greater awareness of and increased expectations from the health care field. During the past several years, incidents of improper or questionable...great deal of time and effort must be spent by QA personnel culling from a large volume of paper the salient information about what problems each...center around manual, labor- and paper-intensive reviews of medical records, hospital incident reports, committee actions and follow-ups, recurring
Extensibility Experiments with the Software Life-Cycle Support Environment
1991-11-01
APRICOT) and Bit-Oriented Message Definer (BMD); and three from the Ada Software Repository (ASR) at White Sands-the NASA/Goddard Space Flight Center...Graphical Kernel System (GKS). c. AMS - The Automated Measurement System tool supports the definition, collection, and reporting of quality metric...Ada Primitive Order Compilation Order Tool (APRICOT) 2. Bit-Oriented Message Definer (BMD) 3. LGEN: A Language Generator Tool 4. File Checker 5
Larson, David B; Malarik, Remo J; Hall, Seth M; Podberesky, Daniel J
2013-10-01
To evaluate the effect of an automated computed tomography (CT) radiation dose optimization and process control system on the consistency of estimated image noise and size-specific dose estimates (SSDEs) of radiation in CT examinations of the chest, abdomen, and pelvis. This quality improvement project was determined not to constitute human subject research. An automated system was developed to analyze each examination immediately after completion, and to report individual axial-image-level and study-level summary data for patient size, image noise, and SSDE. The system acquired data for 4 months beginning October 1, 2011. Protocol changes were made by using parameters recommended by the prediction application, and 3 months of additional data were acquired. Preimplementation and postimplementation mean image noise and SSDE were compared by using unpaired t tests and F tests. Common-cause variation was differentiated from special-cause variation by using a statistical process control individual chart. A total of 817 CT examinations, 490 acquired before and 327 acquired after the initial protocol changes, were included in the study. Mean patient age and water-equivalent diameter were 12.0 years and 23.0 cm, respectively. The difference between actual and target noise increased from -1.4 to 0.3 HU (P < .01) and the standard deviation decreased from 3.9 to 1.6 HU (P < .01). Mean SSDE decreased from 11.9 to 7.5 mGy, a 37% reduction (P < .01). The process control chart identified several special causes of variation. Implementation of an automated CT radiation dose optimization system led to verifiable simultaneous decrease in image noise variation and SSDE. The automated nature of the system provides the opportunity for consistent CT radiation dose optimization on a broad scale. © RSNA, 2013.
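The individuals control chart used above to separate common-cause from special-cause variation can be computed with the standard moving-range formula; a minimal sketch, with hypothetical per-exam noise-error values.

```python
import numpy as np

def individuals_chart_limits(values):
    """Control limits for an individuals (X) chart: mean +/- 2.66 * mean
    moving range. Points outside the limits suggest special-cause variation."""
    x = np.asarray(values, dtype=float)
    moving_ranges = np.abs(np.diff(x))        # ranges between consecutive exams
    center = x.mean()
    half_width = 2.66 * moving_ranges.mean()  # 2.66 = 3 / d2 with d2 = 1.128
    return center - half_width, center, center + half_width

# Hypothetical per-exam differences between actual and target noise (HU):
lcl, center, ucl = individuals_chart_limits([-1.2, 0.4, -0.3, 1.1, -0.8, 0.2])
print(f"LCL={lcl:.2f}, center={center:.2f}, UCL={ucl:.2f}")
```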
Automated MAD and MIR structure solution
Terwilliger, Thomas C.; Berendzen, Joel
1999-01-01
Obtaining an electron-density map from X-ray diffraction data can be difficult and time-consuming even after the data have been collected, largely because MIR and MAD structure determinations currently require many subjective evaluations of the qualities of trial heavy-atom partial structures before a correct heavy-atom solution is obtained. A set of criteria for evaluating the quality of heavy-atom partial solutions in macromolecular crystallography have been developed. These have allowed the conversion of the crystal structure-solution process into an optimization problem and have allowed its automation. The SOLVE software has been used to solve MAD data sets with as many as 52 selenium sites in the asymmetric unit. The automated structure-solution process developed is a major step towards the fully automated structure-determination, model-building and refinement procedure which is needed for genomic scale structure determinations. PMID:10089316
Benn, D K; Minden, N J; Pettigrew, J C; Shim, M
1994-08-01
President Clinton's Health Security Act proposes the formation of large scale health plans with improved quality assurance. Dental radiography consumes 4% ($1.2 billion in 1990) of total dental expenditure yet regular systematic office quality assurance is not performed. A pilot automated method is described for assessing density of exposed film and fogging of unexposed processed film. A workstation and camera were used to input intraoral radiographs. Test images were produced from a phantom jaw with increasing exposure times. Two radiologists subjectively classified the images as too light, acceptable, or too dark. A computer program automatically classified global grey level histograms from the test images as too light, acceptable, or too dark. The program correctly classified 95% of 88 clinical films. Optical density of unexposed film in the range 0.15 to 0.52 measured by computer was reliable to better than 0.01. Further work is needed to see if comprehensive centralized automated radiographic quality assurance systems with feedback to dentists are feasible, are able to improve quality, and are significantly cheaper than conventional clerical methods.
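A minimal sketch of the histogram-based classification idea described above; the grey-level thresholds are hypothetical placeholders rather than the study's calibrated values.

```python
import numpy as np

def classify_density(image, light_threshold=170.0, dark_threshold=70.0):
    """Classify a digitized radiograph as too light, acceptable, or too dark
    from its global grey-level distribution. Thresholds are illustrative."""
    mean_grey = float(np.asarray(image).mean())
    if mean_grey > light_threshold:   # under-exposed film digitizes bright
        return "too light"
    if mean_grey < dark_threshold:    # over-exposed film digitizes dark
        return "too dark"
    return "acceptable"
```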
Sarkozi, Laszlo; Simson, Elkin; Ramanathan, Lakshmi
2003-03-01
Thirty-six years of data and the history of laboratory practice at our institution have enabled us to follow the effects of analytical automation and, more recently, of pre-analytical and post-analytical automation on productivity, cost reduction and enhanced quality of service. In 1998, we began the operation of a pre- and post-analytical automation system (robotics), together with an advanced laboratory information system, to process specimens prior to analysis, deliver them to various automated analytical instruments, specimen outlet racks and finally to refrigerated stockyards. After 3 years of continuous operation, we compared the chemistry part of the system with the prior 33 years and quantitated the financial impact of the various stages of automation. Between 1965 and 2000, the Consumer Price Index increased by a factor of 5.5 in the United States. During the same 36 years, productivity in our institution's Chemistry Department (the number of reported test results/employee/year) increased from 10,600 to 104,558 (9.9-fold). When expressed in constant 1965 dollars, the total cost per test decreased from $0.79 to $0.15. Turnaround time for availability of results on patient units decreased to the extent that stat specimens requiring a turnaround time of <1 h do not need to be separately prepared or prioritized on the system. Our experience shows that the introduction of a robotics system for perianalytical automation has brought a large improvement in productivity together with decreased operational cost. It enabled us to significantly increase our workload with a reduction in personnel. In addition, stats are handled easily and there are benefits, such as safer working conditions and improved sample identification, which are difficult to quantify at this stage.
Performance of Copan WASP for Routine Urine Microbiology
Quiblier, Chantal; Jetter, Marion; Rominski, Mark; Mouttet, Forouhar; Böttger, Erik C.; Keller, Peter M.
2015-01-01
This study compared a manual workup of urine clinical samples with fully automated WASPLab processing. As a first step, two different inocula (1 and 10 μl) and different streaking patterns were compared using WASP and InoqulA BT instrumentation. Significantly more single colonies were produced with the 10-μl inoculum than with the 1-μl inoculum, and automated streaking yielded significantly more single colonies than manual streaking on whole plates (P < 0.001). In a second step, 379 clinical urine samples were evaluated using WASP and the manual workup. Average numbers of detected morphologies, recovered species, and CFUs per milliliter of all 379 urine samples showed excellent agreement between WASPLab and the manual workup. The percentage of urine samples clinically categorized as positive or negative did not differ between the automated and manual workflow, but within the positive samples, automated processing by WASPLab resulted in the detection of more potential pathogens. In summary, the present study demonstrates that (i) the streaking pattern, i.e., primarily the number of zigzags/length of streaking lines, is critical for optimizing the number of single colonies yielded from primary cultures of urine samples; (ii) automated streaking by the WASP instrument is superior to manual streaking regarding the number of single colonies yielded (for 32.2% of the samples); and (iii) automated streaking leads to higher numbers of detected morphologies (for 47.5% of the samples), species (for 17.4% of the samples), and pathogens (for 3.4% of the samples). The results of this study point to an improved quality of microbiological analyses and laboratory reports when using automated sample processing by WASP and WASPLab. PMID:26677255
The automated system for technological process of spacecraft's waveguide paths soldering
NASA Astrophysics Data System (ADS)
Tynchenko, V. S.; Murygin, A. V.; Emilova, O. A.; Bocharov, A. N.; Laptenok, V. D.
2016-11-01
The paper addresses automated process control of the induction soldering of spacecraft waveguide paths. The peculiarities of the induction soldering process are analyzed and the need to automate the information-control system is identified. The developed automated system controls the heating of the product by varying the power supplied to the inductor on the basis of the soldering-zone temperature, stabilizing the temperature in a narrow range above the melting point of the solder but below the melting point of the waveguide. Automating the soldering process in this way improves the quality of the waveguides and eliminates burn-throughs. The article shows a block diagram of a software system consisting of five modules and describes its main algorithm. It also describes the operation of the automated waveguide-path soldering system, explaining the system's basic functions and limitations. The developed software allows configuring the measurement equipment, setting and changing the parameters of the soldering process, and viewing graphs of the temperatures recorded by the system. Results of experimental studies are presented that demonstrate high-quality control of the soldering process and the system's applicability to automation tasks.
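The stabilization logic described above can be sketched as a simple proportional control loop that keeps the soldering zone above the solder's melting point but well below the waveguide's. The sensor/actuator interface and every numeric value below are hypothetical.

```python
T_SOLDER_MELT = 220.0      # deg C, illustrative solder melting point
T_WAVEGUIDE_MELT = 660.0   # deg C, illustrative waveguide melting point
T_TARGET = 240.0           # setpoint inside the safe window
SAFETY_MARGIN = 50.0       # deg C kept below the waveguide melting point
KP = 0.8                   # proportional gain, illustrative

def control_step(read_temperature, set_power, power):
    """One iteration of the heating control loop. read_temperature and
    set_power stand in for the real instrument interface."""
    t = read_temperature()
    if t >= T_WAVEGUIDE_MELT - SAFETY_MARGIN:  # prevent burn-throughs
        set_power(0.0)
        return 0.0
    power = max(0.0, power + KP * (T_TARGET - t))  # proportional correction
    set_power(power)
    return power
```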
Analysis And Control System For Automated Welding
NASA Technical Reports Server (NTRS)
Powell, Bradley W.; Burroughs, Ivan A.; Kennedy, Larry Z.; Rodgers, Michael H.; Goode, K. Wayne
1994-01-01
Automated variable-polarity plasma arc (VPPA) welding apparatus operates under electronic supervision by welding analysis and control system. System performs all major monitoring and controlling functions. It acquires, analyzes, and displays weld-quality data in real time and adjusts process parameters accordingly. Also records pertinent data for use in post-weld analysis and documentation of quality. System includes optoelectronic sensors and data processors that provide feedback control of welding process.
Vincent, F; Guyomard, S; Goury, V; Darbord, J C
1987-06-01
The study of growth curves of Klebsiella pneumoniae and Staphylococcus aureus in the presence of five antiseptics, established using an MS2 Abbott system, is presented. Based on our results, the advantages of automation, after adapting the method to determine bactericidal properties, are examined. This technique may be proposed for the quality control of such drugs.
Comparison of VMAT and IMRT strategies for cervical cancer patients using automated planning.
Sharfo, Abdul Wahab M; Voet, Peter W J; Breedveld, Sebastiaan; Mens, Jan Willem M; Hoogeman, Mischa S; Heijmen, Ben J M
2015-03-01
In a published study on cervical cancer, 5-beam IMRT was inferior to single-arc VMAT. Here we compare 9-, 12-, and 20-beam IMRT with single- and dual-arc VMAT. For each of 10 patients, automated plan generation with the in-house Erasmus-iCycle optimizer was used to assist an expert planner in generating the five plans with the clinical TPS. For each patient, all plans were clinically acceptable with a high and similar PTV coverage. OAR sparing increased when going from 9 to 12 to 20 IMRT beams, and from single- to dual-arc VMAT. For all patients, 12- and 20-beam IMRT were superior to single- and dual-arc VMAT, with substantial variations in gain among the study patients. As expected, delivery of VMAT plans was significantly faster than delivery of IMRT plans. The increased plan quality often reported for VMAT compared with IMRT was not observed for cervical cancer. Twenty- and 12-beam IMRT plans had a higher quality than single- and dual-arc VMAT. For individual patients, the optimal delivery technique depends on a complex trade-off between plan quality and treatment time that may change with the introduction of faster delivery systems. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
The "hospital central laboratory": automation, integration and clinical usefulness.
Zaninotto, Martina; Plebani, Mario
2010-07-01
Recent technological developments in laboratory medicine have led to a major challenge: maintaining a close connection between the search for efficiency through automation and consolidation and the assurance of effectiveness. The adoption of systems that automate most of the manual tasks characterizing routine activities has significantly improved the quality of laboratory performance; total laboratory automation is the paradigm of the idea that "human-less" robotic laboratories may allow for better operation and ensure fewer human errors. Furthermore, even if ongoing technological developments have considerably improved the productivity of clinical laboratories as well as reducing the turnaround time of the entire process, the value of qualified personnel remains a significant issue. Recent evidence confirms that automation allows clinical laboratories to improve analytical performance only if trained staff operate in accordance with well-defined standard operating procedures, thus assuring continuous monitoring of analytical quality. In addition, laboratory automation may improve the appropriateness of test requests through the use of algorithms and reflex testing. This should allow the adoption of clinical and biochemical guidelines. In conclusion, in laboratory medicine, technology represents a tool for improving clinical effectiveness and patient outcomes, but it has to be managed by qualified laboratory professionals.
Managing laboratory automation
Saboe, Thomas J.
1995-01-01
This paper discusses the process of managing automated systems through their life cycles within the quality-control (QC) laboratory environment. The focus is on the process of directing and managing the evolving automation of a laboratory; system examples are given. The author shows how both task and data systems have evolved, and how they interrelate. A BIG picture, or continuum view, is presented and some of the reasons for success or failure of the various examples cited are explored. Finally, some comments on future automation needs are discussed. PMID:18925018
"First generation" automated DNA sequencing technology.
Slatko, Barton E; Kieleczawa, Jan; Ju, Jingyue; Gardner, Andrew F; Hendrickson, Cynthia L; Ausubel, Frederick M
2011-10-01
Beginning in the 1980s, automation of DNA sequencing has greatly increased throughput, reduced costs, and enabled large projects to be completed more easily. The development of automation technology paralleled the development of other aspects of DNA sequencing: better enzymes and chemistry, separation and imaging technology, sequencing protocols, robotics, and computational advancements (including base-calling algorithms with quality scores, database developments, and sequence analysis programs). Despite the emergence of high-throughput sequencing platforms, automated Sanger sequencing technology remains useful for many applications. This unit provides background and a description of the "First-Generation" automated DNA sequencing technology. It also includes protocols for using the current Applied Biosystems (ABI) automated DNA sequencing machines. © 2011 by John Wiley & Sons, Inc.
Conversion of Radiology Reporting Templates to the MRRT Standard.
Kahn, Charles E; Genereaux, Brad; Langlotz, Curtis P
2015-10-01
In 2013, the Integrating the Healthcare Enterprise (IHE) Radiology workgroup developed the Management of Radiology Report Templates (MRRT) profile, which defines both the format of radiology reporting templates using an extension of Hypertext Markup Language version 5 (HTML5), and the transportation mechanism to query, retrieve, and store these templates. Of 200 English-language report templates published by the Radiological Society of North America (RSNA), initially encoded as text and in an XML schema language, 168 have been converted successfully into MRRT using a combination of automated processes and manual editing; conversion of the remaining 32 templates is in progress. The automated conversion process applied Extensible Stylesheet Language Transformation (XSLT) scripts, an XML parsing engine, and a Java servlet. The templates were validated for proper HTML5 and MRRT syntax using web-based services. The MRRT templates allow radiologists to share best-practice templates across organizations and have been uploaded to the template library to supersede the prior XML-format templates. By using MRRT transactions and MRRT-format templates, radiologists will be able to directly import and apply templates from the RSNA Report Template Library in their own MRRT-compatible vendor systems. The availability of MRRT-format reporting templates will stimulate adoption of the MRRT standard and is expected to advance the sharing and use of templates to improve the quality of radiology reports.
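The XSLT-based conversion step can be sketched in Python with lxml, which provides an XSLT engine; the file names are illustrative and the actual RSNA stylesheets are not reproduced here.

```python
from lxml import etree

# Apply an XSLT stylesheet to convert an XML-format report template into
# MRRT-style HTML5, mirroring the automated step described above.
transform = etree.XSLT(etree.parse("xml_to_mrrt.xsl"))  # hypothetical stylesheet
template = etree.parse("chest_ct_template.xml")         # hypothetical template
mrrt_html = transform(template)
print(str(mrrt_html))
```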
Robotic and automatic welding development at the Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
Jones, C. S.; Jackson, M. E.; Flanigan, L. A.
1988-01-01
Welding automation is the key to two major development programs to improve quality and reduce the cost of manufacturing space hardware currently undertaken by the Materials and Processes Laboratory of the NASA Marshall Space Flight Center. Variable polarity plasma arc welding has demonstrated its effectiveness on class 1 aluminum welding in external tank production. More than three miles of welds were completed without an internal defect. Much of this success can be credited to automation developments which stabilize the process. Robotic manipulation technology is under development for automation of welds on the Space Shuttle's main engines utilizing pathfinder systems in development of tooling and sensors for the production applications. The overall approach to welding automation development undertaken is outlined. Advanced sensors and control systems methodologies are described that combine to make aerospace quality welds with a minimum of dependence on operator skill.
Conceptual design of an aircraft automated coating removal system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, J.E.; Draper, J.V.; Pin, F.G.
1996-05-01
Paint stripping of the U.S. Air Force's large transport aircraft is currently a labor-intensive, manual process. Significant reductions in costs, personnel and turnaround time can be accomplished by the judicious use of automation in some process tasks. This paper presents the conceptual design of a coating removal system for the tail surfaces of the C-5 plane. Emphasis is placed on the technology selection to optimize human-automation synergy with respect to overall costs, throughput, quality, safety, and reliability. Trade-offs between field-proven vs. research-requiring technologies, and between expected gain vs. cost and complexity, have led to a conceptual design which is semi-autonomous (relying on the human for task specification and disturbance handling) yet incorporates sensor-based automation (for sweep path generation and tracking, surface following, stripping quality control and tape/breach handling).
FY 1997 Financial Reporting by The Defense Automated Printing Service.
1998-09-23
FY 1997 Financial Reporting by the Defense Automated Printing Service. Report No. 98-201, September 23, 1998, Office of the Inspector...ACCOUNTING SERVICE; DIRECTOR, DEFENSE LOGISTICS AGENCY; DIRECTOR, DEFENSE AUTOMATED PRINTING SERVICE. SUBJECT: Audit Report on FY 1997 Financial Reporting by the...General for Auditing, Office of the Inspector General, DoD, Report No. 98-201, September 23, 1998 (Project No. 8FJ-2002.04).
Bell, Michael J; Gillespie, Colin S; Swan, Daniel; Lord, Phillip
2012-09-15
Annotations are a key feature of many biological databases, used to convey our knowledge of a sequence to the reader. Ideally, annotations are curated manually; however, manual curation is costly, time consuming and requires expert knowledge and training. Given these issues and the exponential increase of data, many databases implement automated annotation pipelines in an attempt to avoid un-annotated entries. Both manual and automated annotations vary in quality between databases and annotators, making assessment of annotation reliability problematic for users. The community lacks a generic measure for determining annotation quality and correctness, which we address in this article. Specifically, we investigate word reuse within bulk textual annotations and relate this to Zipf's Principle of Least Effort. We use the UniProt Knowledgebase (UniProtKB) as a case study to demonstrate this approach, since it allows us to compare annotation change, both over time and between automated and manually curated annotations. By applying power-law distributions to word reuse in annotation, we show clear trends in UniProtKB over time, which are consistent with existing studies of quality on free-text English. Further, we show a clear distinction between manual and automated analysis and investigate cohorts of protein records as they mature. These results suggest that this approach holds distinct promise as a mechanism for judging annotation quality. Source code is available at the authors' website: http://homepages.cs.ncl.ac.uk/m.j.bell1/annotation. phillip.lord@newcastle.ac.uk.
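A simple proxy for the word-reuse analysis described above: count word frequencies across a set of annotations and fit a power-law exponent to the log-log rank-frequency curve. This is a sketch of the general technique, not the authors' published pipeline.

```python
import numpy as np
from collections import Counter

def zipf_exponent(annotation_texts):
    """Least-squares fit of log(frequency) against log(rank); Zipf-like
    text yields an exponent near 1."""
    counts = Counter(word for text in annotation_texts
                     for word in text.lower().split())
    freqs = np.array(sorted(counts.values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    slope, _intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope

# Toy input; real use would pass thousands of UniProtKB-style annotations.
print(zipf_exponent(["catalyzes the hydrolysis of ATP",
                     "catalyzes the transfer of a phosphate group"]))
```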
Automatic structured grid generation using Gridgen (some restrictions apply)
NASA Technical Reports Server (NTRS)
Chawner, John R.; Steinbrenner, John P.
1995-01-01
The authors have noticed in the recent grid generation literature an emphasis on the automation of structured grid generation. The motivation behind such work is clear; grid generation is easily the most despised task in the grid-analyze-visualize triad of computational analysis (CA). However, because grid generation is closely coupled to both the design and analysis software and because quantitative measures of grid quality are lacking, 'push button' grid generation usually results in a compromise between speed, control, and quality. Overt emphasis on automation obscures the substantive issues of providing users with flexible tools for generating and modifying high quality grids in a design environment. In support of this paper's tongue-in-cheek title, many features of the Gridgen software are described. Gridgen is by no stretch of the imagination an automatic grid generator. Despite this fact, the code does utilize many automation techniques that permit interesting regenerative features.
Automated surface quality inspection with ARGOS: a case study
NASA Astrophysics Data System (ADS)
Kiefhaber, Daniel; Etzold, Fabian; Warken, Arno F.; Asfour, Jean-Michel
2017-06-01
The commercial availability of automated inspection systems for optical surfaces specified according to ISO 10110-7 promises unsupervised and automated quality control with reproducible results. In this study, the classification results of the ARGOS inspection system are compared to the decisions by well-trained inspectors based on manual-visual inspection. Both are found to agree in 93.6% of the studied cases. Exemplary cases with differing results are studied, and shown to be partly caused by shortcomings of the ISO 10110-7 standard, which was written for the industry standard manual-visual inspection. Applying it to high resolution images of the whole surface of objective machine vision systems brings with it a few challenges which are discussed.
Kirkendall, E S; Spires, W L; Mottes, T A; Schaffzin, J K; Barclay, C; Goldstein, S L
2014-01-01
Nephrotoxic medication-associated acute kidney injury (NTMx-AKI) is a costly clinical phenomenon and more common than previously recognized. Prior efforts to use technology to identify AKI have focused on detection after renal injury has occurred. Describe an approach and provide a technical framework for the creation of risk-stratifying AKI triggers and the development of an application to manage the AKI trigger data. Report the performance characteristics of those triggers and the refinement process and on the challenges of implementation. Initial manual trigger screening guided design of an automated electronic trigger report. A web-based application was designed to alleviate inefficiency and serve as a user interface and central workspace for the project. Performance of the NTMx exposure trigger reports from September 2011 to September 2013 were evaluated using sensitivity (SN), specificity (SP), positive and negative predictive values (PPV, NPV). Automated reports were created to replace manual screening for NTMx-AKI. The initial performance of the NTMx exposure triggers for SN, SP, PPV, and NPV all were ≥0.78, and increased over the study, with all four measures reaching ≥0.95 consistently. A web-based application was implemented that simplifies data entry and couriering from the reports, expedites results viewing, and interfaces with an automated data visualization tool. Sociotechnical challenges were logged and reported. We have built a risk-stratifying system based on electronic triggers that detects patients at-risk for NTMx-AKI before injury occurs. The performance of the NTMx-exposed reports has neared 100% through iterative optimization. The complexity of the trigger logic and clinical workflows surrounding NTMx-AKI led to a challenging implementation, but one that has been successful from technical, clinical, and quality improvement standpoints. This report summarizes the construction of a trigger-based application, the performance of the triggers, and the challenges uncovered during the design, build, and implementation of the system.
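As a rough illustration of an exposure-style trigger, the sketch below flags a patient when simple medication-count or aminoglycoside-duration thresholds are met. These thresholds are assumptions for illustration only; the paper's actual trigger logic is more elaborate.

```python
def ntmx_exposure_trigger(active_nephrotoxic_meds, iv_aminoglycoside_days):
    """Return True when an illustrative exposure threshold is met: three or
    more concurrent nephrotoxic medications, or an IV aminoglycoside for
    three or more days. Thresholds are assumptions, not the study's rules."""
    return len(active_nephrotoxic_meds) >= 3 or iv_aminoglycoside_days >= 3

# Hypothetical patient on three concurrent nephrotoxic medications:
print(ntmx_exposure_trigger({"vancomycin", "ketorolac", "lisinopril"}, 0))
```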
1975-12-01
APPENDIX A: BASIC CONCEPT OF MILITARY TECHNICAL CONTROL...APPENDIX E: TEST EQUIPMENT REQUIRED FOR MEASUREMENT OF PARAMETERS...Control (SATEC) Automatic Facilities Report, Army Automated Quality Monitoring Reporting System (AQMPS), Army Automated Technical Control-Semi (ATC-Semi)...technical control then becomes equipment status monitoring. All the major equipment in a system would have internal sensors with properly selected parameters
Spielberg, Freya; Kurth, Ann; Reidy, William; McKnight, Teka; Dikobe, Wame; Wilson, Charles
2016-01-01
This article highlights findings from an evaluation that explored the impact of mobile versus clinic-based testing, rapid versus central-lab based testing, incentives for testing, and the use of a computer counseling program to guide counseling and automate evaluation in a mobile program reaching people of color at risk for HIV. The program’s results show that an increased focus on mobile outreach using rapid testing, incentives and health information technology tools may improve program acceptability, quality, productivity and timeliness of reports. This article describes program design decisions based on continuous quality assessment efforts. It also examines the impact of the Computer Assessment and Risk Reduction Education computer tool on HIV testing rates, staff perception of counseling quality, program productivity, and on the timeliness of evaluation reports. The article concludes with a discussion of implications for programmatic responses to the Centers for Disease Control and Prevention’s HIV testing recommendations. PMID:21689041
Petrova, Darinka Todorova; Cocisiu, Gabriela Ariadna; Eberle, Christoph; Rhode, Karl-Heinz; Brandhorst, Gunnar; Walson, Philip D; Oellerich, Michael
2013-09-01
The aim of this study was to develop a novel method for automated quantification of cell-free hemoglobin (fHb) based on the hemolysis index (HI) (Roche Diagnostics). The novel fHb method based on the HI was correlated with fHb measured using the triple-wavelength methods of both Harboe [fHb, g/L = (0.915 * HI + 2.634)/100] and Fairbanks et al. [fHb, g/L = (0.917 * HI + 2.131)/100]. fHb concentrations were estimated from the HI using the Roche Modular automated platform in self-made and commercially available quality controls, as well as samples from a proficiency testing scheme (INSTAND). The fHb results using the Roche automated HI were then compared to results obtained using the traditional spectrophotometric assays for one hundred plasma samples with varying degrees of hemolysis, lipemia and/or bilirubinemia. The novel method using automated HI quantification on the Roche Modular clinical chemistry platform correlated well with results using the classical methods in the 100 patient samples (Harboe: r = 0.9284; Fairbanks et al.: r = 0.9689), and recovery was good for self-made controls. However, commercially available quality controls showed poor recovery due to an unidentified matrix problem. The novel method produced reliable determination of fHb in samples without interferences. However, poor recovery using commercially available fHb quality control samples currently greatly limits its usefulness. © 2013.
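The two regression equations quoted above translate directly into code; a minimal sketch.

```python
def fhb_from_hi(hi, method="harboe"):
    """Estimate cell-free hemoglobin (g/L) from the Roche hemolysis index
    using the regression equations quoted in the abstract."""
    if method == "harboe":
        return (0.915 * hi + 2.634) / 100.0
    if method == "fairbanks":
        return (0.917 * hi + 2.131) / 100.0
    raise ValueError("method must be 'harboe' or 'fairbanks'")

print(f"{fhb_from_hi(100):.3f} g/L")  # HI of 100 -> ~0.941 g/L by Harboe
```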
Maximizing coupling-efficiency of high-power diode lasers utilizing hybrid assembly technology
NASA Astrophysics Data System (ADS)
Zontar, D.; Dogan, M.; Fulghum, S.; Müller, T.; Haag, S.; Brecher, C.
2015-03-01
In this paper, we present hybrid assembly technology to maximize coupling efficiency for spatially combined laser systems. High quality components, such as center-turned focusing units, as well as suitable assembly strategies are necessary to obtain the highest possible output ratios. Alignment strategies are challenging tasks due to their complexity and sensitivity. Especially in low-volume production, fully automated systems are economically at a disadvantage, as operator experience is often expensive. However, reproducibility and quality of automatically assembled systems can be superior. Therefore, automated and manual assembly techniques are combined to obtain high coupling efficiency while preserving maximum flexibility. The paper will describe the necessary equipment and software to enable hybrid assembly processes. Micromanipulator technology with high step-resolution and six degrees of freedom provides a large number of possible evaluation points. Automated algorithms are necessary to speed up data gathering and alignment to efficiently utilize the available granularity for manual assembly processes. Furthermore, an engineering environment is presented to enable rapid prototyping of automation tasks with simultaneous data evaluation. Integration with simulation environments, e.g. Zemax, allows the verification of assembly strategies in advance. Data-driven decision making ensures constant high quality, documents the assembly process and is a basis for further improvement. The hybrid assembly technology has been applied in several applications with efficiencies above 80% and will be discussed in this paper. High-level coupling efficiency has been achieved with minimized assembly as a result of semi-automated alignment. This paper will focus on hybrid automation for optimizing and attaching turning mirrors and collimation lenses.
Command and Control Common Semantic Core Required to Enable Net-centric Operations
2008-05-20
automated processing capability. A former US Marine Corps component C4 director during Operation Iraqi Freedom identified the problems of 1) uncertainty...interoperability improvements to warfighter community processes, thanks to ubiquitous automated processing, are likely high and somewhat easier to quantify. A...synchronized with the actions of other partners/warfare communities. This requires high-quality information, rapid sharing and automated processing – which
Finding the ’RITE’ Acquisition Environment for Navy C2 Software
2015-05-01
• Boilerplate contract language - Government purpose rights • Adding expectation of quality to contracting language • Template SOWs created Pr...Debugger; MCCABE IQ - Static Analysis - Cyclomatic Complexity and KSLOC, All Languages; HP Fortify - Security Scan - STIG and Vulnerabilities, Security & IA...GSSAT (GOTS) - Security Scan - STIG and Vulnerabilities; AutoIT - Automated Test Scripting - Engine for Automation Functional Testing; TestComplete - Automated
Habash, Marc; Johns, Robert
2009-10-01
This study compared an automated Escherichia coli and coliform detection system with the membrane filtration direct count technique for water testing. The automated instrument performed equal to or better than the membrane filtration test in analyzing E. coli-spiked samples and blind samples with interference from Proteus vulgaris or Aeromonas hydrophila.
Automating the application of smart materials for protein crystallization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khurshid, Sahir; Govada, Lata; EL-Sharif, Hazim F.
2015-03-01
The first semi-liquid, non-protein nucleating agent for automated protein crystallization trials is described. This 'smart material' is demonstrated to induce crystal growth and will provide a simple, cost-effective tool for scientists in academia and industry. The fabrication and validation of the first semi-liquid non-protein nucleating agent to be administered automatically to crystallization trials is reported. This research builds upon prior demonstration of the suitability of molecularly imprinted polymers (MIPs; known as 'smart materials') for inducing protein crystal growth. Modified MIPs of altered texture suitable for high-throughput trials are demonstrated to improve crystal quality and to increase the probability of success when screening for suitable crystallization conditions. The application of these materials is simple, time-efficient and will provide a potent tool for structural biologists embarking on crystallization trials.
Automated Tumor Registry for Oncology. A VA-DHCP MUMPS application.
Richie, S
1992-01-01
The VA Automated Tumor Registry for Oncology, Version 2, is a multifaceted, completely automated user-friendly cancer database. Easy to use modules include: Automatic Casefinding; Suspense Files; Abstracting and Printing; Follow-up; Annual Reports; Statistical Reports; Utility Functions.
Software development infrastructure for the HYBRID modeling and simulation project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epiney, Aaron S.; Kinoshita, Robert A.; Kim, Jong Suk
One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure tries to find the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety "research and development" software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as the application development continues. Despite the low quality requirement level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source python script called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner and generates and runs unit tests from Modelica scripts written by developers. In order to assure effective communication between the different national laboratories, a biweekly videoconference has been set up, where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list. This is a list to which everybody can send emails that will be received by the collective of the developers and managers involved in the project. Thirdly, to exchange documents quickly, a SharePoint directory has been set up. SharePoint allows teams and organizations to intelligently share and collaborate on content from anywhere.
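The automated unit testing described above can be driven from Python roughly as follows; exact Tester options vary across BuildingsPy versions, so treat this as a sketch rather than the project's actual CI script.

```python
from buildingspy.development import regressiontest

# Run the Modelica regression tests of a library with Dymola, in the spirit
# of the automated unit testing described above.
tester = regressiontest.Tester()
tester.batchMode(True)  # run non-interactively, without blocking on plots
tester.run()            # discover and run unit tests from Modelica scripts
```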
Object-oriented controlled-vocabulary translator using TRANSOFT + HyperPAD.
Moore, G. W.; Berman, J. J.
1991-01-01
Automated coding of surgical pathology reports is demonstrated. This public-domain translation software operates on surgical pathology files, extracting diagnoses and assigning codes in a controlled medical vocabulary, such as SNOMED. Context-sensitive translation algorithms are employed, and syntactically correct diagnostic items are produced that are matched with controlled vocabulary. English-language surgical pathology reports, accessioned over one year at the Baltimore Veterans Affairs Medical Center, were translated. With an interface to a larger hospital information system, all natural language pathology reports are automatically rendered as topography and morphology codes. This translator frees the pathologist from the time-intensive task of personally coding each report, and may be used to flag certain diagnostic categories that require specific quality assurance actions. PMID:1807773
A framework for automatic information quality ranking of diabetes websites.
Belen Sağlam, Rahime; Taskaya Temizel, Tugba
2015-01-01
Objective: When searching for particular medical information on the internet, the challenge lies in distinguishing the websites that are relevant to the topic and contain accurate information. In this article, we propose a framework that automatically identifies and ranks diabetes websites according to their relevance and information quality based on the website content. Design: The proposed framework ranks diabetes websites according to their content quality, relevance and evidence-based medicine. The framework combines information retrieval techniques with a lexical resource based on SentiWordNet, making it possible to work with biased and untrusted websites while, at the same time, ensuring content relevance. Measurement: The evaluation measurements used were Pearson correlation, true positives, false positives and accuracy. We tested the framework with a benchmark data set consisting of 55 websites with varying degrees of information quality problems. Results: The proposed framework gives good results that are comparable with the non-automated information quality measuring approaches in the literature. The correlation between the results of the proposed automated framework and the ground truth is 0.68 on average with p < 0.001, which is higher than that of the other automated methods proposed in the literature (average r score 0.33).
Industrial applications of automated X-ray inspection
NASA Astrophysics Data System (ADS)
Shashishekhar, N.
2015-03-01
Many industries require that 100% of manufactured parts be X-ray inspected. Factors such as high production rates, focus on inspection quality, operator fatigue and inspection cost reduction translate to an increasing need for automating the inspection process. Automated X-ray inspection involves the use of image processing algorithms and computer software for analysis and interpretation of X-ray images. This paper presents industrial applications and illustrative case studies of automated X-ray inspection in areas such as automotive castings, fuel plates, air-bag inflators and tires. It is usually necessary to employ application-specific automated inspection strategies and techniques, since each application has unique characteristics and interpretation requirements.
Information systems as a quality management tool in clinical laboratories
NASA Astrophysics Data System (ADS)
Schmitz, Vanessa; Rosecler Bez el Boukhari, Marta
2007-11-01
This article describes information systems as a quality management tool in clinical laboratories. The quality of laboratory analyses is of fundamental importance for health professionals in aiding appropriate diagnosis and treatment. Information systems allow the automation of internal quality management processes, using standard sample tests, Levey-Jennings charts and Westgard multirule analysis. This simplifies evaluation and interpretation of quality tests and reduces the possibility of human error. This study proposes the development of an information system with appropriate functions and costs for the automation of internal quality control in small and medium-sized clinical laboratories. To this end, it evaluates the functions and usability of two commercial software products designed for this purpose, identifying the positive features of each, so that these can be taken into account during the development of the proposed system.
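Two of the Westgard multirules mentioned above (1-3s and 2-2s) can be evaluated in a few lines; a minimal sketch, not a complete multirule implementation.

```python
import numpy as np

def westgard_flags(values, target_mean, target_sd):
    """Flag 1-3s violations (one point beyond +/-3 SD) and 2-2s violations
    (two consecutive points beyond the same +/-2 SD limit)."""
    z = (np.asarray(values, dtype=float) - target_mean) / target_sd
    flags = []
    for i in range(len(z)):
        if abs(z[i]) > 3:
            flags.append((i, "1-3s"))
        if i >= 1 and ((z[i - 1] > 2 and z[i] > 2) or
                       (z[i - 1] < -2 and z[i] < -2)):
            flags.append((i, "2-2s"))
    return flags

# Hypothetical control results around a target of 100 with SD 2:
print(westgard_flags([100.5, 104.2, 104.8, 99.1, 107.0], 100.0, 2.0))
```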
Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.
Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A
2016-04-01
Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population-based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine whether a retinal image is suitable for generating vessel morphometric data for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement and at low cost. Copyright © 2016 Elsevier Ltd. All rights reserved.
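The classification step described (a support vector machine over a small per-image feature set) can be sketched as below. The three features, labels, and decision rule are synthetic placeholders, not the QUARTZ pipeline.

```python
# Hedged sketch of an SVM image-quality classifier over a 3-dimensional
# feature vector per image; features and labels are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((800, 3))                     # e.g. vessel-map derived features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)    # 1 = adequate, 0 = inadequate (toy rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"accuracy = {clf.score(X_te, y_te):.3f}")
```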
Karnowski, T P; Aykac, D; Giancardo, L; Li, Y; Nichols, T; Tobin, K W; Chaum, E
2011-01-01
The automated detection of diabetic retinopathy and other eye diseases in images of the retina has great promise as a low-cost method for broad-based screening. Many systems in the literature which perform automated detection include a quality estimation step and physiological feature detection, including the vascular tree and the optic nerve / macula location. In this work, we study the robustness of an automated disease detection method with respect to the accuracy of the optic nerve location and the quality of the images obtained as judged by a quality estimation algorithm. The detection algorithm features microaneurysm and exudate detection followed by feature extraction on the detected population to describe the overall retina image. Labeled images of retinas ground-truthed to disease states are used to train a supervised learning algorithm to identify the disease state of the retina image and exam set. Under the restrictions of high confidence optic nerve detections and good quality imagery, the system achieves a sensitivity and specificity of 94.8% and 78.7% with area-under-curve of 95.3%. Analysis of the effect of constraining quality and the distinction between mild non-proliferative diabetic retinopathy, normal retina images, and more severe disease states is included.
Near real time water quality monitoring of Chivero and Manyame lakes of Zimbabwe
NASA Astrophysics Data System (ADS)
Muchini, Ronald; Gumindoga, Webster; Togarepi, Sydney; Pinias Masarira, Tarirai; Dube, Timothy
2018-05-01
Zimbabwe's water resources are under pressure from both point and non-point sources of pollution, hence the need for regular and synoptic assessment. In-situ and laboratory-based methods of water quality monitoring are point based and do not provide synoptic coverage of the lakes. This paper presents novel methods for retrieving water quality parameters in Chivero and Manyame lakes, Zimbabwe, from remotely sensed imagery. The remotely sensed water quality parameters are validated against in-situ data. It also presents an application for automated retrieval of those parameters, developed in VB6, as well as a web portal for disseminating the water quality information to relevant stakeholders. The web portal is developed using GeoServer, OpenLayers, and HTML. Results show the spatial variation of water quality and demonstrate an automated remote sensing and GIS system with a web front end for disseminating water quality information.
Automated Tumor Registry for Oncology. A VA-DHCP MUMPS application.
Richie, S.
1992-01-01
The VA Automated Tumor Registry for Oncology, Version 2, is a multifaceted, completely automated user-friendly cancer database. Easy to use modules include: Automatic Casefinding; Suspense Files; Abstracting and Printing; Follow-up; Annual Reports; Statistical Reports; Utility Functions. PMID:1482866
Automated Blazar Light Curves Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Spencer James
2017-07-27
This presentation describes a problem and methodology pertaining to automated blazar light curves. Namely, optical variability patterns for blazars require the construction of light curves and in order to generate the light curves, data must be filtered before processing to ensure quality.
NASA Astrophysics Data System (ADS)
Morgan, E. L.; Eagleson, K. W.; Hermann, R.; McCollough, N. D.
1981-05-01
Maintaining adequate water quality in a multipurpose drainage system becomes increasingly important as demands on resources become greater. Real-time water quality monitoring plays a crucial role in meeting this objective. In addition to remote automated physical monitoring, developments at the end of the 1970s allow simultaneous real-time measurements of fish breathing response to water quality changes. These advances complement the complex in-stream surveys typically carried out to evaluate the environmental quality of a system. Automated biosensing units having remote capabilities are designed to aid in the evaluation of subtle water quality changes contributing to undesirable conditions in a drainage basin. Using microprocessor-based monitors to measure fish breathing rates, the biosensing units are interfaced to a U.S. National Aeronautics and Space Administration (N.A.S.A.) remote data collection platform for National Oceanic and Atmospheric Administration (N.O.A.A.) GOES satellite retrieval and transmission of data. Simultaneously, multiparameter physical information is collected from site-specific locations and recovered in a similar manner. Real-time biological and physical data received at a data processing center are readily available for interpretation by resource managers. Management schemes incorporating real-time monitoring networks into on-going programs to simultaneously retrieve biological and physical data by satellite, radio and telephone cable give added advantages in maintaining water quality for multipurpose needs.
15 CFR 30.71 - False or fraudulent reporting on or misuse of the Automated Export System.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false False or fraudulent reporting on or misuse of the Automated Export System. 30.71 Section 30.71 Commerce and Foreign Trade Regulations... REGULATIONS Penalties § 30.71 False or fraudulent reporting on or misuse of the Automated Export System. (a...
Scotland, G S; McNamee, P; Fleming, A D; Goatman, K A; Philip, S; Prescott, G J; Sharp, P F; Williams, G J; Wykes, W; Leese, G P; Olson, J A
2010-06-01
To assess the cost-effectiveness of an improved automated grading algorithm for diabetic retinopathy against a previously described algorithm, and in comparison with manual grading. Efficacy of the alternative algorithms was assessed using a reference graded set of images from three screening centres in Scotland (1253 cases with observable/referable retinopathy and 6333 individuals with mild or no retinopathy). Screening outcomes and grading and diagnosis costs were modelled for a cohort of 180 000 people, with prevalence of referable retinopathy at 4%. Algorithm (b), which combines image quality assessment with detection algorithms for microaneurysms (MA), blot haemorrhages and exudates, was compared with a simpler algorithm (a) (using image quality assessment and MA/dot haemorrhage (DH) detection), and the current practice of manual grading. Compared with algorithm (a), algorithm (b) would identify an additional 113 cases of referable retinopathy for an incremental cost of £68 per additional case. Compared with manual grading, automated grading would be expected to identify between 54 and 123 fewer referable cases, for a grading cost saving between £3834 and £1727 per case missed. Extrapolation modelling over a 20-year time horizon suggests manual grading would cost between £25,676 and £267,115 per additional quality adjusted life year gained. Algorithm (b) is more cost-effective than the algorithm based on quality assessment and MA/DH detection. With respect to the value of introducing automated detection systems into screening programmes, automated grading operates within the recommended national standards in Scotland and is likely to be considered a cost-effective alternative to manual disease/no disease grading.
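The incremental cost figure quoted above implies a simple calculation; the sketch below reproduces that arithmetic (113 extra cases at £68 per case) and is illustrative only.

```python
# Worked arithmetic behind the incremental cost reported above: algorithm (b)
# finds 113 more referable cases than (a), at £68 per additional case.
extra_cases = 113
cost_per_extra_case = 68  # GBP, as reported
total_incremental_cost = extra_cases * cost_per_extra_case
print(f"Implied total incremental grading cost: GBP {total_incremental_cost}")
```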
Jahns, Lisa; Johnson, LuAnn K; Scheett, Angela J; Stote, Kim S; Raatz, Susan K; Subar, Amy F; Tande, Desiree
2016-12-01
Systematic seasonal bias may confound efforts to estimate usual dietary intake and diet quality. Little is known about dietary quality over the winter holiday season. The aims of this study were to test for differences in intakes of energy, percentage of energy from macronutrients, fruits and vegetables, and diet quality measured using the Healthy Eating Index 2010 (HEI-2010) by calendar and winter holiday seasons. Longitudinal cohort design. Data were derived from the Life in All Seasons study. Two cohorts of women aged 40 to 60 years (N=52) from the greater Grand Forks, ND, area were followed for 1 year each between July 2012 and July 2014. Each woman completed an online diet recall using the Automated Self-Administered 24-Hour Recall every 10 days during the year, with a 92% response rate. Effects of calendar and winter holiday seasons on intakes of energy, percent energy from macronutrients, HEI-2010 total and component scores, and grams per day of individual fruits and vegetables were tested using mixed linear models. The mean total HEI-2010 score was 60.1±1.4. There were seasonal differences in some HEI-2010 component scores, but not in total scores. More lettuce or mixed lettuce salad was consumed during summer than during winter (P=0.034), and more fresh tomatoes were consumed during summer and fall compared with winter (P=0.001). More corn, berries, peaches and nectarines, and melons (P<0.001) were consumed during summer. There was no seasonal difference in reported intakes of energy (P=0.793). The total HEI-2010 score for dietary intake observed over the winter holiday season was lower than the rest of the year (P<0.001). Reported energy intake was not different (P=0.228). In this population, diet quality is significantly lower during the winter holiday period, but mostly consistent by season. Multiple recalls in any season can give a reasonable representation of usual overall diet quality throughout the year. Copyright © 2016 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
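A mixed linear model of the kind described (season as a fixed effect, participant as a random grouping) can be sketched with statsmodels. The data frame below is synthetic and its column names are hypothetical, not the study's variables.

```python
# A minimal sketch, assuming a long-format table of repeated recalls per
# participant; mirrors the mixed linear models described in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
seasons = ["winter", "spring", "summer", "fall"]
rows = [{"subject": s, "season": season, "hei": 60 + rng.normal(0, 5)}
        for s in range(52) for season in seasons for _ in range(9)]
recalls = pd.DataFrame(rows)

# Season as fixed effect; subject as random intercept (repeated measures).
model = smf.mixedlm("hei ~ season", recalls, groups=recalls["subject"])
print(model.fit().summary())
```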
Bepko, Robert J; Moore, John R; Coleman, John R
2009-01-01
This article reports an intervention to improve the quality and safety of hospital patient care by introducing the use of pharmacy robotics into the medication distribution process. Medication safety is vitally important. The integration of pharmacy robotics with computerized practitioner order entry and bedside medication bar coding produces a significant reduction in medication errors. The creation of a safe medication process, from initial ordering to bedside administration, provides enormous benefits to patients, to health care providers, and to the organization as well.
Report of Workshop on Methodology for Evaluating Potential Lunar Resources Sites
NASA Technical Reports Server (NTRS)
Williams, R. J.; Hubbard, N.
1981-01-01
The type and quantity of lunar materials needed to support a space power satellite program were used to define the type and quality of geological information required to certify a site for exploitation. The existing geological, geochemical, and geophysical data are summarized. The difference between these data and the data required for exploitation is used to define program requirements. Most of these requirements involve linear extensions of existing capabilities, fuller utilization of existing data, or expanded use of automated systems.
NASA Astrophysics Data System (ADS)
Malavallon, Olivier
1995-04-01
Laser beam stripping can be achieved using several active media: YAG, TEA CO2, or excimer. The YAG laser appears to be the most efficient of the lasers assessed in this report. However, the results obtained for productivity, quality, and type of stripping were very poor. Moreover, because of its operating characteristics, the laser beam can only be used for stripping in an automated manner. In spite of these results, it seems that certain companies in Europe have recently developed technical solutions allowing better results to be obtained.
NASA Technical Reports Server (NTRS)
Hyman, William A. (Editor); Goldstein, Stanley H. (Editor)
1991-01-01
Presented here is a compilation of the final reports of the research projects done by the faculty members during the summer of 1991. Topics covered include optical correlation; lunar production and application of solar cells and synthesis of diamond film; software quality assurance; photographic image resolution; target detection using fractal geometry; evaluation of fungal metabolic compounds released to the air in a restricted environment; and planning and resource management in an intelligent automated power management system.
Kim, Youngjun; Gobbel, Glenn Temple; Matheny, Michael E; Redd, Andrew; Bray, Bruce E; Heidenreich, Paul; Bolton, Dan; Heavirland, Julia; Kelly, Natalie; Reeves, Ruth; Kalsy, Megha; Goldstein, Mary Kane; Meystre, Stephane M
2018-01-01
Background: We developed an accurate, stakeholder-informed, automated, natural language processing (NLP) system to measure the quality of heart failure (HF) inpatient care, and explored the potential for adoption of this system within an integrated health care system. Objective: To accurately automate a United States Department of Veterans Affairs (VA) quality measure for inpatients with HF. Methods: We automated the HF quality measure Congestive Heart Failure Inpatient Measure 19 (CHI19) that identifies whether a given patient has left ventricular ejection fraction (LVEF) <40%, and if so, whether an angiotensin-converting enzyme inhibitor or angiotensin-receptor blocker was prescribed at discharge if there were no contraindications. We used documents from 1083 unique inpatients from eight VA medical centers to develop a reference standard (RS) to train (n=314) and test (n=769) the Congestive Heart Failure Information Extraction Framework (CHIEF). We also conducted semi-structured interviews (n=15) for stakeholder feedback on implementation of the CHIEF. Results: The CHIEF classified each hospitalization in the test set with a sensitivity (SN) of 98.9% and positive predictive value of 98.7%, compared with an RS and SN of 98.5% for available External Peer Review Program assessments. Of the 1083 patients available for the NLP system, the CHIEF evaluated and classified 100% of cases. Stakeholders identified potential implementation facilitators and clinical uses of the CHIEF. Conclusions: The CHIEF provided complete data for all patients in the cohort and could potentially improve the efficiency, timeliness, and utility of HF quality measurements. PMID:29335238
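The reported sensitivity and positive predictive value follow from standard confusion-matrix counts; the sketch below shows the computation with hypothetical counts, not the study's actual confusion matrix.

```python
# Sketch of the evaluation metrics reported for the CHIEF classifier;
# the counts below are invented for illustration.
def sensitivity_ppv(tp, fp, fn):
    sn = tp / (tp + fn)    # sensitivity: found among all true positives
    ppv = tp / (tp + fp)   # positive predictive value: correct among flagged
    return sn, ppv

sn, ppv = sensitivity_ppv(tp=270, fp=4, fn=3)
print(f"SN = {sn:.1%}, PPV = {ppv:.1%}")
```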
The Use of AMET & Automated Scripts for Model Evaluation
Brief overview of EPA's new CMAQ website to be launched publicly in June 2017. Details on the upcoming release of the Atmospheric Model Evaluation Tool (AMET) and the creation of automated scripts for post-processing and evaluating air quality model data.
Automated Bus Diagnostic System Demonstration in New York City
DOT National Transportation Integrated Search
1983-12-01
In response to a growing problem with the quality and efficiency of nationwide bus maintenance practices, an award was granted to the Tri-State Regional Planning Commission for the testing of an automated bus diagnostic system (ABDS). The ABDS was de...
U.S. Geological Survey Catskill/Delaware Water-Quality Network: Water-Quality Report Water Year 2006
McHale, Michael R.; Siemion, Jason
2010-01-01
The U.S. Geological Survey operates a 60-station streamgaging network in the New York City Catskill/Delaware Water Supply System. Water-quality samples were collected at 13 of the stations in the Catskill/Delaware streamgaging network to provide resource managers with water-quality and water-quantity data from the water-supply system that supplies about 85 percent of the water needed by the more than 9 million residents of New York City. This report summarizes water-quality data collected at those 13 stations plus one additional station operated as a part of the U.S. Environmental Protection Agency's Regional Long-Term Monitoring Network for the 2006 water year (October 1, 2005 to September 30, 2006). An average of 62 water-quality samples were collected at each station during the 2006 water year, including grab samples collected every other week and storm samples collected with automated samplers. On average, 8 storms were sampled at each station during the 2006 water year. The 2006 calendar year was the second warmest on record and the summer of 2006 was the wettest on record for the northeastern United States. A large storm on June 26-28, 2006, caused extensive flooding in the western part of the network where record peak flows were measured at several watersheds.
Data quality can make or break a research infrastructure
NASA Astrophysics Data System (ADS)
Pastorello, G.; Gunter, D.; Chu, H.; Christianson, D. S.; Trotta, C.; Canfora, E.; Faybishenko, B.; Cheah, Y. W.; Beekwilder, N.; Chan, S.; Dengel, S.; Keenan, T. F.; O'Brien, F.; Elbashandy, A.; Poindexter, C.; Humphrey, M.; Papale, D.; Agarwal, D.
2017-12-01
Research infrastructures (RIs) commonly support observational data provided by multiple, independent sources. Uniformity in the data distributed by such RIs is important in most applications, e.g., in comparative studies using data from two or more sources. Achieving uniformity in terms of data quality is challenging, especially considering that many data issues are unpredictable and cannot be detected until a first occurrence of the issue. As a result, many data quality control activities within RIs require a manual, human-in-the-loop element, making quality control an expensive activity. Our motivating example is the FLUXNET2015 dataset, a collection of ecosystem-level carbon, water, and energy fluxes between land and atmosphere from over 200 sites around the world, some with over 20 years of data. About 90% of the human effort to create the dataset was spent in data quality related activities. Based on this experience, we have been working on solutions to increase the automation of data quality control procedures. Since it is nearly impossible to fully automate all quality related checks, we have been drawing on the experience with techniques used in software development, which shares a few common constraints. In both managing scientific data and writing software, human time is a precious resource; code bases, like science datasets, can be large, complex, and full of errors; both scientific and software endeavors can be pursued by individuals, but collaborative teams can accomplish a lot more. The lucrative and fast-paced nature of the software industry fueled the creation of methods and tools to increase automation and productivity within these constraints. Issue tracking systems, methods for translating problems into automated tests, and powerful version control tools are a few examples. Terrestrial and aquatic ecosystems research relies heavily on many types of observational data. As the volume of collected data increases, ensuring data quality is becoming an unwieldy challenge for RIs. Business-as-usual approaches to data quality do not work with larger data volumes. We believe RIs can benefit greatly from adapting and imitating this body of theory and practice from software quality for data quality, enabling systematic and reproducible safeguards against errors and mistakes in datasets as much as in software.
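One way to read the software-testing analogy concretely: a data-quality rule can be written and run like a unit test. The sketch below is a minimal illustration with invented column names and thresholds, not a FLUXNET procedure.

```python
# Illustrative sketch: a data-quality rule expressed as an automated test,
# in the spirit of the software-engineering analogy above. Column names and
# the plausible-range limits are invented.
import pandas as pd

def test_flux_ranges(df: pd.DataFrame) -> list:
    """Return failure messages; an empty list means the file passes."""
    failures = []
    if df["timestamp"].duplicated().any():
        failures.append("duplicate timestamps")
    out_of_range = ~df["co2_flux"].between(-50, 50)  # plausible-range check
    if out_of_range.any():
        failures.append(f"{out_of_range.sum()} CO2 flux values out of range")
    return failures

site_data = pd.DataFrame({"timestamp": [1, 2, 2, 3],
                          "co2_flux": [3.2, -1.4, 80.0, 0.5]})
print(test_flux_ranges(site_data))
```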
An experiment in software reliability: Additional analyses using data from automated replications
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Lauterbach, Linda A.
1988-01-01
A study undertaken to collect software error data of laboratory quality for use in the development of credible methods for predicting the reliability of software used in life-critical applications is summarized. The software error data reported were acquired through automated repetitive run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications to accumulate a sufficient sample size for error rate estimation. The data collected are used to confirm the results of two Boeing studies, reported in NASA-CR-165836, Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378, Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that the program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raise new questions concerning the phenomenon of interacting faults.
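The log-linear error-rate pattern that the study confirms can be checked by fitting a straight line to log error rates; the sketch below does this on synthetic rates, purely for illustration.

```python
# Minimal sketch of checking a log-linear error-rate pattern: fit
# log(rate) against fault index and inspect the slope. The rates below
# are synthetic stand-ins, not the study's data.
import numpy as np

fault_index = np.arange(1, 8)
error_rate = np.array([0.20, 0.11, 0.061, 0.034, 0.019, 0.010, 0.0058])

slope, intercept = np.polyfit(fault_index, np.log(error_rate), 1)
print(f"log-linear fit: rate ~ exp({intercept:.2f} + {slope:.2f} * index)")
```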
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostick, Debra A.; Hexel, Cole R.; Ticknor, Brian W.
2016-11-01
To shorten the lengthy and costly manual chemical purification procedures, sample preparation methods for mass spectrometry are being automated using commercial-off-the-shelf (COTS) equipment. This addresses a serious need in the nuclear safeguards community to debottleneck the separation of U and Pu in environmental samples—currently performed by overburdened chemists—with a method that allows unattended, overnight operation. In collaboration with Elemental Scientific Inc., the prepFAST-MC2 was designed based on current COTS equipment that was modified for U/Pu separations utilizing Eichrom™ TEVA and UTEVA resins. Initial verification of individual columns yielded small elution volumes with consistent elution profiles and good recovery. Combined column calibration demonstrated ample separation without cross-contamination of the eluent. Automated packing and unpacking of the built-in columns initially showed >15% deviation in resin loading by weight, which can lead to inconsistent separations. Optimization of the packing and unpacking methods led to a reduction in the variability of the packed resin to less than 5% daily. The reproducibility of the automated system was tested with samples containing 30 ng U and 15 pg Pu, which were separated in a series with alternating reagent blanks. These experiments showed very good washout of both the resin and the sample from the columns, as evidenced by low blank values. Analysis of the major and minor isotope ratios for U and Pu provided values well within data quality limits for the International Atomic Energy Agency. Additionally, system process blanks spiked with 233U and 244Pu tracers were separated using the automated system after it was moved outside of a clean room and yielded levels equivalent to clean room blanks, confirming that the system can produce high quality results without the need for expensive clean room infrastructure. Comparison of the amount of personnel time necessary for successful manual vs. automated chemical separations showed a significant decrease in hands-on time from 9.8 hours to 35 minutes for seven samples, respectively. This documented time savings and reduced labor translates to a significant cost savings per sample. Overall, the system will enable faster sample reporting times at reduced costs by limiting personnel hours dedicated to the chemical separation.
Hoppe, Christian; Obermeier, Patrick; Muehlhans, Susann; Alchikh, Maren; Seeber, Lea; Tief, Franziska; Karsch, Katharina; Chen, Xi; Boettcher, Sindy; Diedrich, Sabine; Conrad, Tim; Kisler, Bron; Rath, Barbara
2016-10-01
Regulatory authorities often receive poorly structured safety reports requiring considerable effort to investigate potential adverse events post hoc. Automated question-and-answer systems may help to improve the overall quality of safety information transmitted to pharmacovigilance agencies. This paper explores the use of the VACC-Tool (ViVI Automated Case Classification Tool) 2.0, a mobile application enabling physicians to classify clinical cases according to 14 pre-defined case definitions for neuroinflammatory adverse events (NIAE) and in full compliance with data standards issued by the Clinical Data Interchange Standards Consortium. The validation of the VACC-Tool 2.0 (beta-version) was conducted in the context of a unique quality management program for children with suspected NIAE in collaboration with the Robert Koch Institute in Berlin, Germany. The VACC-Tool was used for instant case classification and for longitudinal follow-up throughout the course of hospitalization. Results were compared to International Classification of Diseases, Tenth Revision (ICD-10) codes assigned in the emergency department (ED). From 07/2013 to 10/2014, a total of 34,368 patients were seen in the ED, and 5243 patients were hospitalized; 243 of these were admitted for suspected NIAE (mean age: 8.5 years), thus participating in the quality management program. Using the VACC-Tool in the ED, 209 cases were classified successfully, 69% of which had been missed or miscoded in the ED reports. Longitudinal follow-up with the VACC-Tool identified additional NIAE. Mobile applications are taking data standards to the point of care, enabling clinicians to ascertain potential adverse events in the ED setting and during inpatient follow-up. Compliance with Clinical Data Interchange Standards Consortium (CDISC) data standards facilitates data interoperability according to regulatory requirements.
Optimising mHealth helpdesk responsiveness in South Africa: towards automated message triage
Engelhard, Matthew; Copley, Charles; Watson, Jacqui; Pillay, Yogan; Barron, Peter
2018-01-01
In South Africa, a national-level helpdesk was established in August 2014 as a social accountability mechanism for improving governance, allowing recipients of public sector services to send complaints, compliments and questions directly to a team of National Department of Health (NDoH) staff members via text message. As demand increases, mechanisms to streamline and improve the helpdesk must be explored. This work aims to evaluate the need for and feasibility of automated message triage to improve helpdesk responsiveness to high-priority messages. Drawing from 65 768 messages submitted between October 2016 and July 2017, the quality of helpdesk message handling was evaluated via detailed inspection of (1) a random sample of 481 messages and (2) messages reporting mistreatment of women, as identified using expert-curated keywords. Automated triage was explored by training a naïve Bayes classifier to replicate message labels assigned by NDoH staff. Classifier performance was evaluated on 12 526 messages withheld from the training set. 90 of 481 (18.7%) NDoH responses were scored as suboptimal or incorrect, with median response time of 4.0 hours. 32 reports of facility-based mistreatment and 39 of partner and family violence were identified; NDoH response time and appropriateness for these messages were not superior to the random sample (P>0.05). The naïve Bayes classifier had average accuracy of 85.4%, with ≥98% specificity for infrequently appearing (<50%) labels. These results show that helpdesk handling of mistreatment of women could be improved. Keyword matching and naïve Bayes effectively identified uncommon messages of interest and could support automated triage to improve handling of high-priority messages. PMID:29713508
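A naive Bayes text classifier of the general kind evaluated above can be sketched in a few lines. The training messages and labels here are invented examples, not helpdesk data, and the real system's features and preprocessing may differ.

```python
# Hedged sketch of a naive Bayes triage classifier over text messages;
# the messages and labels are invented illustrations.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "nurse shouted at me during delivery",
    "thank you for the good service at the clinic",
    "what time does the clinic open",
    "I was turned away and mistreated by staff",
]
labels = ["mistreatment", "compliment", "question", "mistreatment"]

triage = make_pipeline(CountVectorizer(), MultinomialNB())
triage.fit(messages, labels)
print(triage.predict(["the midwife was rude and hit me"]))
```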
Nagy, Paul G; Warnock, Max J; Daly, Mark; Toland, Christopher; Meenan, Christopher D; Mezrich, Reuben S
2009-11-01
Radiology departments today are faced with many challenges to improve operational efficiency, performance, and quality. Many organizations rely on antiquated, paper-based methods to review their historical performance and understand their operations. With increased workloads, geographically dispersed image acquisition and reading sites, and rapidly changing technologies, this approach is increasingly untenable. A Web-based dashboard was constructed to automate the extraction, processing, and display of indicators and thereby provide useful and current data for twice-monthly departmental operational meetings. The feasibility of extracting specific metrics from clinical information systems was evaluated as part of a longer-term effort to build a radiology business intelligence architecture. Operational data were extracted from clinical information systems and stored in a centralized data warehouse. Higher-level analytics were performed on the centralized data, a process that generated indicators in a dynamic Web-based graphical environment that proved valuable in discussion and root cause analysis. Results aggregated over a 24-month period since implementation suggest that this operational business intelligence reporting system has provided significant data for driving more effective management decisions to improve productivity, performance, and quality of service in the department.
Sorge, John P; Harmon, C Reid; Sherman, Susan M; Baillie, E Eugene
2005-07-01
We used data management software to compare pathology report data concerning regional lymph node sampling for colorectal carcinoma from 2 institutions using different dissection methods. Data were retrieved from 2 disparate anatomic pathology information systems for all cases of colorectal carcinoma in 2003 involving the ascending and descending colon. Initial sorting of the data included overall lymph node recovery to assess differences between the dissection methods at the 2 institutions. Additional segregation of the data was used to challenge the application's capability of accurately addressing the complexity of the process. This software approach can be used to evaluate data from disparate computer systems, and we demonstrate how an automated function can enable institutions to compare internal pathologic assessment processes and the results of those comparisons. The use of this process has future implications for pathology quality assurance in other areas.
The Architecture Design of Detection and Calibration System for High-voltage Electrical Equipment
NASA Astrophysics Data System (ADS)
Ma, Y.; Lin, Y.; Yang, Y.; Gu, Ch; Yang, F.; Zou, L. D.
2018-01-01
With the construction of the Material Quality Inspection Center of the Shandong electric power company, the Electric Power Research Institute has taken on more quality analysis and laboratory calibration work for high-voltage electrical equipment, making information system construction urgent. In this paper we design a consolidated system that implements electronic management and online process automation for material sampling, test apparatus detection, and field testing. For these three jobs we use QR code scanning, online Word editing, and electronic signatures. These techniques simplify the complex processes of warehouse management and test report transfer, and largely reduce manual procedures. The construction of the standardized detection information platform realizes integrated management of high-voltage electrical equipment from networking and operation through periodic detection. According to an evaluation of system operation, report transfer is twice as fast, and data queries are easier and faster.
Lawrence, Justin; Delaney, Conor P.
2013-01-01
Evaluation of health care outcomes has become increasingly important as we strive to improve quality and efficiency while controlling cost. Many groups feel that analysis of large datasets will be useful in optimizing resource utilization; however, the ideal blend of clinical and administrative data points has not been developed. Hospitals and health care systems have several tools to measure cost and resource utilization, but the data are often housed in disparate systems that are not integrated and do not permit multisystem analysis. Systems Outcomes and Clinical Resources AdministraTive Efficiency Software (SOCRATES) is a novel data merging, warehousing, analysis, and reporting technology, which brings together disparate hospital administrative systems generating automated or customizable risk-adjusted reports. Used in combination with standardized enhanced care pathways, SOCRATES offers a mechanism to improve the quality and efficiency of care, with the ability to measure real-time changes in outcomes. PMID:24436649
Ethics, finance, and automation: a preliminary survey of problems in high frequency trading.
Davis, Michael; Kumiega, Andrew; Van Vliet, Ben
2013-09-01
All of finance is now automated, most notably high frequency trading. This paper examines the ethical implications of this fact. As automation is an interdisciplinary endeavor, we argue that the interfaces between the respective disciplines can lead to conflicting ethical perspectives; we also argue that existing disciplinary standards do not pay enough attention to the ethical problems automation generates. Conflicting perspectives undermine the protection those who rely on trading should have. Ethics in finance can be expanded to include organizational and industry-wide responsibilities to external market participants and society. As a starting point, quality management techniques can provide a foundation for a new cross-disciplinary ethical standard in the age of automation.
A system-level approach to automation research
NASA Technical Reports Server (NTRS)
Harrison, F. W.; Orlando, N. E.
1984-01-01
Automation is the application of self-regulating mechanical and electronic devices to processes that can be accomplished with the human organs of perception, decision, and actuation. The successful application of automation to a system process should reduce man/system interaction and the perceived complexity of the system, or should increase affordability, productivity, quality control, and safety. The expense, time constraints, and risk factors associated with extravehicular activities have led the Automation Technology Branch (ATB), as part of the NASA Automation Research and Technology Program, to investigate the use of robots and teleoperators as automation aids in the context of space operations. The ATB program addresses three major areas: (1) basic research in autonomous operations, (2) human factors research on man-machine interfaces with remote systems, and (3) the integration and analysis of automated systems. This paper reviews the current ATB research in the area of robotics and teleoperators.
Machine vision system: a tool for quality inspection of food and agricultural products.
Patel, Krishna Kumar; Kar, A; Jha, S N; Khan, M A
2012-04-01
Quality inspection of food and agricultural produce is difficult and labor intensive. At the same time, with increased expectations for food products of high quality and safety standards, the need for accurate, fast, and objective determination of these characteristics in food products continues to grow. In India, however, these operations are generally manual, which is costly as well as unreliable, because human judgment of quality factors such as appearance, flavor, nutrient content, and texture is inconsistent, subjective, and slow. Machine vision provides one alternative: an automated, non-destructive, and cost-effective technique to meet these requirements. This inspection approach, based on image analysis and processing, has found a variety of applications in the food industry. Considerable research has highlighted its potential for the inspection and grading of fruits and vegetables, grain quality and characteristic examination, and quality evaluation of other food products such as bakery products, pizza, cheese, and noodles. The objective of this paper is to provide an in-depth introduction to machine vision systems, their components, and recent work reported on food and agricultural produce.
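To make the appearance-grading idea concrete, the toy sketch below thresholds a synthetic grayscale image and grades by defect area. The thresholds, grades, and image itself are invented; production systems use far richer features and real camera input.

```python
# Toy sketch of an appearance-based grading step of the kind surveyed here:
# threshold a synthetic grayscale "fruit" image and grade by defect area.
import numpy as np

rng = np.random.default_rng(2)
image = rng.integers(180, 255, size=(64, 64))  # mostly bright, healthy surface
image[10:18, 20:30] = 40                       # a dark, bruise-like region

defect_fraction = np.mean(image < 100)         # fraction of dark pixels
grade = "A" if defect_fraction < 0.01 else "B" if defect_fraction < 0.05 else "reject"
print(f"defect fraction = {defect_fraction:.3f}, grade = {grade}")
```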
NASA Astrophysics Data System (ADS)
Keene, Samuel T.; Cerussi, Albert E.; Warren, Robert V.; Hill, Brian; Roblyer, Darren; Leproux, Anaïs; Durkin, Amanda F.; O'Sullivan, Thomas D.; Haghany, Hosain; Mantulin, William W.; Tromberg, Bruce J.
2013-03-01
Instrument equivalence and quality control are critical elements of multi-center clinical trials. We currently have five identical Diffuse Optical Spectroscopic Imaging (DOSI) instruments enrolled in the American College of Radiology Imaging Network (ACRIN, #6691) trial located at five academic clinical research sites in the US. The goal of the study is to predict the response of breast tumors to neoadjuvant chemotherapy in 60 patients. In order to reliably compare DOSI measurements across different instruments, operators and sites, we must be confident that the data quality is comparable. We require objective and reliable methods for identifying, correcting, and rejecting low quality data. To achieve this goal, we developed and tested an automated quality control algorithm that rejects data points below the instrument noise floor, improves tissue optical property recovery, and outputs a detailed data quality report. Using a new protocol for obtaining dark-noise data, we applied the algorithm to ACRIN patient data and successfully improved the quality of recovered physiological data in some cases.
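The noise-floor rejection step can be sketched as a simple amplitude test against measured dark noise. The arrays and the 3x criterion below are assumptions for illustration, not the published algorithm's parameters.

```python
# Minimal sketch of noise-floor rejection: discard spectral points whose
# amplitude falls below a multiple of the measured dark noise. Arrays are
# illustrative, not ACRIN trial data, and the 3x factor is an assumption.
import numpy as np

amplitude = np.array([0.90, 0.40, 0.05, 0.30, 0.02, 0.60])   # measured signal
dark_noise = np.array([0.04, 0.04, 0.06, 0.05, 0.05, 0.04])  # per-point floor

keep = amplitude > 3 * dark_noise   # simple 3x-noise-floor criterion
print(f"kept {keep.sum()} of {keep.size} points:", amplitude[keep])
```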
Automation of large scale transient protein expression in mammalian cells
Zhao, Yuguang; Bishop, Benjamin; Clay, Jordan E.; Lu, Weixian; Jones, Margaret; Daenke, Susan; Siebold, Christian; Stuart, David I.; Yvonne Jones, E.; Radu Aricescu, A.
2011-01-01
Traditional mammalian expression systems rely on the time-consuming generation of stable cell lines; this is difficult to accommodate within a modern structural biology pipeline. Transient transfections are a fast, cost-effective solution, but require skilled cell culture scientists, making man-power a limiting factor in a setting where numerous samples are processed in parallel. Here we report a strategy employing a customised CompacT SelecT cell culture robot allowing the large-scale expression of multiple protein constructs in a transient format. Successful protocols have been designed for automated transient transfection of human embryonic kidney (HEK) 293T and 293S GnTI− cells in various flask formats. Protein yields obtained by this method were similar to those produced manually, with the added benefit of reproducibility, regardless of user. Automation of cell maintenance and transient transfection allows the expression of high quality recombinant protein in a completely sterile environment with limited support from a cell culture scientist. The reduction in human input has the added benefit of enabling continuous cell maintenance and protein production, features of particular importance to structural biology laboratories, which typically use large quantities of pure recombinant proteins, and often require rapid characterisation of a series of modified constructs. This automated method for large scale transient transfection is now offered as a Europe-wide service via the P-cube initiative. PMID:21571074
Leb, Victoria; Stöcher, Markus; Valentine-Thon, Elizabeth; Hölzl, Gabriele; Kessler, Harald; Stekel, Herbert; Berg, Jörg
2004-02-01
We report on the development of a fully automated real-time PCR assay for the quantitative detection of hepatitis B virus (HBV) DNA in plasma with EDTA (EDTA plasma). The MagNA Pure LC instrument was used for automated DNA purification and automated preparation of PCR mixtures. Real-time PCR was performed on the LightCycler instrument. An internal amplification control was devised as a PCR competitor and was introduced into the assay at the stage of DNA purification to permit monitoring for sample adequacy. The detection limit of the assay was found to be 200 HBV DNA copies/ml, with a linear dynamic range of 8 orders of magnitude. When samples from the European Union Quality Control Concerted Action HBV Proficiency Panel 1999 were examined, the results were found to be in acceptable agreement with the HBV DNA concentrations of the panel members. In a clinical laboratory evaluation of 123 EDTA plasma samples, a significant correlation was found with the results obtained by the Roche HBV Monitor test on the Cobas Amplicor analyzer within the dynamic range of that system. In conclusion, the newly developed assay has a markedly reduced hands-on time, permits monitoring for sample adequacy, and is suitable for the quantitative detection of HBV DNA in plasma in a routine clinical laboratory.
Control and automation of the Pegasus multi-point Thomson scattering system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bodner, G. M., E-mail: gbodner@wisc.edu; Bongard, M. W.; Fonck, R. J.
A new control system for the Pegasus Thomson scattering diagnostic has recently been deployed to automate the laser operation, data collection process, and interface with the system-wide Pegasus control code. Automation has been extended to areas outside of data collection, such as manipulation of beamline cameras and remotely controlled turning mirror actuators to enable intra-shot beam alignment. Additionally, the system has been upgraded with a set of fast (∼1 ms) mechanical shutters to mitigate contamination from background light. Modification and automation of the Thomson system have improved both data quality and diagnostic reliability.
Spaceport Command and Control System Automated Testing
NASA Technical Reports Server (NTRS)
Stein, Meriel
2017-01-01
The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next-generation manned rocket currently in development. This large system requires high quality testing that will properly measure the capabilities of the system. Automating the test procedures would save the project time and money. Therefore, the Electrical Engineering Division at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as innovate upon the current automation process.
Spaceport Command and Control System Automation Testing
NASA Technical Reports Server (NTRS)
Hwang, Andrew
2017-01-01
The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next-generation manned rocket currently in development. This large system requires high quality testing that will properly measure the capabilities of the system. Automating the test procedures would save the project time and money. Therefore, the Electrical Engineering Division at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as innovate upon the current automation process.
Biomek 3000: the workhorse in an automated accredited forensic genetic laboratory.
Stangegaard, Michael; Meijer, Per-Johan; Børsting, Claus; Hansen, Anders J; Morling, Niels
2012-10-01
We have implemented and validated automated protocols for a wide range of processes such as sample preparation, PCR setup, and capillary electrophoresis setup using small, simple, and inexpensive automated liquid handlers. The flexibility and ease of programming enable the Biomek 3000 to be used in many parts of the laboratory process in a modern forensic genetics laboratory with low to medium sample throughput. In conclusion, we demonstrated that sample processing for accredited forensic genetic DNA typing can be implemented on small automated liquid handlers, leading to the reduction of manual work as well as increased quality and throughput.
30 CFR 227.600 - What automated verification functions may a State perform?
Code of Federal Regulations, 2010 CFR
2010-07-01
... involves systematic monitoring of production and royalty reports to identify and resolve reporting or... reported by royalty reporters to sales and transfer volumes reported by production reporters. If you request delegation of automated comparison of sales and production volumes, you must perform at least the...
Hussain, Waqar; Moens, Nathalie; Veraitch, Farlan S.; Hernandez, Diana; Mason, Chris; Lye, Gary J.
2013-01-01
The use of embryonic stem cells (ESCs) and their progeny in high throughput drug discovery and regenerative medicine will require production at scale of well characterized cells at an appropriate level of purity. The adoption of automated bioprocessing techniques offers the possibility to overcome the lack of consistency and high failure rates seen with current manual protocols. To build the case for increased use of automation this work addresses the key question: “can an automated system match the quality of a highly skilled and experienced person working manually?” To answer this we first describe an integrated automation platform designed for the ‘hands-free’ culture and differentiation of ESCs in microwell formats. Next we outline a framework for the systematic investigation and optimization of key bioprocess variables for the rapid establishment of validatable Standard Operating Procedures (SOPs). Finally the experimental comparison between manual and automated bioprocessing is exemplified by expansion of the murine Oct-4-GiP ESC line over eight sequential passages with their subsequent directed differentiation into neural precursors. Our results show that ESCs can be effectively maintained and differentiated in a highly reproducible manner by the automated system described. Statistical analysis of the results for cell growth over single and multiple passages shows up to a 3-fold improvement in the consistency of cell growth kinetics with automated passaging. The quality of the cells produced was evaluated using a panel of biological markers including cell growth rate and viability, nutrient and metabolite profiles, changes in gene expression and immunocytochemistry. Automated processing of the ESCs had no measurable negative effect on either their pluripotency or their ability to differentiate into the three embryonic germ layers. Equally important is that over a 6-month period of culture without antibiotics in the medium, we have not had any cases of culture contamination. This study thus confirms the benefits of adopting automated bioprocess routes to produce cells for therapy and for use in basic discovery research. PMID:23956681
The Benefits of Office Automation: A Casebook
1986-04-01
Warren, Hoyt M., Jr., Major, USAF
[Garbled report documentation page and table-of-contents residue; recoverable headings: Selected Office Automation Experiences; Laboratory Office Network Experiment (Background and Scope; Experiment Objectives)]
A Multifaceted Approach to Improving Outcomes in the NICU: The Pediatrix 100 000 Babies Campaign.
Ellsbury, Dan L; Clark, Reese H; Ursprung, Robert; Handler, Darren L; Dodd, Elizabeth D; Spitzer, Alan R
2016-04-01
Despite advances in neonatal medicine, infants requiring neonatal intensive care continue to experience substantial morbidity and mortality. The purpose of this initiative was to generate large-scale simultaneous improvements in multiple domains of care in a large neonatal network through a program called the "100,000 Babies Campaign." Key drivers of neonatal morbidity and mortality were identified. A system for retrospective morbidity and mortality review was used to identify problem areas for project prioritization. NICU system analysis and staff surveys were used to facilitate reengineering of NICU systems in 5 key driver areas. Electronic health record-based automated data collection and reporting were used. A quality improvement infrastructure using the Kotter organizational change model was developed to support the program. From 2007 to 2013, data on 422 877 infants, including a subset with birth weight of 501 to 1500 g (n = 58 555) were analyzed. Key driver processes (human milk feeding, medication use, ventilator days, admission temperature) all improved (P < .0001). Mortality, necrotizing enterocolitis, retinopathy of prematurity, bacteremia after 3 days of life, and catheter-associated infection decreased. Survival without significant morbidity (necrotizing enterocolitis, severe intraventricular hemorrhage, severe retinopathy of prematurity, oxygen use at 36 weeks' gestation) improved. Implementation of a multifaceted quality improvement program that incorporated organizational change theory and automated electronic health record-based data collection and reporting program resulted in major simultaneous improvements in key neonatal processes and outcomes. Copyright © 2016 by the American Academy of Pediatrics.
Should we use automated external defibrillators in hospital wards?
De Regge, M; Monsieurs, K G; Vandewoude, K; Calle, P A
2012-01-01
Automated external defibrillators (AEDs) have been shown to improve survival after cardiopulmonary arrest (CPA) in many, but not all, clinical settings. A recent study reported that the use of AEDs in-hospital did not improve survival. The current retrospective study reports the results of an in-hospital AED programme in a university hospital, and focuses on the quality of AED use. At Ghent University Hospital 30 AEDs were placed in non-monitored hospital wards and outpatient clinics treating patients with non-cardiac problems. Nurses were trained to use these devices. From November 2006 until March 2011, the AEDs were used in 23 of 39 CPA cases; in only one patient was the presenting heart rhythm ventricular fibrillation, and this patient survived. Pulseless electrical activity was present in 14 patients (four survived) and asystole in eight patients (one survived). AEDs were attached to eight patients without CPA, and in 16 patients with CPA the AED was not used. The quality of AED use was often suboptimal, as illustrated by external artifacts during the first rhythm analysis by the AED in 30% (7/23) and more than 20 seconds' delay before restart of chest compressions after the AED rhythm analysis in 50% (9/18). The literature data, supported by our results, indicate that in-hospital AED programmes are unlikely to improve survival after CPA. Moreover, their use is often suboptimal. Therefore, if AEDs are introduced in a hospital, initial training, frequent retraining and close follow-up are essential.
The impact of automation on pharmacy staff experience of workplace stressors.
James, K Lynette; Barlow, Dave; Bithell, Anne; Hiom, Sarah; Lord, Sue; Oakley, Pat; Pollard, Mike; Roberts, Dave; Way, Cheryl; Whittlesea, Cate
2013-04-01
Determine the effect of installing an original pack automated dispensing system (ADS) on staff experience of occupational stressors. Pharmacy staff in a National Health Service hospital in Wales, UK, were administered an anonymous occupational stressor questionnaire pre- (n = 45) and post-automation (n = 32). Survey responses pre- and post-automation were compared using the Mann-Whitney U test. Statistical significance was P ≤ 0.05. Four focus groups were conducted post-automation to explore staff experiences of occupational stressors: two groups of accredited checking technicians (ACTs) (group 1: n = 4; group 2: n = 6), one group of pharmacists (n = 17), and one group of technicians (n = 4). Focus group transcripts were analysed according to framework analysis. Survey response rate pre-automation was 78% (n = 35) and 49% (n = 16) post-automation. Automation had a positive impact on staff experience of stress (P = 0.023), illogical workload allocation (P = 0.004) and work-life balance (P = 0.05). All focus-group participants reported that automation had created a spacious working environment. Pharmacists and ACTs reported that automation had enabled the expansion of their roles. Technicians felt like 'production-line workers.' Robot malfunction was a source of stress. The findings suggest that automation had a positive impact on staff experience of stressors, improving working conditions and workload. Technicians reported that ADS devalued their skills. When installing ADS, pharmacy managers must consider the impact of automation on staff. Strategies to reduce stressors associated with automation include rotating staff activities and role expansions. © 2012 The Authors. IJPP © 2012 Royal Pharmaceutical Society.
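The pre/post comparison described uses the Mann-Whitney U test; a minimal sketch on invented Likert-type stressor scores is shown below.

```python
# Sketch of the pre/post comparison described (Mann-Whitney U test on
# Likert-type stressor scores); the responses are invented illustrations.
from scipy.stats import mannwhitneyu

pre_automation = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]    # higher = more stress
post_automation = [3, 2, 3, 4, 2, 3, 2, 3, 3, 2]

u_stat, p = mannwhitneyu(pre_automation, post_automation, alternative="two-sided")
print(f"U = {u_stat}, p = {p:.4f}")
```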
Experience with Quality Assurance in Two Store-and-Forward Telemedicine Networks.
Wootton, Richard; Liu, Joanne; Bonnardot, Laurent; Venugopal, Raghu; Oakley, Amanda
2015-01-01
Despite the increasing use of telemedicine around the world, little has been done to incorporate quality assurance (QA) into these operations. The purpose of the present study was to examine the feasibility of QA in store-and-forward teleconsulting using a previously published framework. During a 2-year study period, we examined the feasibility of using QA tools in two mature telemedicine networks [Médecins Sans Frontières (MSF) and New Zealand Teledermatology (NZT)]. The tools included performance reporting to assess trends, automated follow-up of patients to obtain outcomes data, automated surveying of referrers to obtain user feedback, and retrospective assessment of randomly selected cases to assess quality. In addition, the senior case coordinators in each network were responsible for identifying potential adverse events from email reports received from users. During the study period, there were 149 responses to the patient follow-up questions relating to the 1241 MSF cases (i.e., 12% of cases), and there were 271 responses to the follow-up questions relating to the 639 NZT cases (i.e., 42% of cases). The collection of user feedback reports was combined with the collection of patient follow-up data, thus producing the same response rates. The outcomes data suggested that the telemedicine advice proved useful for the referring doctor in the majority of cases and was likely to benefit the patient. The user feedback was overwhelmingly positive, over 90% of referrers in the two networks finding the advice received to be of educational benefit. The feedback also suggested that the teleconsultation had provided cost savings in about 20% of cases, either to the patient/family, or to the hospital/clinic treating the patient. Various problems were detected by regular monitoring, and certain adverse events were identified from email reports by the users. A single aberrant quality reading was detected by using a process control chart. The present study demonstrates that a QA program is feasible in store-and-forward telemedicine, and shows that it was useful in two different networks, because certain problems were detected (and then solved) that would not have been identified until much later. It seems likely that QA could be used much more widely in telemedicine generally to benefit patient care.
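The aberrant quality reading mentioned above was detected with a process control chart; the sketch below shows a basic Shewhart-style check, with baseline scores and limits that are hypothetical. Limits are computed from an in-control baseline period so a later outlier does not inflate them.

```python
# Illustrative Shewhart-style control chart check of the kind used above to
# spot a single aberrant quality reading; all scores are hypothetical.
import numpy as np

baseline = np.array([8.1, 7.9, 8.3, 8.0, 8.2, 8.1, 7.8, 8.0])  # in-control period
center, sigma = baseline.mean(), baseline.std(ddof=1)
lcl, ucl = center - 3 * sigma, center + 3 * sigma               # 3-sigma limits

new_scores = np.array([8.2, 7.9, 5.9, 8.1])
flags = (new_scores < lcl) | (new_scores > ucl)
print(f"limits [{lcl:.2f}, {ucl:.2f}]; aberrant readings: {new_scores[flags]}")
```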
SU-C-BRB-01: Automated Dose Deformation for Re-Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, S; Kainz, K; Li, X
Purpose: An objective of retreatment planning is to minimize dose to previously irradiated tissues. Conventional retreatment planning is based largely on best-guess superposition of the previous treatment’s isodose lines. In this study, we report a rigorous, automated retreatment planning process to minimize dose to previously irradiated organs at risk (OAR). Methods: Data for representative patients previously treated using helical tomotherapy and later retreated in the vicinity of the original disease site were retrospectively analyzed in an automated fashion using a prototype treatment planning system equipped with a retreatment planning module (Accuray, Inc.). The initial plan’s CT, structures, and planned dose were input along with the retreatment CT and structure set. Using a deformable registration algorithm implemented in the module, the initially planned dose and structures were warped onto the retreatment CT. An integrated third-party sourced software (MIM, Inc.) was used to evaluate registration quality and to contour overlapping regions between isodose lines and OARs, providing additional constraints during retreatment planning. The resulting plan and the conventionally generated retreatment plan were compared. Results: Jacobian maps showed good quality registration between the initial plan and retreatment CTs. For a right orbit case, the dose deformation facilitated delineating the regions of the eyes and optic chiasm originally receiving 13 to 42 Gy. Using these regions as dose constraints, the new retreatment plan resulted in V50 reduction of 28% for the right eye and 8% for the optic chiasm, relative to the conventional plan. Meanwhile, differences in the PTV dose coverage were clinically insignificant. Conclusion: Automated retreatment planning with dose deformation and definition of previously-irradiated regions allowed for additional planning constraints to be defined to minimize re-irradiation of OARs. For serial organs that do not recover from radiation damage, this method provides a more precise and quantitative means to limit cumulative dose. This research is partially supported by Accuray, Inc.
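As background to the dose-warping step, the following Python sketch shows generic displacement-field resampling of a prior dose grid onto a new frame. It is not the Accuray module's algorithm; the array shapes, the zero deformation field, and the 13 Gy threshold are illustrative assumptions.

import numpy as np
from scipy.ndimage import map_coordinates

def warp_dose(dose, dvf):
    """Resample a prior 3-D dose grid onto a new CT frame given a deformation
    vector field dvf, where dvf[d] holds per-voxel displacements along axis d."""
    grids = np.meshgrid(*[np.arange(n) for n in dose.shape], indexing="ij")
    coords = [g + d for g, d in zip(grids, dvf)]
    return map_coordinates(dose, coords, order=1, mode="nearest")

prior_dose = np.random.rand(8, 8, 8) * 42.0   # Gy, illustrative values
dvf = np.zeros((3, 8, 8, 8))                  # identity (no deformation)
warped = warp_dose(prior_dose, dvf)
previously_hot = warped > 13.0                # voxels formerly above 13 Gy,
                                              # usable as retreatment constraints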
Cao, Weidong; Bean, Brian; Corey, Scott; Coursey, Johnathan S; Hasson, Kenton C; Inoue, Hiroshi; Isano, Taisuke; Kanderian, Sami; Lane, Ben; Liang, Hongye; Murphy, Brian; Owen, Greg; Shinoda, Nobuhiko; Zeng, Shulin; Knight, Ivor T
2016-06-01
We report the development of an automated genetic analyzer for human sample testing based on microfluidic rapid polymerase chain reaction (PCR) with high-resolution melting analysis (HRMA). The integrated DNA microfluidic cartridge was used on a platform designed with a robotic pipettor system that works by sequentially picking up different test solutions from a 384-well plate, mixing them in the tips, and delivering mixed fluids to the DNA cartridge. A novel image feedback flow control system based on a Canon 5D Mark II digital camera was developed for controlling fluid movement through a complex microfluidic branching network without the use of valves. The same camera was used for measuring the high-resolution melt curve of DNA amplicons that were generated in the microfluidic chip. Owing to fast heating and cooling as well as sensitive temperature measurement in the microfluidic channels, the time frame for PCR and HRMA was dramatically reduced from hours to minutes. Preliminary testing results demonstrated that rapid serial PCR and HRMA are possible while still achieving high data quality that is suitable for human sample testing. © 2015 Society for Laboratory Automation and Screening.
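At its core, HRMA post-processing locates peaks in the negative derivative -dF/dT of the fluorescence-versus-temperature melt curve. A minimal sketch of that standard step (not the authors' camera-based pipeline); the smoothing window, peak threshold, and synthetic data are assumptions.

import numpy as np

def melt_peaks(temperature, fluorescence, rel_height=0.1):
    """Locate melting temperatures as peaks of the negative derivative -dF/dT
    of a lightly smoothed fluorescence-versus-temperature melt curve."""
    t = np.asarray(temperature, dtype=float)
    f = np.convolve(np.asarray(fluorescence, dtype=float),
                    np.ones(5) / 5.0, mode="same")      # moving-average smoothing
    neg_dfdt = -np.gradient(f, t)
    floor = rel_height * neg_dfdt.max()                  # ignore edge/noise wiggles
    idx = [i for i in range(1, len(t) - 1)
           if neg_dfdt[i] > neg_dfdt[i - 1]
           and neg_dfdt[i] > neg_dfdt[i + 1]
           and neg_dfdt[i] > floor]
    return t[idx], neg_dfdt[idx]

t = np.linspace(70.0, 95.0, 251)
f = 1.0 / (1.0 + np.exp((t - 84.0) / 0.4))   # synthetic melt transition near 84 C
tm, heights = melt_peaks(t, f)
print(tm)   # ~[84.0]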
Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique
2016-01-01
High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems depends on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution split the process into three steps: (1) screening of predefined liquid classes, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The entire process, from running the appropriate pipetting scripts through data acquisition and reporting to the creation of a new liquid class in EVOware, was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
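Numerically, the accuracy adjustment in step (2) amounts to fitting a line of gravimetrically measured volume against commanded volume and inverting it to correct future commands. A sketch of just that arithmetic; the example numbers and the unit-density assumption are illustrative, and the actual liquid-class parameters live in EVOware.

import numpy as np

def fit_accuracy_correction(target_ul, dispensed_mg, density_mg_per_ul=1.0):
    """Fit a linear calibration curve mapping commanded volume to the
    gravimetrically measured volume (mass converted via liquid density)."""
    measured_ul = np.asarray(dispensed_mg, dtype=float) / density_mg_per_ul
    slope, offset = np.polyfit(np.asarray(target_ul, dtype=float), measured_ul, 1)
    return slope, offset

def corrected_command(requested_ul, slope, offset):
    """Volume to command so the delivered volume matches the request."""
    return (requested_ul - offset) / slope

slope, offset = fit_accuracy_correction([10, 100, 500, 900],
                                        [9.6, 97.0, 487.5, 879.3])
print(corrected_command(250.0, slope, offset))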
Automated Assessment of Visual Quality of Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)
1997-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images [1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Prediction of global and local model quality in CASP8 using the ModFOLD server.
McGuffin, Liam J
2009-01-01
The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) to predict the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model machine learning based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at: http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from: http://www.reading.ac.uk/bioinf/downloads/. Copyright 2009 Wiley-Liss, Inc.
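Clustering-based quality assessment of the ModFOLDclust kind rests on a simple idea: a model that structurally resembles many of the other independently generated models is predicted to be good. A toy sketch of such a consensus score, assuming a precomputed pairwise similarity matrix; the real method uses specific structural-alignment scores and per-residue variants.

import numpy as np

def consensus_quality(similarity):
    """Score each model as its mean structural similarity (e.g., a TM-score-like
    measure) to every other model in the ensemble; models resembling many
    others are predicted to be of higher quality."""
    s = np.asarray(similarity, dtype=float)
    n = s.shape[0]
    return (s.sum(axis=1) - s.diagonal()) / (n - 1)

# similarity[i, j] = assumed precomputed pairwise score between models i and j
sim = np.array([[1.0, 0.8, 0.7],
                [0.8, 1.0, 0.6],
                [0.7, 0.6, 1.0]])
print(consensus_quality(sim))  # [0.75, 0.70, 0.65]; model 0 scores highest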
Automatic Analysis of Critical Incident Reports: Requirements and Use Cases.
Denecke, Kerstin
2016-01-01
Increasingly, critical incident reports are used as a means to increase patient safety and quality of care. The full potential of these sources of experiential knowledge often remains untapped, since retrieval and analysis are difficult and time-consuming and the reporting systems often do not provide support for these tasks. The objective of this paper is to identify potential use cases for automatic methods that analyse critical incident reports. In more detail, we describe how faceted search could offer an intuitive retrieval of critical incident reports and how text mining could support the analysis of relations among events. To realise an automated analysis, natural language processing needs to be applied. Therefore, we analyse the language of critical incident reports and derive requirements for automatic processing methods. We learned that there is huge potential for the automatic analysis of incident reports, but there are still challenges to be solved.
Abstracts of AF Materials Laboratory Reports
1975-09-01
[OCR-garbled index of AF Materials Laboratory report titles. Recoverable entries include AFML-TR-73-307, "Improved Automated Tape Laying Machine" (M. Poullos, W. J. Murray, D. L. ...); "Automation of Coating Processes for Gas Turbine Blades and Vanes"; and "A Study of the Stress-Strain Behavior of Graphite..."]
Automating Nuclear-Safety-Related SQA Procedures with Custom Applications
Freels, James D.
2016-01-01
Nuclear safety-related procedures are rigorous for good reason. Small design mistakes can quickly turn into unwanted failures. Researchers at Oak Ridge National Laboratory worked with COMSOL to define a simulation app that automates the software quality assurance (SQA) verification process and provides results in less than 24 hours.
Improving Learning Object Quality: Moodle HEODAR Implementation
ERIC Educational Resources Information Center
Munoz, Carlos; Garcia-Penalvo, Francisco J.; Morales, Erla Mariela; Conde, Miguel Angel; Seoane, Antonio M.
2012-01-01
Automation for the sake of efficiency is the aim of most intelligent systems in an educational context, where automating the calculation of results allows experts to spend most of their time on important tasks rather than on retrieving, ordering, and interpreting information. In this paper, the authors provide a tool that easily evaluates Learning Objects quality…
Tools for automating spacecraft ground systems: The Intelligent Command and Control (ICC) approach
NASA Technical Reports Server (NTRS)
Stoffel, A. William; Mclean, David
1996-01-01
The practical application of scripting languages and World Wide Web tools to the support of spacecraft ground system automation is reported on. The mission activities and the automation tools used at the Goddard Space Flight Center (MD) are reviewed. The use of the Tool Command Language (TCL) and the Practical Extraction and Report Language (PERL) scripting tools for automating mission operations is discussed, together with the application of different tools for the Compton Gamma Ray Observatory ground system.
Prototype automated post-MECO ascent I-load Verification Data Table
NASA Technical Reports Server (NTRS)
Lardas, George D.
1990-01-01
A prototype automated processor for quality assurance of Space Shuttle post-Main Engine Cut Off (MECO) ascent initialization parameters (I-loads) is described. The processor incorporates CLIPS rules adapted from the quality assurance criteria for the post-MECO ascent I-loads. Specifically, the criteria are implemented for nominal and abort targets, as given in the 'I-load Verification Data Table, Part 3, Post-MECO Ascent, Version 2.1, December 1989.' This processor, ivdt, compares a given I-load set with the stated mission design and quality assurance criteria. It determines which I-loads violate the stated criteria, and presents a summary of I-loads that pass or fail the tests.
Development and implementation of an automated quantitative film digitizer quality control program
NASA Astrophysics Data System (ADS)
Fetterly, Kenneth A.; Avula, Ramesh T. V.; Hangiandreou, Nicholas J.
1999-05-01
A semi-automated, quantitative film digitizer quality control program that is based on the computer analysis of the image data from a single digitized test film was developed. This program includes measurements of the geometric accuracy, optical density performance, signal to noise ratio, and presampled modulation transfer function. The variability of the measurements was less than plus or minus 5%. Measurements were made on a group of two clinical and two laboratory laser film digitizers during a trial period of approximately four months. Quality control limits were established based on clinical necessity, vendor specifications and digitizer performance. During the trial period, one of the digitizers failed the performance requirements and was corrected by calibration.
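Parts of such a QC program reduce to simple statistics over regions of the digitized test film. A minimal sketch of an SNR measurement checked against a baseline with the roughly ±5% repeatability reported above; the patch, baseline value, and tolerance are assumptions.

import numpy as np

def region_snr(pixels):
    """Signal-to-noise ratio of a nominally uniform patch of the test film."""
    p = np.asarray(pixels, dtype=float)
    return p.mean() / p.std(ddof=1)

def within_limits(value, baseline, tolerance=0.05):
    """Pass/fail against an established baseline with a relative tolerance."""
    return abs(value - baseline) <= tolerance * baseline

patch = np.random.normal(loc=2000.0, scale=20.0, size=(64, 64))  # synthetic patch
print(region_snr(patch), within_limits(region_snr(patch), baseline=100.0))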
Technology demonstration of space intravehicular automation and robotics
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Barker, L. Keith
1994-01-01
Automation and robotic technologies are being developed and capabilities demonstrated which would increase the productivity of microgravity science and materials processing in the space station laboratory module, especially when the crew is not present. The Automation Technology Branch at NASA Langley has been working in the area of intravehicular automation and robotics (IVAR) to provide a user-friendly development facility, to determine customer requirements for automated laboratory systems, and to improve the quality and efficiency of commercial production and scientific experimentation in space. This paper will describe the IVAR facility and present the results of a demonstration using a simulated protein crystal growth experiment inside a full-scale mockup of the space station laboratory module using a unique seven-degree-of-freedom robot.
Aozan: an automated post-sequencing data-processing pipeline.
Perrin, Sandrine; Firmo, Cyril; Lemoine, Sophie; Le Crom, Stéphane; Jourdren, Laurent
2017-07-15
Data management and quality control of output from Illumina sequencers is a disk space- and time-consuming task. Thus, we developed Aozan to automatically handle data transfer, demultiplexing, conversion and quality control once a run has finished. This software greatly improves run data management and the monitoring of run statistics via automatic emails and HTML web reports. Aozan is implemented in Java and Python, supported on Linux systems, and distributed under the GPLv3 License at: http://www.outils.genomique.biologie.ens.fr/aozan/ . Aozan source code is available on GitHub: https://github.com/GenomicParisCentre/aozan . aozan@biologie.ens.fr. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Investigation of cloud/water vapor motion winds from geostationary satellite
NASA Technical Reports Server (NTRS)
1993-01-01
This report summarizes the research work accomplished on the NASA grant contract NAG8-892 during 1992. Research goals of this contract are the following: to complete upgrades to the Cooperative Institute for Meteorological Satellite Studies (CIMSS) wind system procedures for assigning heights and incorporating first guess information; to evaluate these modifications using simulated tracer fields; to add an automated quality control system to minimize the need for manual editing, while maintaining product quality; and to benchmark the upgraded algorithm in tests with NMC and/or MSFC. Work progressed on all these tasks and is detailed. This work was done in collaboration with CIMSS NOAA/NESDIS scientists working on the operational winds software, so that NASA funded research can benefit NESDIS operational algorithms.
Data Quality Screening Service
NASA Technical Reports Server (NTRS)
Strub, Richard; Lynnes, Christopher; Hearty, Thomas; Won, Young-In; Fox, Peter; Zednik, Stephan
2013-01-01
A report describes the Data Quality Screening Service (DQSS), which is designed to help automate the filtering of remote sensing data on behalf of science users. Whereas this process often involves much research through quality documents followed by laborious coding, the DQSS is a Web Service that provides data users with data pre-filtered to their particular criteria, while at the same time guiding the user with filtering recommendations of the cognizant data experts. The DQSS design is based on a formal semantic Web ontology that describes data fields and the quality fields for applying quality control within a data product. The accompanying code base handles several remote sensing datasets and quality control schemes for data products stored in Hierarchical Data Format (HDF), a common format for NASA remote sensing data. Together, the ontology and code support a variety of quality control schemes through the implementation of the Boolean expression with simple, reusable conditional expressions as operands. Additional datasets are added to the DQSS simply by registering instances in the ontology if they follow a quality scheme that is already modeled in the ontology. New quality schemes are added by extending the ontology and adding code for each new scheme.
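The filtering described above ultimately evaluates a Boolean expression over per-pixel quality flags. A minimal sketch of that final masking step; the flag scheme (0 = best, 1 = good, and so on) and the arrays are hypothetical, and the real service derives the expressions from the ontology rather than hard-coding them.

import numpy as np

def screen(data, quality, predicate):
    """Mask data values whose quality flag fails the recommended predicate."""
    keep = predicate(np.asarray(quality))
    return np.where(keep, data, np.nan)

# "Keep only best-or-good retrievals," composed from simple reusable conditionals
best_or_good = lambda q: (q >= 0) & (q <= 1)

data = np.array([10.2, 11.5, 9.8, 12.1])
qual = np.array([0, 2, 1, 3])
print(screen(data, qual, best_or_good))  # [10.2, nan, 9.8, nan]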
Assessing Metadata Quality of a Federally Sponsored Health Data Repository.
Marc, David T; Beattie, James; Herasevich, Vitaly; Gatewood, Laël; Zhang, Rui
2016-01-01
The U.S. Federal Government developed HealthData.gov to disseminate healthcare datasets to the public. Metadata is provided for each dataset and is the sole source of information to find and retrieve data. This study employed automated quality assessments of the HealthData.gov metadata published from 2012 to 2014 to measure completeness, accuracy, and consistency of applying standards. The results demonstrated that metadata published in earlier years had lower completeness, accuracy, and consistency. Also, metadata that underwent modifications following their original creation were of higher quality. HealthData.gov did not uniformly apply the Dublin Core Metadata Initiative to the metadata, which is a widely accepted metadata standard. These findings suggested that the HealthData.gov metadata suffered from quality issues, particularly related to information that was not frequently updated. The results supported the need for policies to standardize metadata and contributed to the development of automated measures of metadata quality. PMID:28269883
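Of the three qualities measured, completeness is the most mechanical: the fraction of required metadata elements that are populated. A toy sketch under assumed names; the element list below is a hypothetical subset of Dublin Core, not the study's actual checklist.

# Hypothetical subset of Dublin Core elements used for the check
DUBLIN_CORE = ["title", "description", "publisher", "date", "identifier", "format"]

def completeness(record, required=DUBLIN_CORE):
    """Fraction of required metadata elements present and non-empty."""
    filled = sum(1 for field in required if str(record.get(field, "")).strip())
    return filled / len(required)

record = {"title": "Hospital Compare", "publisher": "CMS", "date": "2014-05-01"}
print(completeness(record))  # 0.5: three of six required elements populated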
Systematic Assessment of the Hemolysis Index: Pros and Cons.
Lippi, Giuseppe
2015-01-01
Preanalytical quality is as important as the analytical and postanalytical quality in laboratory diagnostics. After decades of visual inspection to establish whether or not a diagnostic sample may be suitable for testing, automated assessment of hemolysis index (HI) has now become available in a large number of laboratory analyzers. Although most national and international guidelines support systematic assessment of sample quality via HI, there is widespread perception that this indication has not been thoughtfully acknowledged. Potential explanations include concern of increased specimen rejection rate, poor harmonization of analytical techniques, lack of standardized units of measure, differences in instrument-specific cutoff, negative impact on throughput, organization and laboratory economics, and lack of a reliable quality control system. Many of these concerns have been addressed. Evidence now supports automated HI in improving quality and patient safety. These will be discussed. © 2015 Elsevier Inc. All rights reserved.
Feldmesser, Ester; Rosenwasser, Shilo; Vardi, Assaf; Ben-Dor, Shifra
2014-02-22
The advent of Next Generation Sequencing technologies and corresponding bioinformatics tools allows the definition of transcriptomes in non-model organisms. Non-model organisms are of great ecological and biotechnological significance, and consequently the understanding of their unique metabolic pathways is essential. Several methods that integrate de novo assembly with genome-based assembly have been proposed. Yet, there are many open challenges in defining genes, particularly where genomes are not available or incomplete. Despite the large numbers of transcriptome assemblies that have been performed, quality control of the transcript building process, particularly on the protein level, is rarely performed if ever. To test and improve the quality of the automated transcriptome reconstruction, we used manually defined and curated genes, several of them experimentally validated. Several approaches to transcript construction were utilized, based on the available data: a draft genome, high quality RNAseq reads, and ESTs. In order to maximize the contribution of the various data, we integrated methods including de novo and genome based assembly, as well as EST clustering. After each step a set of manually curated genes was used for quality assessment of the transcripts. The interplay between the automated pipeline and the quality control indicated which additional processes were required to improve the transcriptome reconstruction. We discovered that E. huxleyi has a very high percentage of non-canonical splice junctions, and relatively high rates of intron retention, which caused unique issues with the currently available tools. While individual tools missed genes and artificially joined overlapping transcripts, combining the results of several tools improved the completeness and quality considerably. The final collection, created from the integration of several quality control and improvement rounds, was compared to the manually defined set both on the DNA and protein levels, and resulted in an improvement of 20% versus any of the read-based approaches alone. To the best of our knowledge, this is the first time that an automated transcript definition is subjected to quality control using manually defined and curated genes and thereafter the process is improved. We recommend using a set of manually curated genes to troubleshoot transcriptome reconstruction.
Wolff, Reuben H.; Wong, Michael F.
2008-01-01
Since November 1998, water-quality data have been collected from the H-3 Highway Storm Drain C, which collects runoff from a 4-mi-long viaduct, and from Halawa Stream on Oahu, Hawaii. From January 2001 to August 2004, data were collected from the storm drain and four stream sites in the Halawa Stream drainage basin as part of the State of Hawaii Department of Transportation Storm Water Monitoring Program. Data from the stormwater monitoring program have been published in annual reports. This report uses these water-quality data to explore how the highway storm-drain runoff affects Halawa Stream and the factors that might be controlling the water quality in the drainage basin. In general, concentrations of nutrients, total dissolved solids, and total suspended solids were lower in highway runoff from Storm Drain C than at stream sites upstream and downstream of Storm Drain C. The opposite trend was observed for most trace metals, which generally occurred in higher concentrations in the highway runoff from Storm Drain C than in the samples collected from Halawa Stream. The absolute contribution from Storm Drain C highway runoff, in terms of total storm loads, was much smaller than at stations upstream and downstream, whereas the constituent yields (the relative contribution per unit drainage basin area) at Storm Drain C were comparable to or higher than storm yields at stations upstream and downstream. Most constituent concentrations and loads in stormwater runoff increased in a downstream direction. The timing of the storm sampling is an important factor controlling constituent concentrations observed in stormwater runoff samples. Automated point samplers were used to collect grab samples during the period of increasing discharge of the storm, throughout the stormflow peak, and during the period of decreasing discharge of the storm, whereas manually collected grab samples were generally collected during the later stages near the end of the storm. Grab samples were analyzed to determine concentrations and loads at a particular point in time. Flow-weighted time composite samples from the automated point samplers were analyzed to determine mean constituent concentrations or loads during a storm. Chemical analysis of individual grab samples from the automated point sampler at Storm Drain C demonstrated the 'first flush' phenomenon (higher constituent concentrations at the beginning of runoff events) for the trace metals cadmium, lead, zinc, and copper, whose concentrations were initially high during the period of increasing discharge and gradually decreased over the duration of the storm. Water-quality data from Storm Drain C and four stream sites were compared to the State of Hawaii Department of Health (HDOH) water-quality standards to determine the effects of highway storm runoff on the water quality of Halawa Stream. The geometric-mean standards and the 10- and 2-percent-of-the-time concentration standards for total nitrogen, nitrite plus nitrate, total phosphorus, total suspended solids, and turbidity were exceeded in many of the comparisons. However, these standards were not designed for stormwater sampling, in which constituent concentrations would be expected to increase for short periods of time. With the aim of enhancing the usefulness of the water-quality data, several modifications to the stormwater monitoring program are suggested.
These suggestions include (1) periodic analysis of discrete samples from the automated point samplers over the course of a storm to get a clearer profile of the storm, from first flush to the end of the receding discharge; (2) adding an analysis of the dissolved fractions of metals to the sampling plan; (3) installation of an automatic sampler at Bridge 8 to enable sampling earlier in the storms; (4) a one-time sampling and analysis of soils upstream of Bridge 8 for baseline contaminant concentrations; and (5) collection of samples from Halawa Stream during low-flow conditions.
1990-03-29
This is our final report on the Audit of Automated Data Processing Support of Investigative and Security Missions at the Defense Investigative Service for your information and use. Comments on a draft of this report were considered in preparing the final report. The audit was made from May through October 1989. The objectives of the audit were to determine if the Defense Investigative Service (DIS) was effectively managing automated data
O'Connor, Annette M; Tsafnat, Guy; Gilbert, Stephen B; Thayer, Kristina A; Wolfe, Mary S
2018-01-09
The second meeting of the International Collaboration for Automation of Systematic Reviews (ICASR) was held 3-4 October 2016 in Philadelphia, Pennsylvania, USA. ICASR is an interdisciplinary group whose aim is to maximize the use of technology for conducting rapid, accurate, and efficient systematic reviews of scientific evidence. Having automated tools for systematic review should enable more transparent and timely review, maximizing the potential for identifying and translating research findings to practical application. The meeting brought together multiple stakeholder groups including users of summarized research, methodologists who explore production processes and systematic review quality, and technologists such as software developers, statisticians, and vendors. This diversity of participants was intended to ensure effective communication with numerous stakeholders about progress toward automation of systematic reviews and stimulate discussion about potential solutions to identified challenges. The meeting highlighted challenges, both simple and complex, and raised awareness among participants about ongoing efforts by various stakeholders. An outcome of this forum was to identify several short-term projects that participants felt would advance the automation of tasks in the systematic review workflow including (1) fostering better understanding about available tools, (2) developing validated datasets for testing new tools, (3) determining a standard method to facilitate interoperability of tools such as through an application programming interface or API, and (4) establishing criteria to evaluate the quality of tools' output. ICASR 2016 provided a beneficial forum to foster focused discussion about tool development and resources and reconfirm ICASR members' commitment toward systematic reviews' automation.
NASA Technical Reports Server (NTRS)
1984-01-01
The two manufacturing concepts developed represent innovative, technologically advanced manufacturing schemes. The concepts were selected to facilitate an in depth analysis of manufacturing automation requirements in the form of process mechanization, teleoperation and robotics, and artificial intelligence. While the cost effectiveness of these facilities has not been analyzed as part of this study, both appear entirely feasible for the year 2000 timeframe. The growing demand for high quality gallium arsenide microelectronics may warrant the ventures.
Recent trends in laboratory automation in the pharmaceutical industry.
Rutherford, M L; Stinger, T
2001-05-01
The impact of robotics and automation on the pharmaceutical industry over the last two decades has been significant. In the last ten years, the emphasis of laboratory automation has shifted from the support of manufactured products and quality control of laboratory applications, to research and development. This shift has been the direct result of an increased emphasis on the identification, development and eventual marketing of innovative new products. In this article, we will briefly identify and discuss some of the current trends in laboratory automation in the pharmaceutical industry as they apply to research and development, including screening, sample management, combinatorial chemistry, ADME/Tox and pharmacokinetics.
Solís-Marcos, Ignacio; Galvao-Carmona, Alejandro; Kircher, Katja
2017-01-01
Research on partially automated driving has revealed relevant problems with driving performance, particularly when drivers’ intervention is required (e.g., take-over when automation fails). Mental fatigue has commonly been proposed to explain these effects after prolonged automated drives. However, performance problems have also been reported after just a few minutes of automated driving, indicating that other factors may also be involved. We hypothesize that, besides mental fatigue, an underload effect of partial automation may also affect driver attention. In this study, such potential effect was investigated during short periods of partially automated and manual driving and at different speeds. Subjective measures of mental demand and vigilance and performance to a secondary task (an auditory oddball task) were used to assess driver attention. Additionally, modulations of some specific attention-related event-related potentials (ERPs, N1 and P3 components) were investigated. The mental fatigue effects associated with the time on task were also evaluated by using the same measurements. Twenty participants drove in a fixed-base simulator while performing an auditory oddball task that elicited the ERPs. Six conditions were presented (5–6 min each) combining three speed levels (low, comfortable and high) and two automation levels (manual and partially automated). The results showed that, when driving partially automated, scores in subjective mental demand and P3 amplitudes were lower than in the manual conditions. Similarly, P3 amplitude and self-reported vigilance levels decreased with the time on task. Based on previous studies, these findings might reflect a reduction in drivers’ attention resource allocation, presumably due to the underload effects of partial automation and to the mental fatigue associated with the time on task. Particularly, such underload effects on attention could explain the performance decrements after short periods of automated driving reported in other studies. However, further studies are needed to investigate this relationship in partial automation and in other automation levels. PMID:29163112
Quality control in urinalysis.
Takubo, T; Tatsumi, N
1999-01-01
Quality control (QC) has been introduced in laboratories, and QC surveys in urinalysis have been performed by the College of American Pathologists, the Japanese Association of Medical Technologists, the Osaka Medical Association, and manufacturers. A QC survey in urinalysis of synthetic urine using a reagent strip and instrument made by the same manufacturer, and using an automated urine cell analyser, provided satisfactory results among laboratories. A QC survey in urinalysis of synthetic urine using reagent strips and instruments made by various manufacturers indicated differences in the determined values among manufacturers, and between manual and automated methods, because the reagent strips and instruments have different characteristics. A QC photo survey in urinalysis based on microscopic photos of urine sediment constituents indicated differences in the identification of cells among laboratories. From these results, it is necessary to standardize the reagent strip method, manual and automated methods, and synthetic urine.
Honeywell Technical Order Transfer Tests.
1987-06-12
of simple corrections, a reasonable reproduction of the original could be generated. The quality was not good enough for a production environment. Lack of automated quality control (AQC) tools could account for the errors.
Toward automated assessment of health Web page quality using the DISCERN instrument.
Allam, Ahmed; Schulz, Peter J; Krauthammer, Michael
2017-05-01
As the Internet becomes the number one destination for obtaining health-related information, there is an increasing need to identify health Web pages that convey an accurate and current view of medical knowledge. In response, the research community has created multicriteria instruments for reliably assessing online medical information quality. One such instrument is DISCERN, which measures health Web page quality by assessing an array of features. In order to scale up use of the instrument, there is interest in automating the quality evaluation process by building machine learning (ML)-based DISCERN Web page classifiers. The paper addresses 2 key issues that are essential before constructing automated DISCERN classifiers: (1) generation of a robust DISCERN training corpus useful for training classification algorithms, and (2) assessment of the usefulness of the current DISCERN scoring schema as a metric for evaluating the performance of these algorithms. Using DISCERN, 272 Web pages discussing treatment options in breast cancer, arthritis, and depression were evaluated and rated by trained coders. First, different consensus models were compared to obtain a robust aggregated rating among the coders, suitable for a DISCERN ML training corpus. Second, a new DISCERN scoring criterion was proposed (features-based score) as an ML performance metric that is more reflective of the score distribution across different DISCERN quality criteria. First, we found that a probabilistic consensus model applied to the DISCERN instrument was robust against noise (random ratings) and superior to other approaches for building a training corpus. Second, we found that the established DISCERN scoring schema (overall score) is ill-suited to measure ML performance for automated classifiers. Use of a probabilistic consensus model is advantageous for building a training corpus for the DISCERN instrument, and use of a features-based score is an appropriate ML metric for automated DISCERN classifiers. The code for the probabilistic consensus model is available at https://bitbucket.org/A_2/em_dawid/ . © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
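The study's probabilistic consensus model is a Dawid-Skene-style latent-truth model (code linked above). As a much simpler stand-in to show what consensus aggregation over coders does, here is a majority-vote version over per-item DISCERN ratings; all values are illustrative.

from collections import Counter

def consensus_ratings(ratings_by_coder):
    """Aggregate per-item DISCERN ratings from several coders by majority vote,
    a simple stand-in for the probabilistic model used in the paper."""
    n_items = len(ratings_by_coder[0])
    consensus = []
    for item in range(n_items):
        votes = Counter(coder[item] for coder in ratings_by_coder)
        consensus.append(votes.most_common(1)[0][0])
    return consensus

# Three coders rating five DISCERN items on a 1-5 scale (illustrative values)
coders = [[5, 4, 3, 2, 5],
          [5, 4, 2, 2, 4],
          [4, 4, 3, 2, 5]]
print(consensus_ratings(coders))  # [5, 4, 3, 2, 5]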
Serologic test systems development. Progress report, July 1, 1976--September 30, 1977
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saunders, G.C.; Clinard, E.H.; Bartlett, M.L.
1978-01-01
Work has continued on the development and application of the Enzyme-Labeled Antibody (ELA) test to USDA needs. Results on trichinosis, brucellosis, and staphylococcal enterotoxin A detection are very encouraging. A field test for trichinosis detection is being worked out in cooperation with Food Safety and Quality Service personnel. Work is in progress with the Technicon Instrument Corporation to develop a modification of their equipment to automatically process samples by the ELA procedure. An automated ELA readout instrument for 96-well trays has been completed and is being checked out.
Reaching out to clinicians: implementation of a computerized alert system.
Degnan, Dan; Merryfield, Dave; Hultgren, Steve
2004-01-01
Several published articles have identified that providing automated, computer-generated clinical alerts about potentially critical clinical situations should result in better quality of care. In 1999, the pharmacy department at a community hospital network implemented and refined a commercially available, computerized clinical alert system. This case report discusses the implementation process, gives examples of how the system is used, and describes results following implementation. The use of the clinical alert system in this hospital network resulted in improved patient safety as well as in greater efficiency and decreased costs.
Marsolo, Keith; Margolis, Peter A; Forrest, Christopher B; Colletti, Richard B; Hutton, John J
2015-01-01
We collaborated with the ImproveCareNow Network to create a proof-of-concept architecture for a network-based Learning Health System. This collaboration involved transitioning an existing registry to one that is linked to the electronic health record (EHR), enabling a "data in once" strategy. We sought to automate a series of reports that support care improvement while also demonstrating the use of observational registry data for comparative effectiveness research. We worked with three leading EHR vendors to create EHR-based data collection forms. We automated many of ImproveCareNow's analytic reports and developed an application for storing protected health information and tracking patient consent. Finally, we deployed a cohort identification tool to support feasibility studies and hypothesis generation. There is ongoing uptake of the system. To date, 31 centers have adopted the EHR-based forms and 21 centers are uploading data to the registry. Usage of the automated reports remains high and investigators have used the cohort identification tools to respond to several clinical trial requests. The current process for creating EHR-based data collection forms requires groups to work individually with each vendor. A vendor-agnostic model would allow for more rapid uptake. We believe that interfacing network-based registries with the EHR would allow them to serve as a source of decision support. Additional standards are needed in order for this vision to be achieved, however. We have successfully implemented a proof-of-concept Learning Health System while providing a foundation on which others can build. We have also highlighted opportunities where sponsors could help accelerate progress.
Massachusetts Institute of Technology Consortium Agreement
1999-03-01
This is the third progress report of the M.I.T. Home Automation and Healthcare Consortium-Phase Two. It covers the majority of the new findings, concepts, and research projects of home automation and healthcare, ranging from human modeling, patient monitoring, and diagnosis to new sensors and actuators, physical aids, human-machine interfaces, and home automation infrastructure. This report contains several patentable concepts, algorithms, and designs.
ERIC Educational Resources Information Center
Evanini, Keelan; Heilman, Michael; Wang, Xinhao; Blanchard, Daniel
2015-01-01
This report describes the initial automated scoring results that were obtained using the constructed responses from the Writing and Speaking sections of the pilot forms of the "TOEFL Junior"® Comprehensive test administered in late 2011. For all of the items except one (the edit item in the Writing section), existing automated scoring…
HDTS 2017.1 Testing and Verification Document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiteside, T.
2017-12-01
This report is a continuation of the series of Hunter Dose Tracking System (HDTS) Quality Assurance documents including (Foley and Powell, 2010; Dixon, 2012; Whiteside, 2017b). In this report we have created a suite of automated test cases and a system to analyze the results of those tests, as well as documented the methodology to ensure the field system performs within specifications. The software test cases cover all of the functions and interactions of functions that are practical to test. With the developed framework, if software defects are discovered, it will be easy to create one or more test cases to reproduce the defect and ensure that code changes correct the defect.
Mueller, David S.
2016-06-21
The software program QRev applies common and consistent computational algorithms combined with automated filtering and quality assessment of the data to improve the quality and efficiency of streamflow measurements, and helps ensure that U.S. Geological Survey streamflow measurements are consistent, accurate, and independent of the manufacturer of the instrument used to make the measurement. Software from different manufacturers uses different algorithms for various aspects of the data processing and discharge computation. The algorithms used by QRev to filter data, interpolate data, and compute discharge are documented and compared to the algorithms used in the manufacturers' software. QRev applies consistent algorithms and creates a data structure that is independent of the data source. QRev saves an extensible markup language (XML) file that can be imported into databases or electronic field notes software. This report is the technical manual for version 2.8 of QRev.
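As generic background only (QRev itself processes ADCP data, and its algorithms are documented in this report), conventional discharge computation integrates velocity times depth across the cross section. The classical mid-section formula Q = sum(v_i * d_i * w_i) can be sketched as follows, with all numbers made up.

import numpy as np

def midsection_discharge(station, depth, velocity):
    """Mid-section method: each vertical carries the flow of a panel extending
    halfway to its neighbours, so Q = sum(v_i * d_i * w_i)."""
    x = np.asarray(station, dtype=float)
    widths = np.empty_like(x)
    widths[1:-1] = (x[2:] - x[:-2]) / 2.0
    widths[0] = (x[1] - x[0]) / 2.0
    widths[-1] = (x[-1] - x[-2]) / 2.0
    return float(np.sum(np.asarray(velocity) * np.asarray(depth) * widths))

# Stations (m), depths (m), mean vertical velocities (m/s) - illustrative
print(midsection_discharge([0, 2, 4, 6, 8],
                           [0.0, 1.2, 1.8, 1.1, 0.0],
                           [0.0, 0.5, 0.8, 0.4, 0.0]))  # 4.96 m^3/s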
Oldham, Athenia L; Drilling, Heather S; Stamps, Blake W; Stevenson, Bradley S; Duncan, Kathleen E
2012-11-20
The analysis of microbial assemblages in industrial, marine, and medical systems can inform decisions regarding quality control or mitigation. Modern molecular approaches to detect, characterize, and quantify microorganisms provide rapid and thorough measures unbiased by the need for cultivation. The requirement of timely extraction of high quality nucleic acids for molecular analysis is faced with specific challenges when used to study the influence of microorganisms on oil production. Production facilities are often ill equipped for nucleic acid extraction techniques, making the preservation and transportation of samples off-site a priority. As a potential solution, the possibility of extracting nucleic acids on-site using automated platforms was tested. The performance of two such platforms, the Fujifilm QuickGene-Mini80™ and Promega Maxwell®16, was compared to a widely used manual extraction kit, MOBIO PowerBiofilm™ DNA Isolation Kit, in terms of ease of operation, DNA quality, and microbial community composition. Three pipeline biofilm samples were chosen for these comparisons; two contained crude oil and corrosion products and the third transported seawater. Overall, the two more automated extraction platforms produced higher DNA yields than the manual approach. DNA quality was evaluated for amplification by quantitative PCR (qPCR) and end-point PCR to generate 454 pyrosequencing libraries for 16S rRNA microbial community analysis. Microbial community structure, as assessed by DGGE analysis and pyrosequencing, was comparable among the three extraction methods. Therefore, the use of automated extraction platforms should enhance the feasibility of rapidly evaluating microbial biofouling at remote locations or those with limited resources. PMID:23168231
Merritt, Stephanie M; Heimbaugh, Heather; LaChapell, Jennifer; Lee, Deborah
2013-06-01
This study is the first to examine the influence of implicit attitudes toward automation on users' trust in automation. Past empirical work has examined explicit (conscious) influences on user level of trust in automation but has not yet measured implicit influences. We examine concurrent effects of explicit propensity to trust machines and implicit attitudes toward automation on trust in an automated system. We examine differential impacts of each under varying automation performance conditions (clearly good, ambiguous, clearly poor). Participants completed both a self-report measure of propensity to trust and an Implicit Association Test measuring implicit attitude toward automation, then performed an X-ray screening task. Automation performance was manipulated within-subjects by varying the number and obviousness of errors. Explicit propensity to trust and implicit attitude toward automation did not significantly correlate. When the automation's performance was ambiguous, implicit attitude significantly affected automation trust, and its relationship with propensity to trust was additive: Increments in either were related to increases in trust. When errors were obvious, a significant interaction between the implicit and explicit measures was found, with those high in both having higher trust. Implicit attitudes have important implications for automation trust. Users may not be able to accurately report why they experience a given level of trust. To understand why users trust or fail to trust automation, measurements of implicit and explicit predictors may be necessary. Furthermore, implicit attitude toward automation might be used as a lever to effectively calibrate trust.
NASA Astrophysics Data System (ADS)
Theveneau, P.; Baker, R.; Barrett, R.; Beteva, A.; Bowler, M. W.; Carpentier, P.; Caserotto, H.; de Sanctis, D.; Dobias, F.; Flot, D.; Guijarro, M.; Giraud, T.; Lentini, M.; Leonard, G. A.; Mattenet, M.; McCarthy, A. A.; McSweeney, S. M.; Morawe, C.; Nanao, M.; Nurizzo, D.; Ohlsson, S.; Pernot, P.; Popov, A. N.; Round, A.; Royant, A.; Schmid, W.; Snigirev, A.; Surr, J.; Mueller-Dieckmann, C.
2013-03-01
Automation and advances in technology are the key elements in addressing the steadily increasing complexity of Macromolecular Crystallography (MX) experiments. Much of this complexity is due to the inter- and intra-crystal heterogeneity in diffraction quality often observed for crystals of multi-component macromolecular assemblies or membrane proteins. Such heterogeneity makes high-throughput sample evaluation an important and necessary tool for increasing the chances of a successful structure determination. The introduction at the ESRF of automatic sample changers in 2005 dramatically increased the number of samples that were tested for diffraction quality. This "first generation" of automation, coupled with advances in software aimed at optimising data collection strategies in MX, resulted in a three-fold increase in the number of crystal structures elucidated per year using data collected at the ESRF. In addition, sample evaluation can be further complemented using small angle scattering experiments on the newly constructed bioSAXS facility on BM29 and the micro-spectroscopy facility (ID29S). The construction of a second generation of automated facilities on the MASSIF (Massively Automated Sample Screening Integrated Facility) beam lines will build on these advances and should provide a paradigm shift in how MX experiments are carried out, which will benefit the entire Structural Biology community.
NASA Astrophysics Data System (ADS)
Krappe, Sebastian; Benz, Michaela; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian
2015-03-01
The morphological analysis of bone marrow smears is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually using a bright field microscope. This is a time-consuming, tedious, and partly subjective process. Furthermore, repeated examinations of a slide yield intra- and inter-observer variances. For this reason, automation of morphological bone marrow analysis is being pursued. This analysis comprises several steps: image acquisition and smear detection, cell localization and segmentation, feature extraction and cell classification. The automated classification of bone marrow cells depends on the automated cell segmentation and the choice of adequate features extracted from different parts of the cell. In this work we focus on the evaluation of support vector machines (SVMs) and random forests (RFs) for the differentiation of bone marrow cells into 16 different classes, including immature and abnormal cell classes. Data sets of different segmentation quality are used to test the two approaches. Automated solutions for the morphological analysis of bone marrow smears could use such a classifier to pre-classify bone marrow cells and thereby shorten the examination duration.
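A sketch of the SVM-versus-RF comparison at the heart of this evaluation, assuming morphological feature vectors and 16-class labels have already been extracted from segmented cells; the synthetic data and hyperparameters are placeholders, not the authors' settings.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_classifiers(features, labels, folds=5):
    """Cross-validated accuracy of an RBF-kernel SVM versus a random forest
    on (cell feature vector, cell class) pairs."""
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    return {
        "svm": cross_val_score(svm, features, labels, cv=folds).mean(),
        "rf": cross_val_score(rf, features, labels, cv=folds).mean(),
    }

# Synthetic stand-in for extracted cell features with 16 classes
X, y = make_classification(n_samples=800, n_features=30, n_informative=12,
                           n_classes=16, random_state=0)
print(compare_classifiers(X, y))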
2015-01-01
Biological assays formatted as microarrays have become a critical tool for the generation of the comprehensive data sets required for systems-level understanding of biological processes. Manual annotation of data extracted from images of microarrays, however, remains a significant bottleneck, particularly for protein microarrays due to the sensitivity of this technology to weak artifact signal. In order to automate the extraction and curation of data from protein microarrays, we describe an algorithm called Crossword that logically combines information from multiple approaches to fully automate microarray segmentation. Automated artifact removal is also accomplished by segregating structured pixels from the background noise using iterative clustering and pixel connectivity. Correlation of the location of structured pixels across image channels is used to identify and remove artifact pixels from the image prior to data extraction. This component improves the accuracy of data sets while reducing the requirement for time-consuming visual inspection of the data. Crossword enables a fully automated protocol that is robust to significant spatial and intensity aberrations. Overall, the average amount of user intervention is reduced by an order of magnitude and the data quality is increased through artifact removal and reduced user variability. The increase in throughput should aid the further implementation of microarray technologies in clinical studies. PMID:24417579
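One ingredient of the artifact-removal strategy, segregating structured pixels by connectivity, can be sketched as a connected-component size filter. The cross-channel correlation step described above is omitted, and the size threshold is an arbitrary assumption.

import numpy as np
from scipy import ndimage

def remove_small_artifacts(mask, min_pixels=20):
    """Drop connected foreground components smaller than min_pixels;
    isolated speckle is treated as artifact rather than spot signal."""
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    keep = {i + 1 for i, s in enumerate(sizes) if s >= min_pixels}
    return np.isin(labeled, list(keep))

mask = np.zeros((10, 10), dtype=bool)
mask[2:7, 2:7] = True          # a real 25-pixel spot
mask[0, 9] = True              # a 1-pixel speckle artifact
print(remove_small_artifacts(mask).sum())  # 25: speckle removed, spot kept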
Automated Microflow NMR: Routine Analysis of Five-Microliter Samples
Jansma, Ariane; Chuan, Tiffany; Geierstanger, Bernhard H.; Albrecht, Robert W.; Olson, Dean L.; Peck, Timothy L.
2006-01-01
A microflow CapNMR probe double-tuned for 1H and 13C was installed on a 400-MHz NMR spectrometer and interfaced to an automated liquid handler. Individual samples dissolved in DMSO-d6 are submitted for NMR analysis in vials containing as little as 10 μL of sample. Sets of samples are submitted in a low-volume 384-well plate. Of the 10 μL of sample per well, as with vials, 5 μL is injected into the microflow NMR probe for analysis. For quality control of chemical libraries, 1D NMR spectra are acquired under full automation from 384-well plates on as many as 130 compounds within 24 h using 128 scans per spectrum and a sample-to-sample cycle time of ∼11 min. Because of the low volume requirements and high mass sensitivity of the microflow NMR system, 30 nmol of a typical small molecule is sufficient to obtain high-quality, well-resolved, 1D proton or 2D COSY NMR spectra in ∼6 or 20 min of data acquisition time per experiment, respectively. Implementation of pulse programs with automated solvent peak identification and suppression allow for reliable data collection, even for samples submitted in fully protonated DMSO. The automated microflow NMR system is controlled and monitored using web-based software. PMID:16194121
Automation of Cataloging: Effects on Use of Staff, Efficiency, and Service to Patrons.
ERIC Educational Resources Information Center
Bednar, Marie
1988-01-01
Describes the effects of the automation of cataloging processes at Pennsylvania State University. The discussion covers the reorganization of professional and paraprofessional personnel and job responsibilities, staff reactions to the changes, the impact on cataloging quality and efficiency, and patron satisfaction with the services offered. (15…
Automated Inspection And Precise Grinding Of Gears
NASA Technical Reports Server (NTRS)
Frint, Harold; Glasow, Warren
1995-01-01
Method of precise grinding of spiral bevel gears involves automated inspection of gear-tooth surfaces followed by adjustments of machine-tool settings to minimize differences between actual and nominal surfaces. Similar to method described in "Computerized Inspection of Gear-Tooth Surfaces" (LEW-15736). Yields gears of higher quality, with significant reduction in manufacturing and inspection time.
ERIC Educational Resources Information Center
Wang, Lipeng; Li, Mingqiu
2012-01-01
Cultivating high-quality engineering technicians with the ability to innovate in scientific research, an important academic ability for them, has become a fundamental goal for engineering majors. This paper mainly explores the development of comprehensive and design-oriented experiments in automation based on scientific…
An Automated Data Analysis Tool for Livestock Market Data
ERIC Educational Resources Information Center
Williams, Galen S.; Raper, Kellie Curry
2011-01-01
This article describes an automated data analysis tool that allows Oklahoma Cooperative Extension Service educators to disseminate results in a timely manner. Primary data collected at Oklahoma Quality Beef Network (OQBN) certified calf auctions across the state results in a large amount of data per sale site. Sale summaries for an individual sale…
Histogram deconvolution - An aid to automated classifiers
NASA Technical Reports Server (NTRS)
Lorre, J. J.
1983-01-01
It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.
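The core idea, recovering a cleaner histogram by deconvolving the noise kernel, can be sketched with Richardson-Lucy iterations. The 1-D case and the Gaussian kernel width below are assumptions for illustration, not the paper's method.

```python
import numpy as np

def richardson_lucy_1d(hist, kernel, n_iter=30):
    """Iteratively estimate the noise-free histogram h from measured m = h * k."""
    est = np.full(hist.shape, float(hist.mean()))
    k_flip = kernel[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, kernel, mode="same")
        conv[conv == 0] = 1e-12            # guard against division by zero
        est *= np.convolve(hist / conv, k_flip, mode="same")
    return est

# Assumed noise model: zero-mean Gaussian pixel noise of known width
x = np.arange(-10, 11)
kernel = np.exp(-0.5 * (x / 2.0) ** 2)
kernel /= kernel.sum()

true = np.zeros(64); true[20] = 100.0; true[40] = 60.0
noisy = np.convolve(true, kernel, mode="same")
print(richardson_lucy_1d(noisy, kernel).round(1).max())   # peaks re-sharpen
```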
Automated Formative Assessment as a Tool to Scaffold Student Documentary Writing
ERIC Educational Resources Information Center
Ferster, Bill; Hammond, Thomas C.; Alexander, R. Curby; Lyman, Hunt
2012-01-01
The hurried pace of the modern classroom does not permit formative feedback on writing assignments at the frequency or quality recommended by the research literature. One solution for increasing individual feedback to students is to incorporate some form of computer-generated assessment. This study explores the use of automated assessment of…
[Influence of new technologies in modern microbiology].
Pumarola, Tomás
2010-10-01
The influence of new technologies in modern microbiology is directly related to their automation, the real driving force of change. Automation has been present since the beginning of clinical microbiology, but since the 1980s it has undergone huge development, which is now being projected into the immediate future across all areas of the speciality. Automation has become a prime organizational tool. However, its main disadvantage is that it has no limits, which, in association with current economically oriented criteria, is encouraging initiatives to integrate the various laboratory specialities into one production center and, eventually, to outsource its activity. This process could significantly reduce the quality of clinical microbiology and the training of future specialists or, even worse, lead to the eventual disappearance of the speciality, at least as it is known today. The future development of highly automated and integrated laboratories is an irreversible process. To preserve the quality of the speciality and of specialist training, rather than fight directly against this process, we must, as microbiologists, actively participate in it with creativity and leadership. Copyright © 2010 Elsevier España S.L. All rights reserved.
Automation Improves Schedule Quality and Increases Scheduling Efficiency for Residents.
Perelstein, Elizabeth; Rose, Ariella; Hong, Young-Chae; Cohn, Amy; Long, Micah T
2016-02-01
Medical resident scheduling is difficult due to multiple rules, competing educational goals, and ever-evolving graduate medical education requirements. Despite this, schedules are typically created manually, consuming hours of work, producing schedules of varying quality, and yielding negative consequences for resident morale and learning. To determine whether computerized decision support can improve the construction of residency schedules, saving time and improving schedule quality. The Optimized Residency Scheduling Assistant was designed by a team from the University of Michigan Department of Industrial and Operations Engineering. It was implemented in the C.S. Mott Children's Hospital Pediatric Emergency Department in the 2012-2013 academic year. The 4 metrics of schedule quality that were compared between the 2010-2011 and 2012-2013 academic years were the incidence of challenging shift transitions, the incidence of shifts following continuity clinics, the total shift inequity, and the night shift inequity. All scheduling rules were successfully incorporated. Average schedule creation time fell from 22-28 hours to 4-6 hours per month, and 3 of the 4 metrics of schedule quality significantly improved. For the implementation year, the incidence of challenging shift transitions decreased from 83 to 14 (P < .01); the incidence of postclinic shifts decreased from 72 to 32 (P < .01); and the SD of night shifts dropped by 55.6% (P < .01). This automated shift scheduling system improves on the manual scheduling process, reducing time spent and improving schedule quality. Embracing such automated tools can benefit residency programs with shift-based scheduling needs.
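To make the flavor of such rule-based scheduling concrete, here is a deliberately tiny greedy toy, not the Optimized Residency Scheduling Assistant itself (which the abstract describes as an optimization-based decision support tool): it balances shift counts and enforces one transition rule; all names and numbers are illustrative.

```python
residents = ["R1", "R2", "R3", "R4"]
counts = {r: {"day": 0, "night": 0} for r in residents}
last_kind = {r: None for r in residents}
schedule = {}

for day in range(28):
    for kind in ("day", "night"):
        # rule: a resident whose most recent shift was a night
        # cannot take a day shift (a "challenging transition")
        eligible = [r for r in residents
                    if not (last_kind[r] == "night" and kind == "day")]
        pick = min(eligible, key=lambda r: counts[r][kind])  # balance load
        schedule[(day, kind)] = pick
        counts[pick][kind] += 1
        last_kind[pick] = kind

print(counts)  # day/night totals stay nearly equal across residents
```

A real system would encode all rules as hard and soft constraints in an integer program rather than a greedy pass, which is what makes provably fair schedules possible.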
Perez-Ponce, Hector; Daul, Christian; Wolf, Didier; Noel, Alain
2013-08-01
In mammography, image quality assessment has to be directly related to breast cancer indicator (e.g. microcalcifications) detectability. Recently, we proposed an X-ray source/digital detector (XRS/DD) model leading to such an assessment. This model simulates very realistic contrast-detail phantom (CDMAM) images leading to gold disc (representing microcalcifications) detectability thresholds that are very close to those of real images taken under the simulated acquisition conditions. The detection step was performed with a mathematical observer. The aim of this contribution is to include human observers into the disc detection process in real and virtual images to validate the simulation framework based on the XRS/DD model. Mathematical criteria (contrast-detail curves, image quality factor, etc.) are used to assess and to compare, from the statistical point of view, the cancer indicator detectability in real and virtual images. The quantitative results given in this paper show that the images simulated by the XRS/DD model are useful for image quality assessment in the case of all studied exposure conditions using either human or automated scoring. Also, this paper confirms that with the XRS/DD model the image quality assessment can be automated and the whole time of the procedure can be drastically reduced. Compared to standard quality assessment methods, the number of images to be acquired is divided by a factor of eight. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
Polonchuk, Liudmila
2014-01-01
Patch-clamping is a powerful technique for investigating ion channel function and regulation. However, its low throughput has hampered the profiling of large compound series in early drug development. Fortunately, automation has revolutionized the field of experimental electrophysiology over the past decade. Whereas the first automated patch-clamp instruments using planar patch-clamp technology offered only moderate throughput, a few second-generation automated platforms recently launched by various companies have significantly increased the ability to form a high number of high-resistance seals. Among them is the SyncroPatch(®) 96 (Nanion Technologies GmbH, Munich, Germany), a fully automated giga-seal patch-clamp system with the highest throughput on the market. By recording from up to 96 cells simultaneously, the SyncroPatch(®) 96 makes it possible to substantially increase throughput without compromising data quality. This chapter describes the features of this innovative automated electrophysiology system and the protocols used for a successful transfer of the established hERG assay to this high-throughput automated platform.
Automated packing systems: review of industrial implementations
NASA Astrophysics Data System (ADS)
Whelan, Paul F.; Batchelor, Bruce G.
1993-08-01
A rich theoretical background to the problems that occur in the automation of material handling can be found in the operations research, production engineering, systems engineering, and automation (more specifically, machine vision) literature. This work has contributed towards the design of intelligent handling systems. This paper reviews the application of these automated material handling and packing techniques to industrial problems. The discussion also highlights the systems integration issues involved in these applications. An outline of one such industrial application, the automated placement of shape templates onto leather hides, is also given. The purpose of this system is to arrange shape templates on a leather hide efficiently, so as to minimize leather waste, before the pieces are automatically cut from the hide. These pieces are used in the furniture and car manufacturing industries for the upholstery of high-quality leather chairs and car seats. Currently this type of operation is only semi-automated. The paper outlines the problems involved in the full automation of such a procedure.
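As a rough illustration of the packing problem class involved, here is a first-fit "shelf" heuristic for rectangles. Real leather nesting must handle irregular template shapes and hide defects, so this is only a crude stand-in; the function and its parameters are invented for illustration.

```python
def shelf_pack(pieces, sheet_w):
    """pieces: list of (w, h); returns [(x, y, w, h)] placements on shelves."""
    placements, shelves = [], []            # each shelf: [x_cursor, y, height]
    y_cursor = 0
    for w, h in sorted(pieces, key=lambda p: -p[1]):     # tallest first
        for shelf in shelves:
            if shelf[0] + w <= sheet_w and h <= shelf[2]:
                placements.append((shelf[0], shelf[1], w, h))
                shelf[0] += w               # advance along the shelf
                break
        else:                               # no room: open a new shelf
            shelves.append([w, y_cursor, h])
            placements.append((0, y_cursor, w, h))
            y_cursor += h
    return placements

print(shelf_pack([(3, 2), (2, 2), (4, 1)], sheet_w=6))
```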
Takamura, Akiteru; Ito, Sayori; Maruyama, Kaori; Ryo, Yusuke; Saito, Manami; Fujimura, Shuhei; Ishiura, Yuna; Hori, Ariyuki
2017-03-01
Automated external defibrillators (AED) have been installed in schools in Japan since 2004, and the government strongly recommends teaching basic life support (BLS). We therefore examined the quality of BLS education and AED installation in schools. We conducted a prefecture-wide questionnaire survey of all primary and junior high schools in 2016, to assess BLS education and AED installation against the recommendations of the Japan Circulation Society. The results were analyzed using descriptive statistics and chi-squared test. In total, 195 schools out of 315 (62%) responded, of which 38% have introduced BLS education for children. BLS training was held in a smaller proportion of primary schools (18%) than junior high schools (86%). More than 90% of primary school staff had undergone BLS training in the previous 2 years. The most common locations of AED were the gymnasium (32%) followed by entrance hall (28%), staffroom (25%), and infirmary (12%). The reasons given for location were that it was obvious (34%), convenient for staff (32%), could be used out of hours (17%), and the most likely location for a heart attack (15%). Approximately 18% of schools reported that it takes >5 min to reach the AED from the furthest point. BLS training, AED location, and understanding of both are not sufficient to save children's lives efficiently. Authorities should make recommendations about the correct number of AED, and their location, and provide more information to improve the quality of BLS training in schools. © 2016 Japan Pediatric Society.
Quality control in urodynamics and the role of software support in the QC procedure.
Hogan, S; Jarvis, P; Gammie, A; Abrams, P
2011-11-01
This article aims to identify quality control (QC) best practice, to review published QC audits in order to identify how closely good practice is followed, and to carry out a market survey of the software features that support QC offered by urodynamics machines available in the UK. All UK distributors of urodynamic systems were contacted and asked to provide information on the software features relating to data quality of the products they supply. The results of the market survey show that the features offered by manufacturers differ greatly. Automated features, which can be turned off in most cases, include: cough recognition, detrusor contraction detection, and high pressure alerts. There are currently no systems that assess data quality based on published guidelines. A literature review of current QC guidelines for urodynamics was carried out; QC audits were included in the literature review to see how closely guidelines were being followed. This review highlights the fact that basic QC is not being carried out effectively by urodynamicists. Based on the software features currently available and the results of the literature review there is both the need and capacity for a greater degree of automation in relation to urodynamic data quality and accuracy assessment. Some progress has been made in this area and certain manufacturers have already developed automated cough detection. Copyright © 2011 Wiley Periodicals, Inc.
Development of design principles for automated systems in transport control.
Balfe, Nora; Wilson, John R; Sharples, Sarah; Clarke, Theresa
2012-01-01
This article reports the results of a qualitative study investigating attitudes towards and opinions of an advanced automation system currently used in UK rail signalling. In-depth interviews were held with 10 users, key issues associated with automation were identified and the automation's impact on the signalling task investigated. The interview data highlighted the importance of the signallers' understanding of the automation and their (in)ability to predict its outputs. The interviews also covered the methods used by signallers to interact with and control the automation, and the perceived effects on their workload. The results indicate that despite a generally low level of understanding and ability to predict the actions of the automation system, signallers have developed largely successful coping mechanisms that enable them to use the technology effectively. These findings, along with parallel work identifying desirable attributes of automation from the literature in the area, were used to develop 12 principles of automation which can be used to help design new systems which better facilitate cooperative working. The work reported in this article was completed with the active involvement of operational rail staff who regularly use automated systems in rail signalling. The outcomes are currently being used to inform decisions on the extent and type of automation and user interfaces in future generations of rail control systems.
Automation in College Libraries.
ERIC Educational Resources Information Center
Werking, Richard Hume
1991-01-01
Reports the results of a survey of the "Bowdoin List" group of liberal arts colleges. The survey obtained information about (1) automation modules in place and when they had been installed; (2) financing of automation and its impacts on the library budgets; and (3) library director's views on library automation and the nature of the…
Sutton, Robert M.; Niles, Dana; Meaney, Peter A.; Aplenc, Richard; French, Benjamin; Abella, Benjamin S.; Lengetti, Evelyn L.; Berg, Robert A.; Helfaer, Mark A.; Nadkarni, Vinay
2013-01-01
Objective To investigate the effectiveness of brief bedside “booster” cardiopulmonary resuscitation (CPR) training to improve CPR guideline compliance of hospital-based pediatric providers. Design Prospective, randomized trial. Setting General pediatric wards at Children’s Hospital of Philadelphia. Subjects Sixty-nine Basic Life Support–certified hospital-based providers. Intervention CPR recording/feedback defibrillators were used to evaluate CPR quality during simulated pediatric arrest. After a 60-sec pretraining CPR evaluation, subjects were randomly assigned to one of three instructional/feedback methods to be used during CPR booster training sessions. All sessions (training/CPR manikin practice) were of equal duration (2 mins) and differed only in the method of corrective feedback given to participants during the session. The study arms were as follows: 1) instructor-only training; 2) automated defibrillator feedback only; and 3) instructor training combined with automated feedback. Measurements and Main Results Before instruction, 57% of the care providers performed compressions within guideline rate recommendations (rate >90 min−1 and <120 min−1); 71% met minimum depth targets (depth, >38 mm); and 36% met overall CPR compliance (rate and depth within targets). After instruction, guideline compliance improved (instructor-only training: rate 52% to 87% [p < .01], and overall CPR compliance, 43% to 78% [p < .02]; automated feedback only: rate, 70% to 96% [p = .02], depth, 61% to 100% [p < .01], and overall CPR compliance, 35% to 96% [p < .01]; and instructor training combined with automated feedback: rate 48% to 100% [p < .01], depth, 78% to 100% [p < .02], and overall CPR compliance, 30% to 100% [p < .01]). Conclusions Before booster CPR instruction, most certified Pediatric Basic Life Support providers did not perform guideline-compliant CPR. After a brief bedside training, CPR quality improved irrespective of training content (instructor vs. automated feedback). Future studies should investigate bedside training to improve CPR quality during actual pediatric cardiac arrests. PMID:20625336
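The compliance metrics reported above are straightforward to compute from per-compression logs. Below is a minimal sketch using the thresholds stated in the abstract (rate between 90 and 120 per minute, depth over 38 mm); the input format is an assumption.

```python
def compliance(rates_depths):
    """rates_depths: list of (rate in min^-1, depth in mm) per compression.
    Returns the fraction meeting rate, depth, and both targets."""
    n = len(rates_depths)
    rate_ok = sum(90 < r < 120 for r, d in rates_depths) / n
    depth_ok = sum(d > 38 for r, d in rates_depths) / n
    both_ok = sum(90 < r < 120 and d > 38 for r, d in rates_depths) / n
    return rate_ok, depth_ok, both_ok

print(compliance([(95, 40), (130, 35), (110, 42)]))  # -> roughly (0.67, 0.67, 0.67)
```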
Gold, Christian; Körber, Moritz; Lechner, David; Bengler, Klaus
2016-06-01
The aim of this study was to quantify the impact of traffic density and verbal tasks on takeover performance in highly automated driving. In highly automated vehicles, the driver has to occasionally take over vehicle control when approaching system limits. To ensure safety, the ability of the driver to regain control of the driving task under various driving situations and different driver states needs to be quantified. Seventy-two participants experienced takeover situations requiring an evasive maneuver on a three-lane highway with varying traffic density (zero, 10, and 20 vehicles per kilometer). In a between-subjects design, half of the participants were engaged in a verbal 20-Questions Task, representing speaking on the phone while driving in a highly automated vehicle. The presence of traffic in takeover situations led to longer takeover times and worse takeover quality in the form of shorter time to collision and more collisions. The 20-Questions Task did not influence takeover time but seemed to have minor effects on the takeover quality. For the design and evaluation of human-machine interaction in takeover situations of highly automated vehicles, the traffic state seems to play a major role, compared to the driver state, manipulated by the 20-Questions Task. The present results can be used by developers of highly automated systems to appropriately design human-machine interfaces and to assess the driver's time budget for regaining control. © 2016, Human Factors and Ergonomics Society.
Yajuan, Xiao; Xin, Liang; Zhiyuan, Li
2012-01-01
The patch clamp technique is commonly used in electrophysiological experiments and offers direct insight into ion channel properties through the characterization of ion channel activity. This technique can be used to elucidate the interaction between a drug and a specific ion channel at different conformational states in order to understand the mechanisms of ion channel modulators. The patch clamp technique is regarded as a gold standard for ion channel research; however, it suffers from low throughput and high personnel costs. In the last decade, the development of several automated electrophysiology platforms has greatly increased the screening throughput of whole-cell electrophysiological recordings. New advancements in automated patch clamp systems have aimed to provide high data quality, high content, and high throughput. However, owing to remaining limitations in flexibility and data quality, automated patch clamp systems are not yet capable of replacing manual patch clamp systems in ion channel research. While automated patch clamp systems are useful for screening large numbers of compounds in cell lines that stably express high levels of ion channels, the manual patch clamp technique is still necessary for studying ion channel properties in some research areas and for specific cell types, including primary cells that have mixed cell types and differentiated cells derived from induced pluripotent stem cells (iPSCs) or embryonic stem cells (ESCs). Therefore, further improvements in flexibility with regard to cell types and data quality will broaden the applications of automated patch clamp systems in both academia and industry. PMID:23346269
Costs to Automate Demand Response - Taxonomy and Results from Field Studies and Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piette, Mary A.; Schetrit, Oren; Kiliccote, Sila
During the past decade, the technology to automate demand response (DR) in buildings and industrial facilities has advanced significantly. Automation allows rapid, repeatable, reliable operation. This study focuses on costs for DR automation in commercial buildings, with some discussion of residential buildings and industrial facilities. DR automation technology relies on numerous components, including communication systems, hardware and software gateways, standards-based messaging protocols, controls and integration platforms, and measurement and telemetry systems. This report compares cost data from several DR automation programs and pilot projects, evaluates trends in the cost per unit of DR and kilowatts (kW) available from automated systems, and applies a standard naming convention and classification or taxonomy for system elements. Median costs for the 56 installed automated DR systems studied here are about $200/kW. The deviation around this median is large, with costs in some cases being an order of magnitude greater or less than the median. This wide range is a result of variations in system age, size of load reduction, sophistication, and type of equipment included in the cost analysis. The costs to automate fast DR systems for ancillary services are not fully analyzed in this report because additional research is needed to determine the total cost to install, operate, and maintain these systems. However, recent research suggests that they could be developed at costs similar to those of existing hot-summer DR automation systems. This report considers installation and configuration costs and does not include the costs of owning and operating DR automation systems. Future analysis of the latter costs should include the costs to the building or facility manager as well as to the utility or third-party program manager.
Vandenberghe, V; Goethals, P L M; Van Griensven, A; Meirlaen, J; De Pauw, N; Vanrolleghem, P; Bauwens, W
2005-09-01
During the summer of 1999, two automated water quality measurement stations were installed along the Dender river in Belgium. The variables dissolved oxygen, temperature, conductivity, pH, rain intensity, flow and solar radiation were measured continuously. In this paper these on-line measurement series are presented and interpreted, drawing also on additional measurements and ecological expert knowledge. The purpose was to demonstrate the variability in time and space of the aquatic processes and the consequences of conducting and interpreting discrete measurements for river quality assessment and management. The large fluctuations in the data illustrated the importance of continuous measurements for the complete description and modelling of the biological processes in the river.
NASA Astrophysics Data System (ADS)
Tripathi, K.
2013-01-01
In an automated manual clutch (AMC), a mechatronic system controls the clutch force trajectory through an actuator governed by a control system. The present study identifies relevant characteristics of this trajectory and their effects on driveline dynamics and engagement quality. A new type of force trajectory is identified that gives good engagement quality. This trajectory is not achievable through a conventional clutch control mechanism, but in an AMC a mechatronic system based on electro-hydraulic or electro-mechanical elements can make it feasible. A mechatronic add-on system is presented that can implement the novel force trajectory without replacing the traditional diaphragm-spring clutch in a vehicle with manual transmission.
Zhao, Shanrong; Xi, Li; Quan, Jie; Xi, Hualin; Zhang, Ying; von Schack, David; Vincent, Michael; Zhang, Baohong
2016-01-08
RNA sequencing (RNA-seq), a next-generation sequencing technique for transcriptome profiling, is being increasingly used, in part driven by the decreasing cost of sequencing. Nevertheless, the analysis of the massive amounts of data generated by large-scale RNA-seq remains a challenge. Multiple algorithms pertinent to basic analyses have been developed, and there is an increasing need to automate the use of these tools so as to obtain results in an efficient and user-friendly manner. Increased automation and improved visualization of the results will help make the findings of the analyses readily available to experimental scientists. By combining the best open-source tools developed for RNA-seq data analyses with the most advanced web 2.0 technologies, we have implemented QuickRNASeq, a pipeline for large-scale RNA-seq data analyses and visualization. The QuickRNASeq workflow consists of three main steps. In Step #1, each individual sample is processed, including mapping RNA-seq reads to a reference genome, counting the numbers of mapped reads, quality control of the aligned reads, and SNP (single nucleotide polymorphism) calling. Step #1 is computationally intensive, and can be processed in parallel. In Step #2, the results from individual samples are merged, and an integrated and interactive project report is generated. All analysis results in the report are accessible via a single HTML entry webpage. Step #3 is the data interpretation and presentation step. The rich visualization features implemented here allow end users to interactively explore the results of RNA-seq data analyses, and to gain more insights into RNA-seq datasets. In addition, we used a real-world dataset to demonstrate the simplicity and efficiency of QuickRNASeq in RNA-seq data analyses and interactive visualizations. The seamless integration of automated capabilities with interactive visualizations in QuickRNASeq is not available in other published RNA-seq pipelines. The high degree of automation and interactivity in QuickRNASeq leads to a substantial reduction in the time and effort required prior to further downstream analyses and interpretation of the findings. QuickRNASeq advances primary RNA-seq data analyses to the next level of automation, and is mature for public release and adoption.
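The three-step pattern (parallel per-sample jobs, then a merge) is easy to sketch. The stub below stands in for the real aligner and counting tools; it is not QuickRNASeq's actual code, and the function names and sample IDs are placeholders.

```python
from concurrent.futures import ProcessPoolExecutor

def align_and_count(sample):
    # Step 1 (per sample): map reads, count, QC, call SNPs -- stubbed here
    return {"sample": sample, "mapped_reads": 1_000_000}

def merge_report(results):
    # Step 2: merge per-sample outputs into a single project-level report
    return {r["sample"]: r["mapped_reads"] for r in results}

if __name__ == "__main__":
    samples = ["S1", "S2", "S3"]
    with ProcessPoolExecutor() as pool:       # Step 1 parallelises cleanly
        results = list(pool.map(align_and_count, samples))
    print(merge_report(results))              # Step 3: explore and visualise
```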
Prieto, Sandra P.; Lai, Keith K.; Laryea, Jonathan A.; Mizell, Jason S.; Muldoon, Timothy J.
2016-01-01
Qualitative screening for colorectal polyps via fiber bundle microendoscopy imaging has shown promising results, with studies reporting high rates of sensitivity and specificity, as well as low interobserver variability with trained clinicians. A quantitative image quality control and image feature extraction algorithm (QFEA) was designed to lessen the burden of training and provide objective data for improved clinical efficacy of this method. After a quantitative image quality control step, QFEA extracts field-of-view area, crypt area, crypt circularity, and crypt number per image. To develop and validate this QFEA, a training set of microendoscopy images was collected from freshly resected porcine colon epithelium. The algorithm was then further validated on ex vivo image data collected from eight human subjects, selected from clinically normal appearing regions distant from grossly visible tumor in surgically resected colorectal tissue. QFEA has proven flexible in application to both mosaics and individual images, and its automated crypt detection sensitivity ranges from 71 to 94% despite intensity and contrast variation within the field of view. It also demonstrates the ability to detect and quantify differences in grossly normal regions among different subjects, suggesting the potential efficacy of this approach in detecting occult regions of dysplasia. PMID:27335893
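The kinds of per-crypt features QFEA reports (area, circularity, count) can be sketched with generic region analysis. The segmentation below is a simple Otsu threshold, not the published algorithm, and scikit-image is assumed available; circularity is the standard 4πA/P² measure.

```python
import numpy as np
from skimage import filters, measure

def crypt_features(gray_img, min_area=50):
    mask = gray_img > filters.threshold_otsu(gray_img)
    labels = measure.label(mask)
    feats = []
    for region in measure.regionprops(labels):
        if region.area < min_area:
            continue                      # discard small debris
        circ = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
        feats.append({"area": region.area, "circularity": circ})
    return {"crypt_count": len(feats), "crypts": feats}

# toy usage: one bright disk on a dark background
yy, xx = np.mgrid[:100, :100]
img = ((yy - 50) ** 2 + (xx - 50) ** 2 < 15 ** 2).astype(float)
print(crypt_features(img))
```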
2017-03-01
DOD Major Automated Information Systems: Improvements Can Be Made in Applying Leading Practices for Managing Risk and Testing. Report to Congressional Committees, GAO-17-322. United States Government Accountability Office, March 2017.
ERIC Educational Resources Information Center
Howrey, Mary M.
This study was funded by the Library Services and Construction Act (LSCA) to enable the Illinois School Library Media Association (ISLMA) to plan the automation of the state's school libraries. The research was intended to identify current national programs of interest to ISLMA, identify current automation programs within Illinois library systems,…
Automated sequence analysis and editing software for HIV drug resistance testing.
Struck, Daniel; Wallis, Carole L; Denisov, Gennady; Lambert, Christine; Servais, Jean-Yves; Viana, Raquel V; Letsoalo, Esrom; Bronze, Michelle; Aitken, Sue C; Schuurman, Rob; Stevens, Wendy; Schmit, Jean Claude; Rinke de Wit, Tobias; Perez Bercoff, Danielle
2012-05-01
Access to antiretroviral treatment in resource-limited settings is inevitably paralleled by the emergence of HIV drug resistance. Monitoring treatment efficacy and HIV drug resistance testing are therefore of increasing importance in resource-limited settings. Yet low-cost technologies and procedures suited to the particular context and constraints of such settings are still lacking. The ART-A (Affordable Resistance Testing for Africa) consortium brought together public and private partners to address this issue, with the objective of developing automated sequence analysis and editing software to support high-throughput automated sequencing. The ART-A Software was designed to automatically process and edit ABI chromatograms or FASTA files from HIV-1 isolates. The ART-A Software performs the basecalling, assigns quality values, aligns query sequences against a set reference, infers a consensus sequence, identifies the HIV type and subtype, translates the nucleotide sequence to amino acids, and reports insertions/deletions, premature stop codons, ambiguities and mixed calls. The results can be automatically exported to Excel to identify mutations. Automated analysis was compared to manual analysis using a panel of 1624 PR-RT sequences generated in 3 different laboratories. Discrepancies between manual and automated sequence analysis were 0.69% at the nucleotide level and 0.57% at the amino acid level (668,047 AA analyzed), and discordances at major resistance mutations were recorded in 62 cases (4.83% of differences, 0.04% of all AA) for PR and 171 cases (6.18% of differences, 0.03% of all AA) for RT. The ART-A Software is a time-saving tool for pre-analyzing HIV and viral quasispecies sequences in high-throughput laboratories and for highlighting positions requiring attention. Copyright © 2012 Elsevier B.V. All rights reserved.
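One of the reporting steps above, translation with premature-stop and ambiguity flags, is simple to illustrate. This is a hedged sketch of the idea, not the ART-A code; Biopython is assumed available, and the input sequence is a toy.

```python
from Bio.Seq import Seq

IUPAC_AMBIGUOUS = "RYSWKMBDHVN"   # non-ACGT nucleotide codes (mixed calls)

def report_protein(nt_sequence):
    aa = str(Seq(nt_sequence).translate())
    return {
        "protein": aa,
        "premature_stop": "*" in aa[:-1],   # a stop before the final codon
        "ambiguous_calls": sum(c in IUPAC_AMBIGUOUS for c in nt_sequence.upper()),
    }

print(report_protein("ATGGCATAAGGT"))   # TAA stop codon mid-sequence is flagged
```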
Automated classification of self-grooming in mice using open-source software.
van den Boom, Bastijn J G; Pavlidi, Pavlina; Wolf, Casper J H; Mooij, Adriana H; Willuhn, Ingo
2017-09-01
Manual analysis of behavior is labor intensive and subject to inter-rater variability. Although considerable progress in the automation of analysis has been made, complex behavior such as grooming still lacks satisfactory automated quantification. We trained a freely available, automated classifier, the Janelia Automatic Animal Behavior Annotator (JAABA), to quantify self-grooming duration and number of bouts based on video recordings of SAPAP3 knockout mice (a mouse line that self-grooms excessively) and wild-type animals. We compared the JAABA classifier with human expert observers to test its ability to measure self-grooming in three scenarios: mice in an open field, mice on an elevated plus-maze, and tethered mice in an open field. In each scenario, the classifier identified both grooming and non-grooming with great accuracy and correlated highly with results obtained by human observers. Consistently, the JAABA classifier confirmed previous reports of excessive grooming in SAPAP3 knockout mice. Until now, manual analysis had been regarded as the only valid quantification method for self-grooming. We demonstrate that the JAABA classifier is a valid and reliable scoring tool that is more cost-efficient than manual scoring, easy to use, requires minimal effort, provides high throughput, and prevents inter-rater variability. We introduce the JAABA classifier as an efficient analysis tool for the assessment of rodent self-grooming with expert quality. In our "how-to" instructions, we provide all information necessary to implement behavioral classification with JAABA. Copyright © 2017 Elsevier B.V. All rights reserved.
Development of Automated Image Analysis Software for Suspended Marine Particle Classification
2003-09-30
Samson, Scott (Center for Ocean Technology). Report covering 2003. The objective is to develop automated image analysis software to reduce the effort and time required for manual identification of plankton images.
[Problems with placement and using of automated external defibrillators in Czech Republic].
Olos, Tomás; Bursa, Filip; Gregor, Roman; Holes, David
2011-01-01
The use of automated external defibrillators improves the survival of adults who suffer cardiopulmonary arrest. Automated external defibrillators detect ventricular fibrillation with almost perfect sensitivity and specificity. The authors describe the use of an automated external defibrillator during cardiopulmonary resuscitation in a patient with sudden cardiac arrest during an ice-hockey match. The article also reports on the use of automated external defibrillators in children.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yung, J; Stefan, W; Reeve, D
2015-06-15
Purpose: Phantom measurements allow the performance of magnetic resonance (MR) systems to be evaluated. The American Association of Physicists in Medicine (AAPM) Report No. 100, Acceptance Testing and Quality Assurance Procedures for MR Imaging Facilities, the American College of Radiology (ACR) MR Accreditation Program phantom testing, and the ACR MRI quality control (QC) program documents help to outline specific tests for establishing system performance baselines as well as system stability over time. Analyzing and processing tests from multiple systems can be time-consuming for medical physicists. Besides determining whether tests are within predetermined limits or criteria, monitoring longitudinal trends can also help prevent costly downtime of systems during clinical operation. In this work, a semi-automated QC program was developed to analyze and record measurements in a database that allows easy access to historical data. Methods: Image analysis was performed on 27 different MR systems of 1.5T and 3.0T field strengths from GE and Siemens. Recommended measurements involved the ACR MRI Accreditation Phantom, spherical homogeneous phantoms, and a phantom with a uniform hole pattern. Measurements assessed geometric accuracy and linearity, position accuracy, image uniformity, signal, noise, ghosting, transmit gain, center frequency, and magnetic field drift. The program was designed with open-source tools, employing Linux, Apache, a MySQL database, and the Python programming language for the front and back ends. Results: Processing time for each image is <2 seconds. Figures are produced to show the regions of interest (ROIs) used for analysis. Historical data can be reviewed to compare against previous years and to inspect for trends. Conclusion: An MRI quality assurance and QC program is necessary for maintaining high-quality, ACR-accredited MR programs. A reviewable database of phantom measurements assists medical physicists with processing and monitoring large datasets. Longitudinal data can reveal trends that, although within passing criteria, indicate underlying system issues.
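Two of the uniformity and ghosting metrics such a program plausibly computes follow standard ACR definitions and reduce to a few lines once mean ROI signals are available; the ROI placement logic is omitted here, and the example values are invented.

```python
def percent_integral_uniformity(roi_high_mean, roi_low_mean):
    # ACR PIU = 100 * (1 - (high - low) / (high + low)), from the brightest
    # and darkest small ROIs inside the phantom
    return 100.0 * (1.0 - (roi_high_mean - roi_low_mean) /
                    (roi_high_mean + roi_low_mean))

def ghosting_ratio(top, bottom, left, right, phantom_mean):
    # ACR ghosting ratio from four background ROI means and the large
    # phantom ROI mean
    return abs(((top + bottom) - (left + right)) / (2.0 * phantom_mean))

print(percent_integral_uniformity(1500.0, 1300.0))   # ~92.9, a passing value
```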
Fleischer, Heidi; Ramani, Kinjal; Blitti, Koffi; Roddelkopf, Thomas; Warkentin, Mareike; Behrend, Detlef; Thurow, Kerstin
2018-02-01
Automation systems are well established in industry and in life science laboratories, especially in bioscreening and high-throughput applications. An increasing demand for automation solutions can be seen in the field of analytical measurement in chemical synthesis, quality control, medical and pharmaceutical applications, and research and development. In this study, an automation solution was developed and optimized for the investigation of new biliary endoprostheses (stents), which should reduce clogging after implantation in the human body. The material deposited inside the stents (incrustations) has to be monitored regularly and under identical conditions; the elemental composition is one criterion to be monitored in stent development. The manual procedure was transferred to an automated process including sample preparation, elemental analysis using inductively coupled plasma mass spectrometry (ICP-MS), and data evaluation. For safety reasons, microwave-assisted acid digestion was executed outside of the automation system. The performance of the automated process was determined and validated. The measurement results and processing times were compared for the manual and automated procedures. Finally, real samples of stent incrustations and pig bile were analyzed using the automation system.
33 CFR 161.21 - Automated reporting.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) PORTS AND WATERWAYS SAFETY VESSEL TRAFFIC MANAGEMENT Vessel Movement Reporting System § 161.21 Automated... Centers denoted in Table 161.12(c) of this part. (b) Should an AIS become non-operational, while or prior...
Intelligent Processing Equipment Projects at DLA
NASA Technical Reports Server (NTRS)
Obrien, Donald F.
1992-01-01
The Defense Logistics Agency is successfully incorporating Intelligent Processing Equipment (IPE) into each of its Manufacturing Technology thrust areas. Several IPE applications are addressed in the manufacturing of two 'soldier support' items: combat rations and military apparel. In combat rations, in-line sensors for food processing are being developed or modified from other industries. In addition, many process controls are being automated to achieve better quality and to gain higher user (soldier) acceptance. IPE applications in military apparel include: in-process quality controls for identification of sewing defects, use of robots in the manufacture of shirt collars, and automated handling of garments for pressing.
NASA Astrophysics Data System (ADS)
Korshunov, G. I.; Petrushevskaya, A. A.; Lipatnikov, V. A.; Smirnova, M. S.
2018-03-01
The quality assurance strategy for electronics is presented as being of prime importance. To assure quality, the sequence of production processes is considered and modeled as a Markov chain. The improvement is distinguished by simple design-for-manufacturing database means suitable for future step-by-step development. Phased automation of electronics design and digital manufacturing is proposed. MatLab modelling results showed an increase in effectiveness. New tools and software should be more effective. A primary digital model is proposed to represent the product across the process sequence, from individual processes up to the whole life cycle.
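To make the Markov chain idea concrete, here is a minimal sketch of a pass/rework/scrap process sequence modeled as an absorbing chain; the transition probabilities are illustrative, not taken from the paper, and the example uses NumPy rather than MatLab.

```python
import numpy as np

# States: 0 = stage A, 1 = stage B, 2 = shipped (absorbing), 3 = scrap (absorbing)
P = np.array([
    [0.05, 0.90, 0.00, 0.05],   # A: 5% rework at A, 90% pass to B, 5% scrap
    [0.00, 0.05, 0.90, 0.05],   # B: 5% rework at B, 90% shipped, 5% scrap
    [0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 1.00],
])

Q, R = P[:2, :2], P[:2, 2:]
N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix: expected stage visits
absorb = N @ R                      # absorption probabilities per start state
print(absorb[0])                    # P(shipped), P(scrap) for a unit entering at A
```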
E-Services quality assessment framework for collaborative networks
NASA Astrophysics Data System (ADS)
Stegaru, Georgiana; Danila, Cristian; Sacala, Ioan Stefan; Moisescu, Mihnea; Mihai Stanescu, Aurelian
2015-08-01
In a globalised networked economy, collaborative networks (CNs) are formed to take advantage of new business opportunities. Collaboration involves shared resources and capabilities, such as e-Services that can be dynamically composed to automate CN participants' business processes. Quality is essential for the success of business process automation. Current approaches mostly focus on quality of service (QoS)-based service selection and ranking algorithms, overlooking the process of service composition, which requires interoperable, adaptable and secure e-Services to ensure seamless collaboration, data confidentiality and integrity. Lack of assessment of these quality attributes can result in e-Service composition failure. The quality of e-Service composition relies on the quality of each e-Service and on the quality of the composition process. Therefore, there is a need for a framework that addresses quality from both views: product and process. We propose a quality of e-Service composition (QoESC) framework for quality assessment of e-Service composition for CNs, which comprises a quality model for e-Service evaluation and guidelines for the quality of the e-Service composition process. We implemented a prototype considering a simplified telemedicine use case involving a CN in the e-Healthcare domain. To validate the proposed quality-driven framework, we analysed service composition reliability with and without using the proposed framework.
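As a point of contrast with the framework above, the QoS-based selection that current approaches rely on often reduces to a weighted-sum ranking. The sketch below shows that baseline; the attributes, weights and service names are illustrative, and this is not the QoESC model itself.

```python
def qos_score(service, weights):
    """Weighted sum over attribute scores already normalised to [0, 1]."""
    return sum(w * service[attr] for attr, w in weights.items())

weights = {"interoperability": 0.4, "security": 0.35, "adaptability": 0.25}
services = {
    "svcA": {"interoperability": 0.9, "security": 0.6, "adaptability": 0.9},
    "svcB": {"interoperability": 0.7, "security": 0.9, "adaptability": 0.7},
}
ranked = sorted(services, key=lambda s: qos_score(services[s], weights),
                reverse=True)
print(ranked)   # ['svcA', 'svcB'] with these toy numbers
```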
Mboya, Dominick; Mshana, Christopher; Kessy, Flora; Alba, Sandra; Lengeler, Christian; Renggli, Sabine; Vander Plaetse, Bart; Mohamed, Mohamed A; Schulze, Alexander
2016-10-13
Assessing quality of health services, for example through supportive supervision, is essential for strengthening healthcare delivery. Most systematic health facility assessment mechanisms, however, are not suitable for routine supervision. The objective of this study is to describe a quality assessment methodology using an electronic format that can be embedded in supervision activities and conducted by council health staff. An electronic Tool to Improve Quality of Healthcare (e-TIQH) was developed to assess the quality of primary healthcare provision. The e-TIQH contains six sub-tools, each covering one quality dimension: infrastructure and equipment of the facility, its management and administration, job expectations, clinical skills of the staff, staff motivation and client satisfaction. As part of supportive supervision, council health staff conduct quality assessments in all primary healthcare facilities in a given council, including observation of clinical consultations and exit interviews with clients. Using a hand-held device, assessors enter data and view results in real time through automated data analysis, permitting immediate feedback to health workers. Based on the results, quality gaps and potential measures to address them are jointly discussed and actions plans developed. For illustrative purposes, preliminary findings from e-TIQH application are presented from eight councils of Tanzania for the period 2011-2013, with a quality score <75 % classed as 'unsatisfactory'. Staff motivation (<50 % in all councils) and job expectations (≤50 %) scored lowest of all quality dimensions at baseline. Clinical practice was unsatisfactory in six councils, with more mixed results for availability of infrastructure and equipment, and for administration and management. In contrast, client satisfaction scored surprisingly high. Over time, each council showed a significant overall increase of 3-7 % in mean score, with the most pronounced improvements in staff motivation and job expectations. Given its comprehensiveness, convenient handling and automated statistical reports, e-TIQH enables council health staff to conduct systematic quality assessments. Therefore e-TIQH may not only contribute to objectively identifying quality gaps, but also to more evidence-based supervision. E-TIQH also provides important information for resource planning. Institutional and financial challenges for implementing e-TIQH on a broader scale need to be addressed.
Asou, Hiroya; Imada, N; Sato, T
2010-06-20
On coronary MR angiography (CMRA), cardiac motion degrades image quality. To improve image quality, detecting cardiac motion, especially the motion of each coronary artery, is very important. Usually, scan delay and duration are determined manually by the operator. We developed a new evaluation method to calculate the static time of each coronary artery automatically. First, coronary cine MRI was acquired at a level about 3 cm below the aortic valve (80 images per R-R interval). The chronological change of the signal in each pixel of the images was evaluated with a Fourier transformation. Noise reduction by subtraction and extraction processing was performed. To extract strongly moving structures such as the coronary arteries, morphological filtering and labeling were added. Using this image processing, individual coronary motion was extracted and the static time of each coronary artery was calculated automatically. We compared the ordinary manual method with the new automated method in 10 healthy volunteers. Coronary static times calculated with our method were shorter than those of the ordinary manual method, and scan time became about 10% longer than with the ordinary method. Image quality was improved with our method. Our automated detection method for coronary static time, based on chronological Fourier transformation, has the potential to improve the image quality of CMRA with easy processing.
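The core of the method, a per-pixel temporal Fourier analysis of the cine series, can be sketched briefly. The high-frequency cut-off and the toy data below are assumptions, and the noise-reduction and morphological steps of the paper are omitted.

```python
import numpy as np

def motion_map(cine, hf_start=5):
    """cine: array (T, H, W) of frames across one averaged R-R interval.
    Returns high-frequency temporal spectral energy per pixel, so that
    strongly moving structures (e.g. coronary arteries) score high."""
    spec = np.abs(np.fft.rfft(cine, axis=0))   # temporal spectrum per pixel
    return spec[hf_start:].sum(axis=0)         # energy above low-order terms

rng = np.random.default_rng(0)
cine = rng.normal(size=(80, 64, 64))           # 80 frames per R-R (toy data)
print(motion_map(cine).shape)                  # (64, 64) motion map
```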
Rigo, Vincent; Graas, Estelle; Rigo, Jacques
2012-07-01
Selected optimal respiratory cycles should allow calculation of respiratory mechanics parameters focusing on patient-ventilator interaction. New computer software that automatically selects optimal breaths, and the respiratory mechanics derived from those cycles, are evaluated. Retrospective study. University level III neonatal intensive care unit. Ten-minute synchronized intermittent mandatory ventilation and assist/control ventilation recordings from ten newborns. The ventilator provided respiratory mechanics data (ventilator respiratory cycles) every 10 secs. Pressure, flow, and volume waves and pressure-volume, pressure-flow, and volume-flow loops were reconstructed from continuous pressure-volume recordings. Visual assessment determined assisted leak-free optimal respiratory cycles (selected respiratory cycles). The new software graded the quality of cycles (automated respiratory cycles). Respiratory mechanics values were derived from both sets of optimal cycles. We evaluated quality selection and compared mean values and their variability according to ventilatory mode and respiratory mechanics provenance. To assess discriminating power, all 45 "t" values obtained from interpatient comparisons were compared for each respiratory mechanics parameter. A total of 11,724 breaths were evaluated. Agreement between the automated and selected respiratory cycle selections was high: 88% of maximal κ with linear weighting. Specificity and positive predictive values were 0.98 and 0.96, respectively. Averaged values were similar between automated and ventilator respiratory cycles. C20/C alone was markedly decreased in automated respiratory cycles (1.27 ± 0.37 vs. 1.81 ± 0.67). The apparent similarity in tidal volume disappeared in assist/control: automated respiratory cycle tidal volume (4.8 ± 1.0 mL/kg) was significantly lower than that of ventilator respiratory cycles (5.6 ± 1.8 mL/kg). Coefficients of variation decreased for all automated respiratory cycle parameters in all infants. "t" values from automated respiratory cycle data were two to three times higher than those from ventilator respiratory cycles. Automated selection is highly specific. Automated respiratory cycles best reflect the interaction of both ventilator and patient. Improving the discriminating power of ventilator monitoring will likely help in assessing disease status and following trends. Averaged parameters derived from automated respiratory cycles are more precise and could be displayed by ventilators to improve real-time fine tuning of ventilator settings.
Howat, William J; Daley, Frances; Zabaglo, Lila; McDuffus, Leigh‐Anne; Blows, Fiona; Coulson, Penny; Raza Ali, H; Benitez, Javier; Milne, Roger; Brenner, Herman; Stegmaier, Christa; Mannermaa, Arto; Chang‐Claude, Jenny; Rudolph, Anja; Sinn, Peter; Couch, Fergus J; Tollenaar, Rob A.E.M.; Devilee, Peter; Figueroa, Jonine; Sherman, Mark E; Lissowska, Jolanta; Hewitt, Stephen; Eccles, Diana; Hooning, Maartje J; Hollestelle, Antoinette; WM Martens, John; HM van Deurzen, Carolien; Investigators, kConFab; Bolla, Manjeet K; Wang, Qin; Jones, Michael; Schoemaker, Minouk; Broeks, Annegien; van Leeuwen, Flora E; Van't Veer, Laura; Swerdlow, Anthony J; Orr, Nick; Dowsett, Mitch; Easton, Douglas; Schmidt, Marjanka K; Pharoah, Paul D; Garcia‐Closas, Montserrat
2016-01-01
Automated methods are needed to facilitate high‐throughput and reproducible scoring of Ki67 and other markers in breast cancer tissue microarrays (TMAs) in large‐scale studies. To address this need, we developed an automated protocol for Ki67 scoring and evaluated its performance in studies from the Breast Cancer Association Consortium. We utilized 166 TMAs containing 16,953 tumour cores representing 9,059 breast cancer cases, from 13 studies, with information on other clinical and pathological characteristics. TMAs were stained for Ki67 using standard immunohistochemical procedures, and scanned and digitized using the Ariol system. An automated algorithm was developed for the scoring of Ki67, and scores were compared to computer-assisted visual (CAV) scores in a subset of 15 TMAs in a training set. We also assessed the correlation between automated Ki67 scores and other clinical and pathological characteristics. Overall, we observed good discriminatory accuracy (AUC = 85%) and good agreement (kappa = 0.64) between the automated and CAV scoring methods in the training set. The performance of the automated method varied by TMA (kappa range = 0.37–0.87) and study (kappa range = 0.39–0.69). The automated method performed better in satisfactory cores (kappa = 0.68) than suboptimal (kappa = 0.51) cores (p‐value for comparison = 0.005); and among cores with higher total nuclei counted by the machine (4,000–4,500 cells: kappa = 0.78) than those with lower counts (50–500 cells: kappa = 0.41; p‐value = 0.010). Among the 9,059 cases in this study, the correlations between automated Ki67 and clinical and pathological characteristics were found to be in the expected directions. Our findings indicate that automated scoring of Ki67 can be an efficient method to obtain good quality data across large numbers of TMAs from multicentre studies. However, robust algorithm development and rigorous pre‐ and post‐analytical quality control procedures are necessary in order to ensure satisfactory performance. PMID:27499923
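The agreement statistic used throughout the abstract, Cohen's kappa between automated and visual categories, is a one-liner with scikit-learn (assumed available); the labels below are toy data, not study results.

```python
from sklearn.metrics import cohen_kappa_score

visual    = [1, 0, 1, 1, 0, 0, 1, 0]   # CAV scoring, 1 = high Ki67 (toy labels)
automated = [1, 0, 1, 0, 0, 0, 1, 0]   # automated algorithm (toy labels)
print(cohen_kappa_score(visual, automated))   # 0.75 for this toy example
```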
Cho, Yu Kyung; Moon, Jeong Seop; Han, Dong Su; Lee, Yong Chan; Kim, Yeol; Park, Bo Young; Chung, Il-Kwun; Kim, Jin-Oh; Im, Jong Pil; Cha, Jae Myung; Kim, Hyun Gun; Lee, Sang Kil; Lee, Hang Lak; Jang, Jae Young; Kim, Eun Sun; Jung, Yunho; Moon, Chang Mo
2016-11-01
In Korea, the nationwide gastric cancer screening program recommends biennial screening for individuals aged 40 years or older by way of either an upper gastrointestinal series or endoscopy. The national endoscopic quality assessment (QA) program began assessing endoscopy in medical institutions in 2009. We aimed to assess the effect, burden, and cost of the QA program from the viewpoint of medical institutions. We surveyed the staff of institutional endoscopy units via e-mail. Staff members from 67 institutions replied. Most doctors were endoscopic specialists. They responded as to whether the QA program raised awareness of endoscopic quality (93%) or improved endoscopic practice (40%). The percentages of responders who reported improvements in the diagnosis of gastric cancer, the qualifications of endoscopists, the quality of facilities and equipment, endoscopic procedure, and endoscopic reprocessing were 69%, 60%, 66%, 82%, and 75%, respectively. Regarding reprocessing, many staff members reported that they had bought new automated endoscope reprocessors (3%), used more disinfectant (34%), washed endoscopes longer (28%), reduced the number of endoscopies performed to adhere to reprocessing guidelines (9%), and created their own quality education programs (59%). Many responders said they felt that QA was associated with some degree of burden (48%), especially the financial burden caused by purchasing new equipment. Reasonable quality standards (45%) and incentives (38%) were considered important to the success of the QA program. Endoscopic quality has improved after 5 years of the mandatory endoscopic QA program.
Automated standardization technique for an inductively-coupled plasma emission spectrometer
Garbarino, John R.; Taylor, Howard E.
1982-01-01
The manifold assembly subsystem described permits real-time computer-controlled standardization and quality control of a commercial inductively-coupled plasma atomic emission spectrometer. The manifold assembly consists of a branch-structured glass manifold, a series of microcomputer-controlled solenoid valves, and a reservoir for each standard. Automated standardization involves selective actuation of each solenoid valve, which permits a specific mixed standard solution to be pumped to the nebulizer of the spectrometer. Quality control is based on the evaluation of results obtained for a mixed standard containing 17 analytes, which is measured periodically with unknown samples. An inaccurate standard evaluation triggers restandardization of the instrument according to a predetermined protocol. Interaction of the computer-controlled manifold assembly hardware with the spectrometer system is outlined. The automated standardization system is evaluated against the manual procedure with respect to reliability, simplicity, flexibility, and efficiency. © 1982.
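The control logic described, measure the mixed standard periodically and restandardize on failure, fits a short sketch. The tolerance, the QC interval, and the function names below are assumptions for illustration, not values from the paper.

```python
TOLERANCE = 0.10   # assumed 10% relative acceptance limit per analyte

def qc_ok(measured, expected, tol=TOLERANCE):
    """True if every analyte in the mixed standard is recovered within tol."""
    return all(abs(m - e) / e <= tol for m, e in zip(measured, expected))

def analyze_batch(samples, measure, restandardize, standard_conc, qc_every=10):
    results = []
    for i, sample in enumerate(samples):
        # periodically pump the mixed standard and check recovery
        if i % qc_every == 0 and not qc_ok(measure(standard_conc), standard_conc):
            restandardize()   # rerun the predetermined standardization protocol
        results.append(measure(sample))
    return results
```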
Mansberger, Steven L; Menda, Shivali A; Fortune, Brad A; Gardiner, Stuart K; Demirel, Shaban
2017-02-01
To characterize the error of optical coherence tomography (OCT) measurements of retinal nerve fiber layer (RNFL) thickness when using automated retinal layer segmentation algorithms without manual refinement. Cross-sectional study. This study was set in a glaucoma clinical practice, and the dataset included 3490 scans from 412 eyes of 213 individuals with a diagnosis of glaucoma or glaucoma suspect. We used spectral domain OCT (Spectralis) to measure RNFL thickness in a 6-degree peripapillary circle, and exported the native "automated segmentation only" results. In addition, we exported the results after "manual refinement" to correct errors in the automated segmentation of the anterior (internal limiting membrane) and the posterior boundary of the RNFL. Our outcome measures included differences in RNFL thickness and glaucoma classification (i.e., normal, borderline, or outside normal limits) between scans with automated segmentation only and scans using manual refinement. Automated segmentation only resulted in a thinner global RNFL thickness (1.6 μm thinner, P < .001) when compared to manual refinement. When adjusted by operator, a multivariate model showed increased differences with decreasing RNFL thickness (P < .001), decreasing scan quality (P < .001), and increasing age (P < .03). Manual refinement changed 298 of 3486 scans (8.5%) to a different global glaucoma classification, wherein 146 of 617 (23.7%) borderline classifications became normal. Superior and inferior temporal clock hours had the largest differences. Automated segmentation without manual refinement resulted in reduced global RNFL thickness and overestimated the classification of glaucoma. Differences increased in eyes with thinner RNFL, older age, and decreased scan quality. Operators should inspect and manually refine OCT retinal layer segmentation when assessing RNFL thickness in the management of patients with glaucoma. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, J; Shi, F; Hrycushko, B
2015-06-15
Purpose: For tandem and ovoid (T&O) HDR brachytherapy in our clinic, it is required that the planning physicist manually capture ∼10 images during planning, perform a secondary dose calculation and generate a report, combine them into a single PDF document, and upload it to a record-and-verify system to prove to an independent plan checker that the case was planned correctly. Not only does this slow down the already time-consuming clinical workflow, the PDF document also limits the number of parameters that can be checked. To solve these problems, we have developed a web-based automatic quality assurance (QA) program. Methods: We set up a QA server accessible through a web interface. A T&O plan and CT images are exported as DICOM-RT files and uploaded to the server. The software checks 13 geometric features, e.g. whether the dwell positions are reasonable, and 10 dosimetric features, e.g. secondary dose calculations via the TG-43 formalism and D2cc to critical structures. A PDF report is automatically generated with errors and potential issues highlighted. It also contains images showing important geometric and dosimetric aspects to prove the plan was created following standard guidelines. Results: The program has been clinically implemented in our clinic. In each of the 58 T&O plans we tested, a 14-page QA report was automatically generated. It took ∼45 sec to export the plan and CT images and ∼30 sec to perform the QA tests and generate the report. In contrast, our manual QA document preparation took on average ∼7 minutes under optimal conditions and up to 20 minutes when mistakes were made during the document assembly. Conclusion: We have tested the efficiency and effectiveness of an automated process for treatment plan QA of HDR T&O cases. This software was shown to improve the workflow compared to our conventional manual approach.
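As one concrete illustration of the kind of geometric check described, here is a minimal sketch, not the authors' software, that reads a DICOM RT plan with pydicom and flags missing dwell control points or an unexpected channel count. Attribute names follow the DICOM RT Plan brachy module; real plans vary by treatment planning system, and the channel limit is an assumed clinic convention.

```python
import pydicom

def check_dwell_geometry(plan_path: str, max_channels: int = 3) -> list:
    """Return a list of human-readable issues found in a brachy RT plan."""
    plan = pydicom.dcmread(plan_path)
    issues = []
    for setup in plan.ApplicationSetupSequence:
        channels = setup.ChannelSequence
        if len(channels) > max_channels:
            issues.append(f"Unexpected channel count: {len(channels)}")
        for ch in channels:
            # A channel without control points has no defined dwell positions.
            if not getattr(ch, "BrachyControlPointSequence", None):
                issues.append(f"Channel {ch.ChannelNumber}: no dwell control points")
    return issues
```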
Jasuja, Guneet K; Reisman, Joel I; Miller, Donald R; Berlowitz, Dan R; Hylek, Elaine M; Ash, Arlene S; Ozonoff, Al; Zhao, Shibei; Rose, Adam J
2013-01-01
Identifying major bleeding is fundamental to assessing the outcomes of anticoagulation therapy. This drives the need for a credible implementation in automated data of the International Society on Thrombosis and Haemostasis (ISTH) definition of major bleeding. We studied 102,395 patients who received 158,511 person-years of warfarin treatment from the Veterans Health Administration (VA) between 10/1/06 and 9/30/08. We constructed a list of ICD-9-CM codes of "candidate" bleeding events. Each candidate event was identified as a major hemorrhage if it fulfilled one of four criteria: 1) associated with death within 30 days; 2) bleeding in a critical anatomic site; 3) associated with a transfusion; or 4) coded as the event that precipitated or was responsible for the majority of an inpatient hospitalization. This definition classified 11,240 (15.8%) of 71,338 candidate events as major hemorrhage. Typically, events more likely to be severe were retained at higher rates than those less likely to be severe. For example, Diverticula of Colon with Hemorrhage (562.12) and Hematuria (599.7) were retained 46% and 4% of the time, respectively. Major, intracranial, and fatal hemorrhage were identified at rates comparable to those found in randomized clinical trials, although higher than those reported in observational studies: 4.73, 1.29, and 0.41 per 100 patient-years, respectively. We describe here a workable definition for identifying major hemorrhagic events from large automated datasets. This method of identifying major bleeding may have applications for quality measurement, quality improvement, and comparative effectiveness research. Copyright © 2012 Elsevier Ltd. All rights reserved.
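The four-criteria rule lends itself to a simple implementation. A hedged sketch, not the VA code; the critical-site list and event fields are illustrative assumptions:

```python
from dataclasses import dataclass

CRITICAL_SITES = {"intracranial", "intraspinal", "intraocular", "pericardial"}  # illustrative

@dataclass
class CandidateEvent:
    icd9_code: str
    died_within_30_days: bool
    anatomic_site: str
    transfusion: bool
    primary_reason_for_admission: bool

def is_major_hemorrhage(event: CandidateEvent) -> bool:
    """An event is classified as major hemorrhage if it meets any one of the four criteria."""
    return (event.died_within_30_days
            or event.anatomic_site in CRITICAL_SITES
            or event.transfusion
            or event.primary_reason_for_admission)
```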
Lacson, Ronilda; O'Connor, Stacy D; Andriole, Katherine P; Prevedello, Luciano M; Khorasani, Ramin
2014-11-01
Communicating critical results of diagnostic imaging procedures is a national patient safety goal. The purposes of this study were to describe the system architecture and design of Alert Notification of Critical Results (ANCR), an automated system designed to facilitate communication of critical imaging results between care providers; to report providers' satisfaction with ANCR; and to compare radiologists' and ordering providers' attitudes toward ANCR. The design decisions made for each step in the alert communication process, which includes user authentication, alert creation, alert communication, alert acknowledgment and management, alert reminder and escalation, and alert documentation, are described. To assess attitudes toward ANCR, internally developed and validated surveys were administered to all radiologists (n = 320) and ordering providers (n = 4323) who sent or received alerts 3 years after ANCR implementation. The survey response rates were 50.4% for radiologists and 36.1% for ordering providers. Ordering providers were generally dissatisfied with the training received for use of ANCR and with access to technical support. Radiologists were more satisfied with documenting critical result communication (61.1% vs 43.2%; p = 0.0001) and tracking critical results (51.6% vs 35.1%; p = 0.0003) than were ordering providers. Both groups agreed use of ANCR reduces medical errors and improves the quality of patient care. Use of ANCR enables automated communication of critical test results. The survey results confirm overall provider satisfaction with ANCR but highlight the need for improved training strategies for large numbers of geographically dispersed ordering providers. Future enhancements beyond acknowledging receipt of critical results are needed to help ensure timely and appropriate follow-up of critical results to improve quality and patient safety.
Integrating automated support for a software management cycle into the TAME system
NASA Technical Reports Server (NTRS)
Sunazuka, Toshihiko; Basili, Victor R.
1989-01-01
Software managers are interested in the quantitative management of software quality, cost and progress. An integrated software management methodology, which can be applied throughout the software life cycle for any number of purposes, is required. The TAME (Tailoring A Measurement Environment) methodology is based on the improvement paradigm and the goal/question/metric (GQM) paradigm. This methodology helps generate a software engineering process and measurement environment based on the project characteristics. The SQMAR (software quality measurement and assurance technology) is a software quality metric system and methodology applied to the development processes. It is based on the feed-forward control principle. Quality target setting is carried out before the plan-do-check-action activities are performed. These methodologies are integrated to realize goal-oriented measurement, process control and visual management. A metric setting procedure based on the GQM paradigm, a management system called the software management cycle (SMC), and its application to a case study based on NASA/SEL data are discussed. The expected effects of SMC are quality improvement, managerial cost reduction, accumulation and reuse of experience, and a highly visual management reporting system.
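To make the GQM structure concrete, a small illustrative sketch (not from the paper) of how a management goal refines into questions, each answered by measurable metrics:

```python
# One goal, refined into questions, each answered by concrete metrics.
gqm = {
    "goal": "Improve delivered software quality",
    "questions": [
        {"question": "Is defect density decreasing across releases?",
         "metrics": ["defects per KLOC", "defects found per review hour"]},
        {"question": "Is rework cost under control?",
         "metrics": ["rework hours / total project hours"]},
    ],
}
```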
ERIC Educational Resources Information Center
Wind, Stefanie A.; Wolfe, Edward W.; Engelhard, George, Jr.; Foltz, Peter; Rosenstein, Mark
2018-01-01
Automated essay scoring engines (AESEs) are becoming increasingly popular as an efficient method for performance assessments in writing, including many language assessments that are used worldwide. Before they can be used operationally, AESEs must be "trained" using machine-learning techniques that incorporate human ratings. However, the…
Biomek Cell Workstation: A Variable System for Automated Cell Cultivation.
Lehmann, R; Severitt, J C; Roddelkopf, T; Junginger, S; Thurow, K
2016-06-01
Automated cell cultivation is an important tool for simplifying routine laboratory work. Automated methods are independent of the skill level and daily condition of laboratory staff and deliver constant quality and performance. The Biomek Cell Workstation was configured as a flexible and compatible system. The modified Biomek Cell Workstation enables the cultivation of adherent and suspension cells. Until now, no commercially available system enabled the automated handling of both types of cells in one system. In particular, the automated cultivation of suspension cells in this form has not been published. Cell counts and viabilities were nonsignificantly decreased for cells cultivated in AutoFlasks with automated handling. WST-1 proliferation assays comparing manual and automated bioscreening showed nonsignificantly lower proliferation of automatically disseminated cells, generally with lower standard errors. The disseminated suspension cell lines showed different degrees of proliferation, in descending order: Jurkat cells proliferated most, followed by SEM and Molt4, with RS4 cells having the lowest proliferation. In this respect, we successfully disseminated and screened suspension cells in an automated way. The automated cultivation and dissemination of a variety of suspension cells can replace the manual method. © 2015 Society for Laboratory Automation and Screening.
Proceedings of the Second Texas Conference on Library Automation (Houston, March 27, 1969).
ERIC Educational Resources Information Center
Corbin, John B., Ed.
Four papers are included in these proceedings. The first three discuss specific on-going programs, including details of operation: (1) "Automation of Serials," by Shula Schwartz and Patricia A. Bottalico, reports a serials records automation at Texas Instruments Inc., Dallas, Texas; (2) "From Texana to Real-Time Automation," by…
DOT National Transportation Integrated Search
2014-07-01
Within the context of automation Levels 2 and 3, this report documents the proceedings from a literature review of key human factors studies related to automated vehicle operations. This document expands and updates the results...
ERIC Educational Resources Information Center
Karp, William
The 74th Illinois General Assembly created the Illinois Commission on Automation and Technological Progress to study and analyze the economic and social effects of automation and other technological changes on industry, commerce, agriculture, education, manpower, and society in Illinois. Commission members visited industrial plants and business…
Automated enforcement : a compendium of worldwide evaluations of results
DOT National Transportation Integrated Search
2007-03-14
Powerpoint presentation of the report "Automated enforcement : a compendium of worldwide evaluations of results". This compendium details automated enforcement systems (AES) implemented around the world and characterizes the safety impacts of such de...
Kim, Brian J; Merchant, Madhur; Zheng, Chengyi; Thomas, Anil A; Contreras, Richard; Jacobsen, Steven J; Chien, Gary W
2014-12-01
Natural language processing (NLP) software programs have been widely developed to transform complex free text into simplified organized data. Potential applications in the field of medicine include automated report summaries, physician alerts, patient repositories, electronic medical record (EMR) billing, and quality metric reports. Despite these prospects and the recent widespread adoption of EMR, NLP has been relatively underutilized. The objective of this study was to evaluate the performance of an internally developed NLP program in extracting select pathologic findings from radical prostatectomy specimen reports in the EMR. An NLP program was generated by a software engineer to extract key variables from prostatectomy reports in the EMR within our healthcare system, which included the TNM stage, Gleason grade, presence of a tertiary Gleason pattern, histologic subtype, size of dominant tumor nodule, seminal vesicle invasion (SVI), perineural invasion (PNI), angiolymphatic invasion (ALI), extracapsular extension (ECE), and surgical margin status (SMS). The program was validated by comparing NLP results to a gold standard compiled by two blinded manual reviewers for 100 random pathology reports. NLP demonstrated 100% accuracy for identifying the Gleason grade, presence of a tertiary Gleason pattern, SVI, ALI, and ECE. It also demonstrated near-perfect accuracy for extracting histologic subtype (99.0%), PNI (98.9%), TNM stage (98.0%), SMS (97.0%), and dominant tumor size (95.7%). The overall accuracy of NLP was 98.7%. NLP generated a result in <1 second, whereas the manual reviewers averaged 3.2 minutes per report. This novel program demonstrated high accuracy and efficiency identifying key pathologic details from the prostatectomy report within an EMR system. NLP has the potential to assist urologists by summarizing and highlighting relevant information from verbose pathology reports. It may also facilitate future urologic research through the rapid and automated creation of large databases.
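Extraction of this kind is typically rule-based. A minimal sketch, not the authors' program, showing how one variable (the Gleason score) might be pulled from free text with a regular expression; the pattern is an illustrative assumption, and real reports require many more phrasing variants:

```python
import re

GLEASON_RE = re.compile(r"gleason\s+(?:score\s+)?(\d)\s*\+\s*(\d)", re.IGNORECASE)

def extract_gleason(report_text: str):
    """Return primary/secondary/total Gleason grade, or None if not found."""
    m = GLEASON_RE.search(report_text)
    if m is None:
        return None
    primary, secondary = int(m.group(1)), int(m.group(2))
    return {"primary": primary, "secondary": secondary, "total": primary + secondary}

print(extract_gleason("FINAL DIAGNOSIS: ... Gleason score 3 + 4 = 7 ..."))
# -> {'primary': 3, 'secondary': 4, 'total': 7}
```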
DOT National Transportation Integrated Search
2000-05-01
This report presents the procedures involved in the research, design, construction, and testing of an Automated Road Closure Gate. The current road closure gates used in South Dakota are often unsafe and difficult to operate. This report will assist ...
Automated verification of flight software. User's manual
NASA Technical Reports Server (NTRS)
Saib, S. H.
1982-01-01
AVFS (Automated Verification of Flight Software), a collection of tools for analyzing source programs written in FORTRAN and AED, is documented. The quality and the reliability of flight software are improved by: (1) indented listings of source programs, (2) static analysis to detect inconsistencies in the use of variables and parameters, (3) automated documentation, (4) instrumentation of source code, (5) retesting guidance, (6) analysis of assertions, (7) symbolic execution, (8) generation of verification conditions, and (9) simplification of verification conditions. Use of AVFS in the verification of flight software is described.
Automated Absorber Attachment for X-ray Microcalorimeter Arrays
NASA Technical Reports Server (NTRS)
Moseley, S.; Allen, Christine; Kilbourne, Caroline; Miller, Timothy M.; Costen, Nick; Schulte, Eric; Moseley, Samuel J.
2007-01-01
Our goal is to develop a method for the automated attachment of large numbers of absorber tiles to large format detector arrays. This development includes the fabrication of high quality, closely spaced HgTe absorber tiles that are properly positioned for pick-and-place by our FC150 flip chip bonder. The FC150 also transfers the appropriate minute amount of epoxy to the detectors for permanent attachment of the absorbers. The success of this development will replace an arduous, risky and highly manual task with a reliable, high-precision automated process.
Russi, Silvia; Song, Jinhu; McPhillips, Scott E.; ...
2016-02-24
The Stanford Automated Mounter System, a system for mounting and dismounting cryo-cooled crystals, has been upgraded to increase the throughput of samples on the macromolecular crystallography beamlines at the Stanford Synchrotron Radiation Lightsource. This upgrade speeds up robot maneuvers, reduces the heating/drying cycles, pre-fetches samples and adds an air-knife to remove frost from the gripper arms. As a result, sample pin exchange during automated crystal quality screening now takes about 25 s, five times faster than before this upgrade.
Impact of Automation on Drivers' Performance in Agricultural Semi-Autonomous Vehicles.
Bashiri, B; Mann, D D
2015-04-01
Drivers' inadequate mental workload has been reported as one of the negative effects of driving assistant systems and in-vehicle automation. The increasing trend of automation in agricultural vehicles raises some concerns about drivers' mental workload in such vehicles. Thus, a human factors perspective is needed to identify the consequences of such automated systems. In this simulator study, the effects of vehicle steering task automation (VSTA) and implement control and monitoring task automation (ICMTA) were investigated using a tractor-air seeder system as a case study. Two performance parameters (reaction time and accuracy of actions) were measured to assess drivers' perceived mental workload. Experiments were conducted using the tractor driving simulator (TDS) located in the Agricultural Ergonomics Laboratory at the University of Manitoba. Study participants were university students with tractor driving experience. According to the results, reaction time and number of errors made by drivers both decreased as the automation level increased. Correlations were found among performance parameters and subjective mental workload reported by the drivers.
ERIC Educational Resources Information Center
Hull, Daniel M.; Lovett, James E.
This task analysis report for the Robotics/Automated Systems Technician (RAST) curriculum project first provides a RAST job description. It then discusses the task analysis, including the identification of tasks, the grouping of tasks according to major areas of specialty, and the comparison of the competencies to existing or new courses to…
Automated monitoring of recovered water quality
NASA Technical Reports Server (NTRS)
Misselhorn, J. E.; Hartung, W. H.; Witz, S. W.
1974-01-01
A laboratory prototype water quality monitoring system provides automatic online monitoring of the chemical, physical, and bacteriological properties of recovered water and signals malfunctions in the water recovery system. The monitor incorporates, wherever possible, commercially available sensors that have been suitably modified.
Automated extraction of radiation dose information from CT dose report images.
Li, Xinhua; Zhang, Da; Liu, Bob
2011-06-01
The purpose of this article is to describe the development of an automated tool for retrieving texts from CT dose report images. Optical character recognition was adopted to perform text recognitions of CT dose report images. The developed tool is able to automate the process of analyzing multiple CT examinations, including text recognition, parsing, error correction, and exporting data to spreadsheets. The results were precise for total dose-length product (DLP) and were about 95% accurate for CT dose index and DLP of scanned series.
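A hedged sketch of the OCR-then-parse workflow described above, not the authors' tool: recognize the text on a dose report image, then parse the total DLP with a regular expression. The field label and layout are scanner-specific assumptions.

```python
import re
from PIL import Image
import pytesseract  # requires a local Tesseract installation

def extract_total_dlp(image_path: str):
    """OCR a CT dose report image and return the total DLP, if found."""
    text = pytesseract.image_to_string(Image.open(image_path))
    m = re.search(r"Total\s+DLP\D*?(\d+(?:\.\d+)?)", text, re.IGNORECASE)
    return float(m.group(1)) if m else None
```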
Quality control and quality assurance plan for bridge channel-stability assessments in Massachusetts
Parker, Gene W.; Pinson, Harlow
1993-01-01
A quality control and quality assurance plan has been implemented as part of the Massachusetts bridge scour and channel-stability assessment program. This program is being conducted by the U.S. Geological Survey, Massachusetts-Rhode Island District, in cooperation with the Massachusetts Highway Department. Project personnel training, data-integrity verification, and new data-management technologies are being utilized in the channel-stability assessment process to improve current data-collection and management techniques. An automated data-collection procedure has been implemented to standardize channel-stability assessments on a regular basis within the State. An object-oriented data structure and new image management tools are used to produce a data base enabling management of multiple data object classes. Data will be reviewed by assessors and data base managers before being merged into a master bridge-scour data base, which includes automated data-verification routines.
Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.
Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming
2017-01-01
Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
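One way to picture the role of semantic constraints: a functional constraint (an entity has at most one value for a relation) makes conflicting extractions detectable without human input, so only the conflicting groups need crowdsourcing. An illustrative sketch, not the paper's algorithm; the facts and confidences are made up:

```python
from collections import defaultdict

facts = [
    ("Turing", "born_in", "London",   0.9),
    ("Turing", "born_in", "Paris",    0.4),   # violates the functional constraint
    ("Hopper", "born_in", "New York", 0.8),
]

groups = defaultdict(list)
for subj, rel, obj, conf in facts:
    groups[(subj, rel)].append((obj, conf))

# Route only constraint-violating groups (more than one candidate object) to the crowd.
to_crowdsource = {k: v for k, v in groups.items() if len(v) > 1}
print(to_crowdsource)  # {('Turing', 'born_in'): [('London', 0.9), ('Paris', 0.4)]}
```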
ERIC Educational Resources Information Center
Bradley, Lucy K.; Cook, Jonneen; Cook, Chris
2011-01-01
North Carolina State University has incorporated many aspects of volunteer program administration and reporting into an on-line solution that integrates impact reporting into daily program management. The Extension Master Gardener Intranet automates many of the administrative tasks associated with volunteer management, increasing efficiency, and…
Jurado, Marisa; Algora, Manuel; Garcia-Sanchez, Félix; Vico, Santiago; Rodriguez, Eva; Perez, Sonia; Barbolla, Luz
2012-01-01
Background The Community Transfusion Centre in Madrid currently processes whole blood using a conventional procedure (Compomat, Fresenius) followed by automated processing of buffy coats with the OrbiSac system (CaridianBCT). The Atreus 3C system (CaridianBCT) automates the production of red blood cells, plasma and an interim platelet unit from a whole blood unit. Interim platelet unit are pooled to produce a transfusable platelet unit. In this study the Atreus 3C system was evaluated and compared to the routine method with regards to product quality and operational value. Materials and methods Over a 5-week period 810 whole blood units were processed using the Atreus 3C system. The attributes of the automated process were compared to those of the routine method by assessing productivity, space, equipment and staffing requirements. The data obtained were evaluated in order to estimate the impact of implementing the Atreus 3C system in the routine setting of the blood centre. Yield and in vitro quality of the final blood components processed with the two systems were evaluated and compared. Results The Atreus 3C system enabled higher throughput while requiring less space and employee time by decreasing the amount of equipment and processing time per unit of whole blood processed. Whole blood units processed on the Atreus 3C system gave a higher platelet yield, a similar amount of red blood cells and a smaller volume of plasma. Discussion These results support the conclusion that the Atreus 3C system produces blood components meeting quality requirements while providing a high operational efficiency. Implementation of the Atreus 3C system could result in a large organisational improvement. PMID:22044958
Lee, Myungeun; Woo, Boyeong; Kuo, Michael D.; Jamshidi, Neema; Kim, Jong Hyo
2017-01-01
Objective The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. Materials and Methods MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and Rand Statistic. Results Our study results showed that most of the radiomic features in GBM were highly stable. Over 90% of 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability (ICC < 0.5). Most first order statistics and morphometric features showed moderate-to-high NDR (4 > NDR ≥1), while above 35% of the texture features showed poor NDR (< 1). Features were shown to cluster into only 5 groups, indicating that they were highly redundant. Conclusion The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability; thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics. PMID:28458602
Quality Controlling CMIP datasets at GFDL
NASA Astrophysics Data System (ADS)
Horowitz, L. W.; Radhakrishnan, A.; Balaji, V.; Adcroft, A.; Krasting, J. P.; Nikonov, S.; Mason, E. E.; Schweitzer, R.; Nadeau, D.
2017-12-01
As GFDL makes the switch from model development to production in light of the Coupled Model Intercomparison Project (CMIP), GFDL's efforts have shifted to testing and, more importantly, establishing guidelines and protocols for quality controlling and semi-automated data publishing. Every CMIP cycle introduces key challenges, and the upcoming CMIP6 is no exception. The new CMIP experimental design comprises multiple MIPs facilitating research in different focus areas. This paradigm has implications not only for the groups that develop the models and conduct the runs, but also for the groups that monitor, analyze and quality control the datasets before data publishing and before their knowledge makes its way into reports like the IPCC (Intergovernmental Panel on Climate Change) Assessment Reports. In this talk, we discuss some of the paths taken at GFDL to quality control the CMIP-ready datasets, including: Jupyter notebooks; PrePARE; and a LAMP (Linux, Apache, MySQL, PHP/Python/Perl) technology-driven tracker system to monitor the status of experiments qualitatively and quantitatively and to provide additional metadata and analysis services, along with some built-in controlled-vocabulary validations in the workflow. In addition, we also discuss the integration of community-based model evaluation software (ESMValTool, PCMDI Metrics Package, and ILAMB) as part of our CMIP6 workflow.
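As a flavor of the controlled-vocabulary validations mentioned, a hedged sketch (not GFDL's or PrePARE's code; the vocabulary subset is illustrative) that checks a dataset's global attributes before publication:

```python
# Tiny illustrative subset of the CMIP6 controlled vocabulary.
CMIP6_CV = {
    "experiment_id": {"historical", "piControl", "ssp585"},
    "frequency": {"mon", "day", "6hr"},
}

def validate_attrs(attrs: dict) -> list:
    """Return a message for each attribute whose value is outside the controlled vocabulary."""
    return [f"{key}={value!r} not in controlled vocabulary"
            for key, value in attrs.items()
            if key in CMIP6_CV and value not in CMIP6_CV[key]]

print(validate_attrs({"experiment_id": "historcal", "frequency": "mon"}))
# -> ["experiment_id='historcal' not in controlled vocabulary"]
```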
NASA Technical Reports Server (NTRS)
Thompson, David S.; Soni, Bharat K.
2001-01-01
An integrated geometry/grid/simulation software package, ICEG2D, is being developed to automate computational fluid dynamics (CFD) simulations for single- and multi-element airfoils with ice accretions. The current version, ICEG2D (v2.0), was designed to automatically perform four primary functions: (1) generate a grid-ready surface definition based on the geometrical characteristics of the iced airfoil surface, (2) generate high-quality structured and generalized grids starting from a defined surface definition, (3) generate the input and restart files needed to run the structured grid CFD solver NPARC or the generalized grid CFD solver HYBFL2D, and (4) using the flow solutions, generate solution-adaptive grids. ICEG2D (v2.0) can be operated either in a batch mode using a script file or in an interactive mode by entering directives from a command line within a Unix shell. This report summarizes activities completed in the first two years of a three-year research and development program to address automation issues related to CFD simulations for airfoils with ice accretions. As well as describing the technology employed in the software, this document serves as a user's manual providing installation and operating instructions. An evaluation of the software is also presented.
James, Matthew T; Hobson, Charles E; Darmon, Michael; Mohan, Sumit; Hudson, Darren; Goldstein, Stuart L; Ronco, Claudio; Kellum, John A; Bagshaw, Sean M
2016-01-01
Electronic medical records and clinical information systems are increasingly used in hospitals and can be leveraged to improve recognition and care for acute kidney injury (AKI). This Acute Dialysis Quality Initiative (ADQI) workgroup was convened to develop consensus around principles for the design of automated AKI detection systems that produce real-time AKI alerts using electronic systems. AKI alerts were recognized by the workgroup as an opportunity to prompt earlier clinical evaluation, further testing and ultimately intervention, rather than as a diagnostic label. Workgroup members agreed with designing AKI alert systems to align with the existing KDIGO classification system, but recommended future work to further refine the appropriateness of AKI alerts and to link these alerts to actionable recommendations for AKI care. The consensus statements developed in this review can be used as a roadmap for development of future electronic applications for automated detection and reporting of AKI.
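For instance, an alert aligned with the KDIGO creatinine criteria fires when serum creatinine rises by at least 0.3 mg/dL within 48 hours or to at least 1.5 times baseline. A minimal sketch of that rule, not an ADQI reference implementation; baseline handling is deliberately simplified:

```python
from datetime import timedelta

def aki_alert(creatinine_series, baseline: float) -> bool:
    """creatinine_series: chronologically sorted list of (datetime, mg/dL) pairs."""
    for i, (t_now, c_now) in enumerate(creatinine_series):
        if c_now >= 1.5 * baseline:                     # relative rise vs baseline
            return True
        for t_prev, c_prev in creatinine_series[:i]:    # absolute rise within 48 h
            if t_now - t_prev <= timedelta(hours=48) and c_now - c_prev >= 0.3:
                return True
    return False
```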
Design And Implementation Of Integrated Vision-Based Robotic Workcells
NASA Astrophysics Data System (ADS)
Chen, Michael J.
1985-01-01
Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of flexible manufacturing systems (FMS), work cells, and work stations, and their control hierarchy, are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to exemplify the underlying design philosophy and principles in the fabrication of vision-based robotic systems.
An Automated HIV-1 Env-Pseudotyped Virus Production for Global HIV Vaccine Trials
Fuss, Martina; Mazzotta, Angela S.; Sarzotti-Kelsoe, Marcella; Ozaki, Daniel A.; Montefiori, David C.; von Briesen, Hagen; Zimmermann, Heiko; Meyerhans, Andreas
2012-01-01
Background Infections with HIV still represent a major human health problem worldwide and a vaccine is the only long-term option to fight efficiently against this virus. Standardized assessments of HIV-specific immune responses in vaccine trials are essential for prioritizing vaccine candidates in preclinical and clinical stages of development. With respect to neutralizing antibodies, assays with HIV-1 Env-pseudotyped viruses are a high priority. To cover the increasing demands of HIV pseudoviruses, a complete cell culture and transfection automation system has been developed. Methodology/Principal Findings The automation system for HIV pseudovirus production comprises a modified Tecan-based Cellerity system. It covers an area of 5×3 meters and includes a robot platform, a cell counting machine, a CO2 incubator for cell cultivation and a media refrigerator. The processes for cell handling, transfection and pseudovirus production have been implemented according to manual standard operating procedures and are controlled and scheduled autonomously by the system. The system is housed in a biosafety level II cabinet that guarantees protection of personnel, environment and the product. HIV pseudovirus stocks at scales from 140 ml to 1000 ml have been produced on the automated system. Parallel manual production of HIV pseudoviruses and comparisons (bridging assays) confirmed that the automatically produced pseudoviruses were of equivalent quality to those produced manually. In addition, the automated method was fully validated according to Good Clinical Laboratory Practice (GCLP) guidelines, including the validation parameters accuracy, precision, robustness and specificity. Conclusions An automated HIV pseudovirus production system has been successfully established. It allows the high-quality production of HIV pseudoviruses under GCLP conditions. In its present form, the installed module enables the production of 1000 ml of virus-containing cell culture supernatant per week. Thus, this novel automation facilitates standardized large-scale productions of HIV pseudoviruses for ongoing and upcoming HIV vaccine trials. PMID:23300558
ChiLin: a comprehensive ChIP-seq and DNase-seq quality control and analysis pipeline.
Qin, Qian; Mei, Shenglin; Wu, Qiu; Sun, Hanfei; Li, Lewyn; Taing, Len; Chen, Sujun; Li, Fugen; Liu, Tao; Zang, Chongzhi; Xu, Han; Chen, Yiwen; Meyer, Clifford A; Zhang, Yong; Brown, Myles; Long, Henry W; Liu, X Shirley
2016-10-03
Transcription factor binding, histone modification, and chromatin accessibility studies are important approaches to understanding the biology of gene regulation. ChIP-seq and DNase-seq have become the standard techniques for studying protein-DNA interactions and chromatin accessibility, respectively, and comprehensive quality control (QC) and analysis tools are critical to extracting the most value from these assay types. Although many analysis and QC tools have been reported, few combine ChIP-seq and DNase-seq data analysis and quality control in a unified framework with a comprehensive and unbiased reference of data quality metrics. ChiLin is a computational pipeline that automates the quality control and data analyses of ChIP-seq and DNase-seq data. It is developed using a flexible and modular software framework that can be easily extended and modified. ChiLin is ideal for batch processing of many datasets and is well suited for large collaborative projects involving ChIP-seq and DNase-seq from different designs. ChiLin generates comprehensive quality control reports that include comparisons with historical data derived from over 23,677 public ChIP-seq and DNase-seq samples (11,265 datasets) from eight literature-based classified categories. To the best of our knowledge, this atlas represents the most comprehensive ChIP-seq and DNase-seq related quality metric resource currently available. These historical metrics provide useful heuristic quality references for experiments across all commonly used assay types. Using representative datasets, we demonstrate the versatility of the pipeline by applying it to different assay types of ChIP-seq data. The pipeline software is available open source at https://github.com/cfce/chilin. ChiLin is a scalable and powerful tool to process large batches of ChIP-seq and DNase-seq datasets. The analysis output and quality metrics have been structured into user-friendly directories and reports. We have successfully compiled 23,677 profiles into a comprehensive quality atlas with fine classification for users.
Saikali, Melody; Tanios, Alain; Saab, Antoine
2017-11-21
The aim of the study was to evaluate the sensitivity and resource efficiency of a partially automated adverse event (AE) surveillance system for routine patient safety efforts in hospitals with limited resources. Twenty-eight automated triggers from the hospital information system's clinical and administrative databases identified cases that were then filtered by exclusion criteria per trigger and then reviewed by an interdisciplinary team. The system, developed and implemented using in-house resources, was applied for 45 days of surveillance, for all hospital inpatient admissions (N = 1107). Each trigger was evaluated for its positive predictive value (PPV). Furthermore, the sensitivity of the surveillance system (overall and by AE category) was estimated relative to incidence ranges in the literature. The surveillance system identified a total of 123 AEs among 283 reviewed medical records, yielding an overall PPV of 52%. The tool showed variable levels of sensitivity across and within AE categories when compared with the literature, with a relatively low overall sensitivity estimated between 21% and 44%. Adverse events were detected in 23 of the 36 AE categories defined by an established harm classification system. Furthermore, none of the detected AEs were voluntarily reported. The surveillance system showed variable sensitivity levels across a broad range of AE categories with an acceptable PPV, overcoming certain limitations associated with other harm detection methods. The number of cases captured was substantial, and none had been previously detected or voluntarily reported. For hospitals with limited resources, this methodology provides valuable safety information from which interventions for quality improvement can be formulated.
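The trigger-filter-review flow described above can be summarized in a few lines. An illustrative sketch, not the hospital's system; the trigger and exclusion predicates are assumptions:

```python
def surveil(admissions, triggers, exclusions):
    """Yield (trigger_name, case) pairs that fire a trigger and survive its exclusion criteria."""
    for case in admissions:
        for name, fires in triggers.items():
            if fires(case) and not any(ex(case) for ex in exclusions.get(name, [])):
                yield name, case

def trigger_ppv(confirmed_aes: int, reviewed_cases: int) -> float:
    """Positive predictive value of a trigger: confirmed AEs over reviewed flagged cases."""
    return confirmed_aes / reviewed_cases if reviewed_cases else 0.0
```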
Automated assessment of cognitive health using smart home technologies.
Dawadi, Prafulla N; Cook, Diane J; Schmitter-Edgecombe, Maureen; Parsey, Carolyn
2013-01-01
The goal of this work is to develop intelligent systems to monitor the wellbeing of individuals in their home environments. This paper introduces a machine learning-based method to automatically predict activity quality in smart homes and automatically assess cognitive health based on activity quality. This paper describes an automated framework to extract a set of features from smart home sensor data that reflects the activity performance or ability of an individual to complete an activity, which can be input to machine learning algorithms. Output from learning algorithms including principal component analysis, support vector machine, and logistic regression algorithms is used to quantify activity quality for a complex set of smart home activities and predict the cognitive health of participants. Smart home activity data was gathered from volunteer participants (n=263) who performed a complex set of activities in our smart home testbed. We compare our automated activity quality prediction and cognitive health prediction with direct observation scores and health assessments obtained from neuropsychologists. With all samples included, we obtained a statistically significant correlation (r=0.54) between direct observation scores and predicted activity quality. Similarly, using a support vector machine classifier, we obtained reasonable classification accuracy (area under the ROC curve=0.80, g-mean=0.73) in classifying participants into two different cognitive classes, dementia and cognitively healthy. The results suggest that it is possible to automatically quantify the task quality of smart home activities and perform limited assessment of the cognitive health of individuals if smart home activities are properly chosen and learning algorithms are appropriately trained.
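A minimal sketch of the general pipeline, not the authors' code: sensor-derived activity features feed a classifier whose output separates cognitive classes. The data here is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(263, 10))                          # per-participant activity features
y = (X[:, 0] + rng.normal(size=263) > 0).astype(int)    # synthetic cognitive class labels

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC = {auc:.2f}")
```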
Computer program CDCID: an automated quality control program using CDC update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, G.L.; Aguilar, F.
1984-04-01
A computer program, CDCID, has been developed in coordination with a quality control program to provide a highly automated method of documenting changes to computer codes at EG and G Idaho, Inc. The method uses the standard CDC UPDATE program in such a manner that updates and their associated documentation are easily made and retrieved in various formats. The method allows each card image of a source program to point to the document which describes it, who created the card, and when it was created. The method described is applicable to the quality control of computer programs in general. The computer program described is executable only on CDC computing systems, but the program could be modified and applied to any computing system with an adequate updating program.
Petkewich, Matthew D.; Daamen, Ruby C.; Roehl, Edwin A.; Conrads, Paul
2016-09-29
The generation of Everglades Depth Estimation Network (EDEN) daily water-level and water-depth maps is dependent on high-quality real-time data from over 240 water-level stations. To increase the accuracy of the daily water-surface maps, the Automated Data Assurance and Management (ADAM) tool was created by the U.S. Geological Survey as part of Greater Everglades Priority Ecosystems Science. The ADAM tool is used to provide accurate quality-assurance review of the real-time data from the EDEN network and allows estimation or replacement of missing or erroneous data. This user's manual describes how to install and operate the ADAM software. The file structure and operation of the ADAM software are explained using examples.
Protocols for Automated Protist Analysis
2011-12-01
Report No. CG-D-14-13, B. Nelson, et al., December 2011. Distribution Statement A: approved for public release; distribution is unlimited. United States Coast Guard Research & Development Center, 1 Chelsea Street, New London, CT 06320.
ERIC Educational Resources Information Center
Epstein, A. H.; And Others
The first phase of an ongoing library automation project at Stanford University is described. Project BALLOTS (Bibliographic Automation of Large Library Operations Using a Time-Sharing System) seeks to automate the acquisition and cataloging functions of a large library using an on-line time-sharing computer. The main objectives are to control…
Automation of Acquisition Records and Routine in the University Library, Newcastle upon Tyne
ERIC Educational Resources Information Center
Line, Maurice B.
2006-01-01
Purpose: Reports on the trial of an automated order routine for the University Library in Newcastle which began in April 1966. Design/methodology/approach: Presents the author's experiences of the manual order processing system, and the impetus for trialling an automated system. The stages of the automated system are described in detail. Findings:…
Feasibility Study for an Automated Library System. Final Report.
ERIC Educational Resources Information Center
Beaumont and Associates, Inc.
This study was initiated by the Newfoundland Public Library Services (NPLS) to assess the feasibility of automation for the library services and to determine the viability of an integrated automated library system for the NPLS. The study addresses the needs of NPLS in terms of library automation; benefits to be achieved through the introduction of…
Advancing automation and robotics technology for the Space Station Freedom and for the U.S. economy
NASA Technical Reports Server (NTRS)
Lum, Henry, Jr.
1992-01-01
In April 1985, as required by Public Law 98-371, the NASA Advanced Technology Advisory Committee (ATAC) reported to Congress the results of its studies on advanced automation and robotics technology for use on Space Station Freedom. This material was documented in the initial report (NASA Technical Memorandum 87566). A further requirement of the law was that ATAC follow NASA's progress in this area and report to Congress semiannually. This report is the fifteenth in a series of progress updates and covers the period between 27 Feb. and 17 Sep. 1992. The progress made by Levels 1, 2, and 3 of the Space Station Freedom in developing and applying advanced automation and robotics technology is described. Emphasis was placed upon the Space Station Freedom program responses to specific recommendations made in ATAC Progress Report 14. Assessments are presented for these and other areas as they apply to the advancement of automation and robotics technology for Space Station Freedom.
NASA Technical Reports Server (NTRS)
Lum, Henry, Jr.
1991-01-01
In April 1985, as required by Public Law 98-371, the NASA Advanced Technology Advisory Committee (ATAC) reported to Congress the results of its studies on advanced automation and robotics technology for use on Space Station Freedom. This material was documented in the initial report (NASA Technical Memorandum 87566). A further requirement of the law was that ATAC follow NASA's progress in this area and report to Congress semiannually. The report describes the progress made by Levels 1, 2, and 3 of the Office of Space Station in developing and applying advanced automation and robotics technology. Emphasis has been placed upon the Space Station Freedom Program responses to specific recommendations made in ATAC Progress Report 11, the status of the Flight Telerobotic Servicer, and the status of the Advanced Development Program. In addition, an assessment is provided of the automation and robotics status of the Canadian Space Station Program.
Dewes, Patricia; Frellesen, Claudia; Scholtz, Jan-Erik; Fischer, Sebastian; Vogl, Thomas J; Bauer, Ralf W; Schulz, Boris
2016-06-01
To evaluate a novel tin filter-based abdominal CT protocol for urolithiasis in terms of image quality and CT dose parameters. 130 consecutive patients with suspected urolithiasis underwent non-enhanced CT with three different protocols: 48 patients (group 1) were examined at tin-filtered 150 kV (150 kV Sn) on a third-generation dual-source CT, 33 patients were examined with automated kV selection (110-140 kV) based on the scout view on the same CT device (group 2), and 49 patients were examined on a second-generation dual-source CT (group 3) with automated kV selection (100-140 kV). Automated exposure control was active in all groups. Image quality was subjectively evaluated on a 5-point Likert scale by two radiologists, and interobserver agreement as well as signal-to-noise ratio (SNR) was calculated. Dose-length product (DLP) and volume CT dose index (CTDIvol) were compared. Image quality was rated in favour of the tin filter protocol with excellent interobserver agreement (ICC=0.86-0.91), and the difference reached statistical significance (p<0.001). SNR was significantly higher in groups 1 and 2 compared to the second-generation DSCT (p<0.001). On third-generation dual-source CT, there was no significant difference in SNR between the 150 kV Sn and the automated kV selection protocol (p=0.5). The DLP of group 1 was 23% and 21% lower (p<0.002) in comparison to groups 2 and 3, respectively, as was the CTDIvol of group 1 compared to group 2 (-36%) and group 3 (-32%) (p<0.001). Additional shaping of a 150 kV source spectrum by a tin filter substantially lowers patient exposure while improving image quality in unenhanced abdominal computed tomography for urinary stone disease. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Hirsch, Lior M; Wallace, Sarah K; Leary, Marion; Tucker, Kathryn D; Becker, Lance B; Abella, Benjamin S
2012-07-01
Access to automated external defibrillators and cardiopulmonary resuscitation (CPR) training are key determinants of cardiac arrest survival. State police officers represent an important class of cardiac arrest first responders responsible for the large network of highways in the United States. We seek to determine accessibility of automated external defibrillators and CPR training among state police agencies. Contact was attempted with all 50 state police agencies by telephone and electronic mail. Officers at each agency were guided to complete a 15-question Internet-based survey. Descriptive statistics of the responses were performed. Attempts were made to contact all 50 states, and 46 surveys were completed (92% response rate). Most surveys were filled out by police leadership or individuals responsible for medical programs. The median agency size was 725 (interquartile range 482 to 1,485) state police officers, with 695 (interquartile range 450 to 1,100) patrol vehicles ("squad cars"). Thirty-three percent of responding agencies (15/46) reported equipping police vehicles with automated external defibrillators. Of these, 53% (8/15) equipped less than half of their fleet with the devices. Regarding emergency medical training, 78% (35/45) of state police agencies reported training their officers in automated external defibrillator usage, and 98% (44/45) reported training them in CPR. One third of state police agencies surveyed equipped their vehicles with automated external defibrillators, and among those that did, most equipped only a minority of their fleet. Most state police agencies reported training their officers in automated external defibrillator usage and CPR. Increasing automated external defibrillator deployment among state police represents an important opportunity to improve first responder preparedness for cardiac arrest care. Copyright © 2012. Published by Mosby, Inc.
2014-07-01
Submoderating factors were examined and reported for human-related (i.e., age, cognitive factors, emotive factors) and automation-related (i.e., features and ... capabilities) effects. Analyses were also conducted for type of automated aid: cognitive, control, and perceptual automation aids. Automated cognitive ... (operator, user) action. Perceptual aids are used to assist the operator or user by providing warnings or to assist with pattern recognition. All
Office Automation: A Look Beyond Word Processing.
1983-06-01
"Office automation" implies the type and nature of work performed by white-collar employees, and "products" denotes the techniques and type of equipment necessary ... This Master's thesis (June 1983) takes a look at the problems of implementing an automated office and the possible impact it can have on human office workers.
NASA Astrophysics Data System (ADS)
Pastorello, G.; Agarwal, D.; Poindexter, C.; Papale, D.; Trotta, C.; Ribeca, A.; Canfora, E.; Faybishenko, B.; Gunter, D.; Chu, H.
2015-12-01
The flux-measuring sites that are part of AmeriFlux are operated and maintained in a fairly independent fashion, both in terms of scientific goals and operational practices. This is also the case for most sites from other networks in FLUXNET. This independence leads to a degree of heterogeneity in the data sets collected at the sites, which is also reflected in data quality levels. The generation of derived data products and data synthesis efforts, two of the main goals of these networks, are directly affected by the heterogeneity in data quality. In a collaborative effort between AmeriFlux and ICOS, a series of quality checks are being conducted for the data sets before any network-level data processing and product generation take place. From these checks, a set of common data issues were identified, and are being cataloged and classified into data quality patterns. These patterns are now being used as a basis for implementing automation for certain data quality checks, speeding up the process of applying the checks and evaluating the data. Currently, most data checks are performed individually in each data set, requiring visual inspection and inputs from a data curator. This manual process makes it difficult to scale the quality checks, creating a bottleneck for the data processing. One goal of the automated checks is to free up time of data curators so they can focus on new or less common issues. As new issues are identified, they can also be cataloged and classified, extending the coverage of existing patterns or potentially generating new patterns, helping both improve existing automated checks and create new ones. This approach is helping make data quality evaluation faster, more systematic, and reproducible. Furthermore, these patterns are also helping with documenting common causes and solutions for data problems. This can help tower teams with diagnosing problems in data collection and processing, and also in correcting historical data sets. In this presentation, using AmeriFlux fluxes and micrometeorological data, we discuss our approach to creating observational data patterns, and how we are using them to implement new automated checks. We also detail examples of these observational data patterns, illustrating how they are being used.
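As a rough illustration of the automated checks described above, the sketch below applies a physical-range test and a simple spike test to a half-hourly flux series. The variable names, thresholds, and window length are invented placeholders, not the actual AmeriFlux/ICOS rules.

```python
import numpy as np

def range_check(x, lo, hi):
    """Flag samples outside a plausible physical range."""
    return (x < lo) | (x > hi)

def spike_check(x, window=9, z=5.0):
    """Flag samples whose residual from a running median exceeds
    z times the (Gaussian-scaled) median absolute deviation."""
    pad = window // 2
    padded = np.pad(x, pad, mode="edge")
    med = np.array([np.median(padded[i:i + window]) for i in range(len(x))])
    resid = x - med
    mad = np.median(np.abs(resid - np.median(resid))) + 1e-12
    return np.abs(resid) > z * 1.4826 * mad

# Hypothetical half-hourly CO2 flux record (umol m-2 s-1) with one spike.
flux = np.sin(np.linspace(0, 6 * np.pi, 200)) * 10
flux[120] = 80.0
bad = range_check(flux, -50, 50) | spike_check(flux)
print(f"{bad.sum()} of {flux.size} samples flagged")
```

In a network setting, each flagged sample would be routed to a curator for review rather than silently dropped, matching the supervised workflow the abstract describes.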
Assessment of Operational Automated Guideway Systems - Airtrans (Phase II)
DOT National Transportation Integrated Search
1980-01-01
This study, Phase II, completes the assessment of AIRTRANS, the automated guideway system located at the Dallas-Fort Worth Airport. The Phase I assessment report: "Assessment of Operational Automated Guideway Systems--AIRTRANS (Phase I)" (PB-261 339)...
Automation's Effect on Library Personnel.
ERIC Educational Resources Information Center
Dakshinamurti, Ganga
1985-01-01
Reports on survey studying the human-machine interface in Canadian university, public, and special libraries. Highlights include position category and educational background of 118 participants, participants' feelings toward automation, physical effects of automation, diffusion in decision making, interpersonal communication, future trends,…
Performance Evaluation of the UT Automated Road Maintenance Machine
DOT National Transportation Integrated Search
1997-10-01
This final report focuses mainly on evaluating the overall performance of The University of Texas' Automated Road Maintenance Machine (ARMM). It was concluded that the introduction of automated methods to the pavement crack-sealing process will impro...
Use of automated enforcement for red light violations
DOT National Transportation Integrated Search
1997-08-01
The use of automated enforcement systems offers the potential to decrease the number of red light violations and improve the safety of intersections. Included in this report are an evaluation of the operating conditions where automated enforcement wa...
Advancing automation and robotics technology for the Space Station Freedom and for the US economy
NASA Technical Reports Server (NTRS)
1990-01-01
In April 1985, the NASA Advanced Technology Advisory Committee (ATAC) reported to Congress the results of its studies on advanced automation and robotics technology for use on Space Station Freedom. This material was documented in the initial report (NASA Technical Memorandum 87566). The progress made by Levels 1, 2, and 3 of the Office of Space Station in developing and applying advanced automation and robotics technology are described. Emphasis was placed upon the Space Station Freedom Program responses to specific recommendations made in ATAC Progress Report 9, the Flight Telerobotic Servicer, the Advanced Development Program, and the Data Management System. Assessments are presented for these and other areas as they apply to the advancement of automation and robotics technology for the Space Station Freedom.
Comparison of water-quality samples collected by siphon samplers and automatic samplers in Wisconsin
Graczyk, David J.; Robertson, Dale M.; Rose, William J.; Steur, Jeffrey J.
2000-01-01
In small streams, flow and water-quality concentrations often change quickly in response to meteorological events. Hydrologists, field technicians, or locally hired stream observers involved in water-data collection are often unable to reach streams quickly enough to observe or measure these rapid changes. Therefore, in hydrologic studies designed to describe changes in water quality, a combination of manual and automated sampling methods has commonly been used: manual methods when flow is relatively stable and automated methods when flow is rapidly changing. Automated sampling, which makes use of equipment programmed to collect samples in response to changes in stage and flow of a stream, has been shown to be an effective method of sampling to describe the rapid changes in water quality (Graczyk and others, 1993). Because of the high cost of automated sampling, however, especially for studies examining a large number of sites, alternative methods have been considered for collecting samples during rapidly changing stream conditions. One such method employs the siphon sampler (fig. 1), also referred to as the "single-stage sampler." Siphon samplers are inexpensive to build (about $25-$50 per sampler), operate, and maintain, so they are cost effective to use at a large number of sites. Their ability to collect samples representing the average quality of water passing through the entire cross section of a stream, however, has not been fully demonstrated for many types of stream sites.
Peterfreund, Robert A; Driscoll, William D; Walsh, John L; Subramanian, Aparna; Anupama, Shaji; Weaver, Melissa; Morris, Theresa; Arnholz, Sarah; Zheng, Hui; Pierce, Eric T; Spring, Stephen F
2011-05-01
Efforts to assure high-quality, safe, clinical care depend upon capturing information about near-miss and adverse outcome events. Inconsistent or unreliable information capture, especially for infrequent events, compromises attempts to analyze events in quantitative terms, understand their implications, and assess corrective efforts. To enhance reporting, we developed a secure, electronic, mandatory system for reporting quality assurance data linked to our electronic anesthesia record. We used the capabilities of our anesthesia information management system (AIMS) in conjunction with internally developed, secure, intranet-based, Web application software. The application is implemented with a backend allowing robust data storage, retrieval, data analysis, and reporting capabilities. We customized a feature within the AIMS software to create a hard stop in the documentation workflow before the end of anesthesia care time stamp for every case. The software forces the anesthesia provider to access the separate quality assurance data collection program, which provides a checklist for targeted clinical events and a free text option. After completing the event collection program, the software automatically returns the clinician to the AIMS to finalize the anesthesia record. The number of events captured by the departmental quality assurance office increased by 92% (95% confidence interval [CI] 60.4%-130%) after system implementation. The major contributor to this increase was the new electronic system. This increase has been sustained over the initial 12 full months after implementation. Under our reporting criteria, the overall rate of clinical events reported by any method was 471 events out of 55,382 cases or 0.85% (95% CI 0.78% to 0.93%). The new system collected 67% of these events (95% confidence interval 63%-71%). We demonstrate the implementation in an academic anesthesia department of a secure clinical event reporting system linked to an AIMS. The system enforces entry of quality assurance information (either no clinical event or notification of a clinical event). System implementation resulted in capturing nearly twice the number of events at a relatively steady case load. © 2011 International Anesthesia Research Society
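The "hard stop" described in this abstract can be sketched generically: record finalization refuses to proceed until a QA entry (an event or an explicit "no event") exists. The class and function names below are hypothetical, not the authors' AIMS implementation.

```python
from dataclasses import dataclass

@dataclass
class AnesthesiaCase:
    case_id: str
    qa_entry: str | None = None   # "no_event" or an event description
    finalized: bool = False

def record_qa(case: AnesthesiaCase, entry: str) -> None:
    case.qa_entry = entry

def finalize_record(case: AnesthesiaCase) -> None:
    # Hard stop: the record cannot be closed without a QA entry,
    # mirroring the mandatory-capture idea described in the abstract.
    if case.qa_entry is None:
        raise RuntimeError(f"Case {case.case_id}: QA entry required before finalization")
    case.finalized = True

case = AnesthesiaCase("A-1001")
record_qa(case, "no_event")
finalize_record(case)
print(case.finalized)  # True
```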
Campillo-Gimenez, Boris; Garcelon, Nicolas; Jarno, Pascal; Chapplain, Jean Marc; Cuggia, Marc
2013-01-01
The surveillance of Surgical Site Infections (SSI) contributes to the management of risk in French hospitals. Manual identification of infections is costly and time-consuming and limits the promotion of preventive procedures by the dedicated teams. The introduction of alternative methods using automated detection strategies is promising to improve this surveillance. The present study describes an automated detection strategy for SSI in neurosurgery, based on textual analysis of medical reports stored in a clinical data warehouse. The method consists, firstly, of enrichment and concept extraction from full-text reports using NOMINDEX and, secondly, of text similarity measurement using a vector space model. The text detection was compared to the conventional strategy based on self-declaration and to automated detection using the diagnosis-related group database. The text-mining approach showed the best detection accuracy, with recall and precision equal to 92% and 40% respectively, and confirmed the interest of reusing full-text medical reports to perform automated detection of SSI.
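The vector-space step can be illustrated with a generic TF-IDF/cosine-similarity sketch (scikit-learn shown here; the study used NOMINDEX concept extraction rather than raw TF-IDF, and the report texts below are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented stand-ins for de-identified surgical reports.
reports = [
    "wound dehiscence with purulent discharge, revision surgery performed",
    "uncomplicated postoperative course, incision clean and dry",
]
query = "purulent drainage at the surgical site, suspected infection"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(reports + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for text, s in zip(reports, scores):
    print(f"{s:.2f}  {text[:50]}")
```

Ranking reports by similarity to known-infection exemplars is what lets a surveillance team review only the top of the list, which is where the recall/precision trade-off reported above comes from.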
Automated video feature extraction : workshop summary report October 10-11 2012.
DOT National Transportation Integrated Search
2012-12-01
This report summarizes a 2-day workshop on automated video feature extraction. Discussion focused on the Naturalistic Driving : Study, funded by the second Strategic Highway Research Program, and also involved the companion roadway inventory dataset....
Helgadóttir, Fjóla Dögg; Menzies, Ross G; Onslow, Mark; Packman, Ann; O'Brian, Sue
2014-09-01
Social anxiety is common for those who stutter and efficacious cognitive behavior therapy (CBT) for them appears viable. However, there are difficulties with provision of CBT services for anxiety among those who stutter. Standalone Internet CBT treatment is a potential solution to those problems. CBTpsych is a fully automated, online social anxiety intervention for those who stutter. This report is a Phase I trial of CBTpsych. Fourteen participants were allowed 5 months to complete seven sections of CBTpsych. Pre-treatment and post-treatment assessments tested for social anxiety, common unhelpful thoughts related to stuttering, quality of life and stuttering frequency. Significant post-treatment improvements in social anxiety, unhelpful thoughts, and quality of life were reported. Five of seven participants diagnosed with social anxiety lost those diagnoses at post-treatment. The two participants who did not lose social anxiety diagnoses did not complete all the CBTpsych modules. CBTpsych did not improve stuttering frequency. Eleven of the fourteen participants who began treatment completed Section 4 or more of the CBTpsych intervention. CBTpsych provides a potential means to provide CBT treatment for social anxiety associated with stuttering, to any client without cost, regardless of location. Further clinical trials are warranted. At the end of this activity the reader will be able to: (a) describe that social anxiety is common in those who stutter; (b) discuss the origin of social anxiety and the associated link with bullying; (c) summarize the problems in provision of effective evidence based cognitive behavior therapy for adults who stutter; (d) describe a scalable computerized treatment designed to tackle the service provision gap; (e) describe the unhelpful thoughts associated with stuttering that this fully automated computer program was able to tackle; (f) list the positive outcomes for individuals who stuttered that participated in this trial such as the reduction of social anxiety symptoms and improvement in the quality of life for individuals who stuttered and participated in this trial. Copyright © 2014 Elsevier Inc. All rights reserved.
The role of haemorrhage and exudate detection in automated grading of diabetic retinopathy.
Fleming, Alan D; Goatman, Keith A; Philip, Sam; Williams, Graeme J; Prescott, Gordon J; Scotland, Graham S; McNamee, Paul; Leese, Graham P; Wykes, William N; Sharp, Peter F; Olson, John A
2010-06-01
Automated grading has the potential to improve the efficiency of diabetic retinopathy screening services. While disease/no disease grading can be performed using only microaneurysm detection and image-quality assessment, automated recognition of other types of lesions may be advantageous. This study investigated whether inclusion of automated recognition of exudates and haemorrhages improves the detection of observable/referable diabetic retinopathy. Images from 1253 patients with observable/referable retinopathy and 6333 patients with non-referable retinopathy were obtained from three grading centres. All images were reference-graded, and automated disease/no disease assessments were made based on microaneurysm detection and combined microaneurysm, exudate and haemorrhage detection. Introduction of algorithms for exudates and haemorrhages resulted in a statistically significant increase in the sensitivity for detection of observable/referable retinopathy from 94.9% (95% CI 93.5 to 96.0) to 96.6% (95.4 to 97.4) without affecting manual grading workload. Automated detection of exudates and haemorrhages improved the detection of observable/referable retinopathy.
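The benefit of adding lesion detectors can be mimicked with a toy OR-combination of per-image detector outputs. The arrays below are synthetic and the rule is a simplification of the study's graded assessment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
referable = rng.random(n) < 0.15                     # synthetic ground truth

# Synthetic per-image detector outputs (True = lesion found).
microaneurysm = referable & (rng.random(n) < 0.90)
exudate       = referable & (rng.random(n) < 0.40)
haemorrhage   = referable & (rng.random(n) < 0.35)

def sensitivity(detected):
    return detected[referable].mean()

print(f"MA only:        {sensitivity(microaneurysm):.3f}")
print(f"MA+EX+HM (OR):  {sensitivity(microaneurysm | exudate | haemorrhage):.3f}")
```

Because the detectors miss different images, the OR-combination recovers cases a single detector misses, which is the mechanism behind the sensitivity gain the study reports.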
Technology transfer potential of an automated water monitoring system. [market research
NASA Technical Reports Server (NTRS)
Jamieson, W. M.; Hillman, M. E. D.; Eischen, M. A.; Stilwell, J. M.
1976-01-01
The nature and characteristics of the potential economic need (markets) for a highly integrated water quality monitoring system were investigated. The technological, institutional and marketing factors that would influence the transfer and adoption of an automated system were studied for application to public and private water supply, public and private wastewater treatment and environmental monitoring of rivers and lakes.
Telerobotics for depot modernization
NASA Technical Reports Server (NTRS)
Leahy, M. B., Jr.; Petroski, S. B.
1994-01-01
Development and application of telerobotics technology for the enhancement of the quality of the Air Logistics Centers (ALC) repair and remanufacturing processes is described. Telerobotics provides the means for bridging the gap between manual operation and full automation. The Robotics and Automation Center for Excellence (RACE) initiated the Unified Telerobotics Architecture Project (UTAP) to support the development and application of telerobotics for depot operation.
ERIC Educational Resources Information Center
Spaulding, Trent Joseph
2011-01-01
The objective of this research is to understand how a set of systems, as defined by the business process, creates value. The three studies contained in this work develop the model of process-based automation. The model states that complementarities among systems are specified by handoffs in the business process. The model also provides theory to…
Luan, Peng; Lee, Sophia; Paluch, Maciej; Kansopon, Joe; Viajar, Sharon; Begum, Zahira; Chiang, Nancy; Nakamura, Gerald; Hass, Philip E.; Wong, Athena W.; Lazar, Greg A.
2018-01-01
To rapidly find “best-in-class” antibody therapeutics, it has become essential to develop high throughput (HTP) processes that allow rapid assessment of antibodies for functional and molecular properties. Consequently, it is critical to have access to sufficient amounts of high quality antibody, to carry out accurate and quantitative characterization. We have developed automated workflows using liquid handling systems to conduct affinity-based purification either in batch or tip column mode. Here, we demonstrate the capability to purify >2000 antibodies per day from microscale (1 mL) cultures. Our optimized, automated process for human IgG1 purification using MabSelect SuRe resin achieves ∼70% recovery over a wide range of antibody loads, up to 500 µg. This HTP process works well for hybridoma-derived antibodies that can be purified by MabSelect SuRe resin. For rat IgG2a, which is often encountered in hybridoma cultures and is challenging to purify via an HTP process, we established automated purification with GammaBind Plus resin. Using these HTP purification processes, we can efficiently recover sufficient amounts of antibodies from mammalian transient or hybridoma cultures with quality comparable to conventional column purification. PMID:29494273
Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms
NASA Astrophysics Data System (ADS)
Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart
2008-03-01
Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stains (PWS), a vascular skin lesion frequently studied with PPTR, as strictly layered structures, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The automated regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar, or better, accuracy reconstructions can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.
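As a generic illustration of automated regularization (not the PPTR-specific scheme studied in the paper), the sketch below selects a Tikhonov parameter for a linear inversion by minimizing the generalized cross-validation (GCV) score computed from an SVD:

```python
import numpy as np

def gcv_tikhonov(A, y, lambdas):
    """Return the GCV-optimal parameter for min ||Ax - y||^2 + lam * ||x||^2."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ y
    m = len(y)
    best = None
    for lam in lambdas:
        f = s**2 / (s**2 + lam)                # Tikhonov filter factors
        resid2 = np.sum(((1 - f) * beta) ** 2)  # residual norm squared
        gcv = resid2 / (m - f.sum()) ** 2       # GCV score
        if best is None or gcv < best[1]:
            best = (lam, gcv)
    return best[0]

# Synthetic smoothing-type forward model (ill-posed, like thermal depth profiling).
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 80)
A = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.1)
x_true = np.exp(-((t - 0.4) / 0.05) ** 2)
y = A @ x_true + 0.01 * rng.standard_normal(t.size)

lam = gcv_tikhonov(A, y, np.logspace(-8, 2, 60))
print(f"GCV-selected lambda: {lam:.2e}")
```

The appeal of criteria like GCV is exactly what the abstract highlights: the regularization strength is chosen from the data themselves, with no trained user in the loop.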
A quality-refinement process for medical imaging applications.
Neuhaus, J; Maleike, D; Nolden, M; Kenngott, H-G; Meinzer, H-P; Wolf, I
2009-01-01
To introduce and evaluate a process for refinement of software quality that is suitable for research groups. In order to avoid constraining researchers too much, the quality improvement process has to be designed carefully. The scope of this paper is to present and evaluate a process to advance quality aspects of existing research prototypes in order to make them ready for initial clinical studies. The proposed process is tailored for research environments and therefore more lightweight than traditional quality management processes. It focuses on quality criteria that are important at the given stage of the software life cycle and emphasizes tools that automate aspects of the process. To evaluate the additional effort that comes along with the process, it was applied, as an example, to eight prototypical software modules for medical image processing. The introduced process has been applied to improve the quality of all prototypes so that they could be successfully used in clinical studies. The quality refinement required an average of 13 person-days of additional effort per project. Overall, 107 bugs were found and resolved by applying the process. Careful selection of quality criteria and the usage of automated process tools lead to a lightweight quality refinement process suitable for scientific research groups that can be applied to ensure a successful transfer of technical software prototypes into clinical research workflows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seibert, J; Imbergamo, P
The expansion and integration of diagnostic imaging technologies such as On Board Imaging (OBI) and Cone Beam Computed Tomography (CBCT) into radiation oncology has required radiation oncology physicists to be responsible for and become familiar with assessing image quality. Unfortunately many radiation oncology physicists have had little or no training or experience in measuring and assessing image quality. Many physicists have turned to automated QA analysis software without having a fundamental understanding of image quality measures. This session will review the basic image quality measures of imaging technologies used in the radiation oncology clinic, such as low-contrast resolution, high-contrast resolution, uniformity, noise, and contrast scale, and how to measure and assess them in a meaningful way. Additionally, a discussion of the implementation of an image quality assurance program in compliance with Task Group recommendations will be presented, along with the advantages and disadvantages of automated analysis methods. Learning Objectives: Review and understanding of the fundamentals of image quality. Review and understanding of the basic image quality measures of imaging modalities used in the radiation oncology clinic. Understand how to implement an image quality assurance program and how to assess basic image quality measures in a meaningful way.
Ibrahim, Sarah A; Martini, Luigi
2014-08-01
Dissolution method transfer is a complicated yet common process in the pharmaceutical industry. With increased pharmaceutical product manufacturing and dissolution acceptance requirements, dissolution testing has become one of the most labor-intensive quality control testing methods. There is an increasing trend toward automation in dissolution testing, particularly for large pharmaceutical companies, to reduce variability and increase personnel efficiency. There is no official guideline for dissolution testing method transfer from a manual, semi-automated, to automated dissolution tester. In this study, a manual multipoint dissolution testing procedure for an enteric-coated aspirin tablet was transferred effectively and reproducibly to a fully automated dissolution testing device, RoboDis II. Enteric-coated aspirin samples were used as a model formulation to assess the feasibility and accuracy of media pH change during continuous automated dissolution testing. Several RoboDis II parameters were evaluated to ensure the integrity and equivalency of dissolution method transfer from a manual dissolution tester. The current study provides a systematic outline for the transfer of a manual dissolution testing protocol to an automated dissolution tester. This study further supports that automated dissolution testers compliant with regulatory requirements and similar to manual dissolution testers facilitate method transfer. © 2014 Society for Laboratory Automation and Screening.
2004-03-01
On all levels of the military command hierarchy there is a strong demand for support through the automated processing of reconnaissance reports. This ... preconditions for the improvement of computer support and then illustrates the automated processing of report information using a military ambush situation in
45 CFR 30.13 - Debt reporting and use of credit reporting agencies.
Code of Federal Regulations, 2013 CFR
2013-10-01
... agencies. 30.13 Section 30.13 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION... over $100 to credit bureaus or other automated databases. Debts arising under the Social Security Act..., any subsequent reporting to or updating of a credit bureau or other automated database may be handled...
45 CFR 30.13 - Debt reporting and use of credit reporting agencies.
Code of Federal Regulations, 2011 CFR
2011-10-01
... agencies. 30.13 Section 30.13 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION... over $100 to credit bureaus or other automated databases. Debts arising under the Social Security Act..., any subsequent reporting to or updating of a credit bureau or other automated database may be handled...
45 CFR 30.13 - Debt reporting and use of credit reporting agencies.
Code of Federal Regulations, 2012 CFR
2012-10-01
... agencies. 30.13 Section 30.13 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION... over $100 to credit bureaus or other automated databases. Debts arising under the Social Security Act..., any subsequent reporting to or updating of a credit bureau or other automated database may be handled...
45 CFR 30.13 - Debt reporting and use of credit reporting agencies.
Code of Federal Regulations, 2014 CFR
2014-10-01
... agencies. 30.13 Section 30.13 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION... over $100 to credit bureaus or other automated databases. Debts arising under the Social Security Act..., any subsequent reporting to or updating of a credit bureau or other automated database may be handled...
Advancing automation and robotics technology for the Space Station Freedom and for the U.S. economy
NASA Technical Reports Server (NTRS)
1993-01-01
In April 1985, as required by Public Law 98-371, the NASA Advanced Technology Advisory Committee (ATAC) reported to Congress the results of its studies on advanced automation and robotics technology for use on Space Station Freedom. This material was documented in the initial report (NASA Technical Memorandum 87566). A further requirement of the law was that ATAC follow NASA's progress in this area and report to Congress semiannually. This report is the sixteenth in a series of progress updates and covers the period between 15 Sep. 1992 - 16 Mar. 1993. The report describes the progress made by Levels 1, 2, and 3 of the Space Station Freedom in developing and applying advanced automation and robotics technology. Emphasis was placed upon the Space Station Freedom Program responses to specific recommendations made in ATAC Progress Report 15; and includes a status review of Space Station Freedom Launch Processing facilities at Kennedy Space Center. Assessments are presented for these and other areas as they apply to the advancement of automation and robotics technology for Space Station Freedom.
a Critical Review of Automated Photogrammetric Processing of Large Datasets
NASA Astrophysics Data System (ADS)
Remondino, F.; Nocerino, E.; Toschi, I.; Menna, F.
2017-08-01
The paper reports comparisons between commercial software packages able to automatically process image datasets for 3D reconstruction purposes. The main aspects investigated in the work are the capability to correctly orient large sets of images of complex environments, the metric quality of the results, replicability and redundancy. Different datasets are employed, each one featuring a different number of images, GSDs at cm and mm resolutions, and ground truth information to perform statistical analyses of the 3D results. A summary of (photogrammetric) terms is also provided, in order to establish rigorous terms of reference for comparisons and critical analyses.
Automating the application of smart materials for protein crystallization.
Khurshid, Sahir; Govada, Lata; El-Sharif, Hazim F; Reddy, Subrayal M; Chayen, Naomi E
2015-03-01
The fabrication and validation of the first semi-liquid nonprotein nucleating agent to be administered automatically to crystallization trials is reported. This research builds upon prior demonstration of the suitability of molecularly imprinted polymers (MIPs; known as 'smart materials') for inducing protein crystal growth. Modified MIPs of altered texture suitable for high-throughput trials are demonstrated to improve crystal quality and to increase the probability of success when screening for suitable crystallization conditions. The application of these materials is simple, time-efficient and will provide a potent tool for structural biologists embarking on crystallization trials.
Automated batch characterization of inkjet-printed elastomer lenses using a LEGO platform.
Sung, Yu-Lung; Garan, Jacob; Nguyen, Hoang; Hu, Zhenyu; Shih, Wei-Chuan
2017-09-10
Small, self-adhesive, inkjet-printed elastomer lenses have enabled smartphone cameras to image and resolve microscopic objects. However, the performance of different lenses within a batch is affected by hard-to-control environmental variables. We present a cost-effective platform to perform automated batch characterization of 300 lens units simultaneously for quality inspection. The system was designed and configured with LEGO bricks, 3D printed parts, and a digital camera. The scheme presented here may become the basis of a high-throughput, in-line inspection tool for quality control purposes and can also be employed for optimization of the manufacturing process.
Enhanced visual perception through tone mapping
NASA Astrophysics Data System (ADS)
Harrison, Andre; Mullins, Linda L.; Raglin, Adrienne; Etienne-Cummings, Ralph
2016-05-01
Tone mapping operators compress high dynamic range images to improve the picture quality on a digital display when the dynamic range of the display is lower than that of the image. However, tone mapping operators have been largely designed and evaluated based on the aesthetic quality of the resulting displayed image or how perceptually similar the compressed image appears relative to the original scene. They also often require per image tuning of parameters depending on the content of the image. In military operations, however, the amount of information that can be perceived is more important than the aesthetic quality of the image and any parameter adjustment needs to be as automated as possible regardless of the content of the image. We have conducted two studies to evaluate the perceivable detail of a set of tone mapping algorithms, and we apply our findings to develop and test an automated tone mapping algorithm that demonstrates a consistent improvement in the amount of perceived detail. An automated, and thereby predictable, tone mapping method enables a consistent presentation of perceivable features, can reduce the bandwidth required to transmit the imagery, and can improve the accessibility of the data by reducing the needed expertise of the analyst(s) viewing the imagery.
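One widely used global operator of the kind such studies evaluate is Reinhard's photographic operator; a minimal numpy sketch follows. This is the standard textbook formulation, not the automated algorithm developed in the paper:

```python
import numpy as np

def reinhard_tonemap(lum, key=0.18, eps=1e-6):
    """Global Reinhard tone mapping of an HDR luminance map to [0, 1]."""
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # log-average luminance
    scaled = key * lum / log_avg                   # map scene to middle-grey "key"
    l_white = scaled.max()                         # smallest luminance mapped to 1
    return scaled * (1 + scaled / l_white**2) / (1 + scaled)

# Synthetic HDR luminance spanning many orders of magnitude.
hdr = np.exp(np.random.default_rng(2).uniform(-4, 8, size=(64, 64)))
ldr = reinhard_tonemap(hdr)
print(ldr.min(), ldr.max())
```

The `key` parameter is exactly the sort of per-image tuning knob the abstract criticizes; an automated operator must set it (and its analogues) from image statistics rather than by hand.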
Framework for Automated GD&T Inspection Using 3D Scanner
NASA Astrophysics Data System (ADS)
Pathak, Vimal Kumar; Singh, Amit Kumar; Sivadasan, M.; Singh, N. K.
2018-04-01
Geometric Dimensioning and Tolerancing (GD&T) is a standard symbolic language that helps designers, production personnel and quality monitors convey design specifications in an effective and efficient manner. GD&T has been practiced since the start of machine component assembly, but without explicitly naming it; in recent times, however, industries have started placing increasing emphasis on it. One prominent area where most industries struggle is quality inspection, a process that remains largely human intensive. The use of conventional gauges and templates for inspection depends highly on the skill of workers and quality inspectors. In industries, the concept of 3D scanning is not new, but it is used mainly for creating 3D drawings or models of physical parts; its potential as a powerful inspection tool is hardly explored. This study is centred on designing a procedure for automated inspection using a 3D scanner. Linear, geometric, and dimensional inspection of a stepped bar, one of the most common test artefacts, was also carried out as a simple example under the new framework. The new generation of engineering industries would welcome this automated inspection procedure, being quick and reliable with reduced human intervention.
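A single check from such a pipeline can be sketched generically: fit a least-squares plane to scanned points via SVD and compare the residual band to a flatness tolerance. The point cloud and tolerance below are synthetic:

```python
import numpy as np

def flatness(points):
    """Distance band (max - min residual) of points about their best-fit plane."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    normal = np.linalg.svd(centered)[2][-1]
    resid = centered @ normal
    return resid.max() - resid.min()

rng = np.random.default_rng(3)
xy = rng.uniform(0, 50, size=(500, 2))                   # mm
z = 0.002 * xy[:, 0] + rng.normal(0, 0.01, size=500)     # slight tilt + noise
cloud = np.column_stack([xy, z])

tol = 0.1  # mm, hypothetical flatness tolerance from the drawing
f = flatness(cloud)
print(f"flatness = {f:.3f} mm -> {'PASS' if f <= tol else 'FAIL'}")
```

Because the plane fit removes the tilt before measuring the residual band, the check matches the GD&T definition of flatness, which is independent of part orientation on the scanner.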
ATALARS Operational Requirements: Automated Tactical Aircraft Launch and Recovery System
DOT National Transportation Integrated Search
1988-04-01
The Automated Tactical Aircraft Launch and Recovery System (ATALARS) is a fully automated air traffic management system intended for the military service but is also fully compatible with civil air traffic control systems. This report documents a fir...
DOT National Transportation Integrated Search
1997-05-01
This report documents and evaluates an advanced Paratransit system demonstration project. The Santa Clara Valley Transportation Agency (SCVTA), via OUTREACH, implemented such a system, comprised of an automated trip scheduling system (ATSS) and autom...
Comparison of Actual Costs to Integrate Commercial Buildings with the Grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piette, Mary Ann; Black, Doug; Yin, Rongxin
During the past decade, the technology to automate demand response (DR) in buildings and industrial facilities has advanced significantly. Automation allows rapid, repeatable, reliable operation. This study focuses on costs for DR automation in commercial buildings with some discussion on residential buildings and industrial facilities. DR automation technology relies on numerous components, including communication systems, hardware and software gateways, standards-based messaging protocols, controls and integration platforms, and measurement and telemetry systems. This paper discusses the impact factors that contribute to the costs of automated DR systems, with a focus on OpenADR 1.0 and 2.0 systems. In addition, this report compares cost data from several DR automation programs and pilot projects, evaluates trends in the cost per unit of DR and kilowatts (kW) available from automated systems, and applies a standard naming convention and classification or taxonomy for system elements. In summary, median costs for the 56 installed automated DR systems studied here are about $200/kW. The deviation around this median is large, with costs in some cases being an order of magnitude greater or less than the median. Costs to automate fast DR systems for ancillary services are not fully analyzed in this report because additional research is needed to determine the total such costs.
Patient Scenarios Illustrating Benefits of Automation in DoD Medical Treatment Facilities.
1981-10-23
This report outlines the difference that automation may make in patient encounters within the military health care system. Two ... as part of a task to characterize the benefit set of automation in DoD medical treatment facilities. (Arthur D. Little, Inc., Cambridge, MA.)
Report of a workshop on human-automation interaction in NGATS
DOT National Transportation Integrated Search
2006-10-01
This report reviews the findings of a workshop held in Arlington, VA may 10 and 11, 2006 to consider needs for research on human-automation interaction to support NASA/FAA Joint Planning and Development Office. Participants included representatives f...
Integrated data systems : a summary report.
DOT National Transportation Integrated Search
1975-01-01
The purpose of this report is to provide a general outline of the automated data systems work under way at the Research Council. Included is a discussion of file contents, automated procedures, and outputs provided. In addition, a time schedule for i...
Automated Protist Analysis of Complex Samples: Recent Investigations Using Motion and Thresholding
2012-01-01
Report No. CG-D-15-13, January 2012. Distribution Statement A: Approved for public release; distribution is unlimited. (U.S. Coast Guard Research and Development Center; B. Nelson, et al.)
NASA Technical Reports Server (NTRS)
Nunamaker, Robert R.; Willshire, Kelli F.
1988-01-01
The reports of a committee established by Congress to identify specific systems of the Space Station which would advance automation and robotics technologies are reviewed. The history of the committee, its relation to NASA, and the reports which it has released are discussed. The committee's reports recommend the widespread use of automation and robotics for the Space Station, a program for technology development and transfer between industries and research and development communities, and the planned use of robots to service and repair satellites and their payloads which are accessible from the Space Station.
Update on Development of Mesh Generation Algorithms in MeshKit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Rajeev; Vanderzee, Evan; Mahadevan, Vijay
2015-09-30
MeshKit uses a graph-based design for coding all its meshing algorithms, which includes the Reactor Geometry (and mesh) Generation (RGG) algorithms. This report highlights the developmental updates of all the algorithms, results and future work. Parallel versions of algorithms, documentation and performance results are reported. The RGG GUI design was updated to incorporate new features requested by the users; boundary layer generation and parallel RGG support were added to the GUI. Key contributions to the release, upgrade and maintenance of other SIGMA libraries (CGM and MOAB) were made. Several fundamental meshing algorithms for creating a robust parallel meshing pipeline in MeshKit are under development. Results and current status of automated, open-source and high quality nuclear reactor assembly mesh generation algorithms such as trimesher, quadmesher, interval matching and multi-sweeper are reported.
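The graph-based design mentioned above can be illustrated generically: meshing operations form a dependency graph and execute in topological order. This sketch uses Python's standard-library graphlib and invented operation names, not MeshKit's actual API:

```python
from graphlib import TopologicalSorter

# Hypothetical meshing pipeline: each operation lists its prerequisites.
pipeline = {
    "load_geometry": set(),
    "assign_intervals": {"load_geometry"},
    "surface_mesh": {"assign_intervals"},
    "sweep_volume": {"surface_mesh"},
    "quality_check": {"sweep_volume"},
}

# Execute operations only after everything they depend on has run.
for op in TopologicalSorter(pipeline).static_order():
    print(f"executing: {op}")
```

A graph representation makes it straightforward to add, swap, or parallelize independent stages, which is one motivation for organizing meshing algorithms this way.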
AI (artificial intelligence) in histopathology--from image analysis to automated diagnosis.
Kayser, Klaus; Görtler, Jürgen; Bogovac, Milica; Bogovac, Aleksandar; Goldmann, Torsten; Vollmer, Ekkehard; Kayser, Gian
2009-01-01
The technological progress in digitalization of complete histological glass slides has opened a new door in tissue-based diagnosis. The presentation of microscopic images as a whole in a digital matrix is called a virtual slide. A virtual slide allows calculation and related presentation of image information that otherwise can only be seen by individual human performance. The digital world permits attachment of several (if not all) fields of view and their simultaneous visualization on a screen. The presentation of all microscopic magnifications is possible if the basic pixel resolution is less than 0.25 microns. Introducing digital tissue-based diagnosis into the daily routine work of a surgical pathologist requires a new setup of workflow arrangement and procedures. The quality of digitized images is sufficient for diagnostic purposes; however, the time needed for viewing virtual slides exceeds that of viewing original glass slides by far. The reason lies in a slower and more difficult sampling procedure, which is the selection of information-containing fields of view. By application of artificial intelligence, tissue-based diagnosis in routine work can be managed automatically in steps as follows: 1. The individual image quality has to be measured, and corrected if necessary. 2. A diagnostic algorithm has to be applied; an algorithm has been developed that includes both object-based (object features, structures) and pixel-based (texture) measures. 3. These measures serve for diagnosis classification and feedback to order additional information, for example in virtual immunohistochemical slides. 4. The measures can serve for automated image classification and detection of relevant image information by themselves, without any labeling. 5. Pathologists' duties will not be relieved by such a system; on the contrary, they will manage and supervise the system, i.e., work at a "higher level". Virtual slides are already in use for teaching and continuing education in anatomy and pathology. First attempts to introduce them into routine work have been reported. Application of AI has been established by automated immunohistochemical measurement systems (EAMUS, www.diagnomX.eu). The performance of automated diagnosis has been reported for a broad variety of organs at sensitivity and specificity levels >85%. The implementation of a complete connected AI-supported system is in its childhood. Application of AI in digital tissue-based diagnosis will allow pathologists to work as supervisors and no longer as primary "water carriers". Its accurate use will give them the time needed to concentrate on difficult cases for the benefit of their patients.
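The pixel-based (texture) measures in step 2 can be illustrated with a minimal gray-level co-occurrence matrix (GLCM) contrast feature, a classic texture descriptor; the implementation below is generic, not the EAMUS algorithm:

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """GLCM contrast for horizontally adjacent pixel pairs."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                      # count co-occurring level pairs
    p = glcm / glcm.sum()                    # normalize to joint probabilities
    i, j = np.indices(p.shape)
    return np.sum(p * (i - j) ** 2)          # weight by squared level difference

rng = np.random.default_rng(4)
smooth = np.tile(np.linspace(0, 255, 64), (64, 1))   # smooth gradient: low contrast
noisy = rng.integers(0, 256, (64, 64))               # high-frequency texture
print(glcm_contrast(smooth), glcm_contrast(noisy))
```

Features of this kind can be computed without any manual labeling, which is what makes the unsupervised classification in step 4 feasible.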
Flexible automated approach for quantitative liquid handling of complex biological samples.
Palandra, Joe; Weller, David; Hudson, Gary; Li, Jeff; Osgood, Sarah; Hudson, Emily; Zhong, Min; Buchholz, Lisa; Cohen, Lucinda H
2007-11-01
A fully automated protein precipitation technique for biological sample preparation has been developed for the quantitation of drugs in various biological matrixes. All liquid handling during sample preparation was automated using a Hamilton MicroLab Star Robotic workstation, which included the preparation of standards and controls from a Watson laboratory information management system generated work list, shaking of 96-well plates, and vacuum application. Processing time is less than 30 s per sample or approximately 45 min per 96-well plate, which is then immediately ready for injection onto an LC-MS/MS system. An overview of the process workflow is discussed, including the software development. Validation data are also provided, including specific liquid class data as well as comparative data of automated vs manual preparation using both quality controls and actual sample data. The efficiencies gained from this automated approach are described.
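The work-list-driven flow can be sketched generically: read a LIMS-exported table and emit per-well steps for a liquid handler. The column names and volumes are invented, and real Hamilton instruments are driven through vendor software rather than code like this:

```python
import csv, io

# Invented LIMS export: one row per sample well.
worklist_csv = """well,sample_id,plasma_ul,acetonitrile_ul
A1,STD-001,50,150
A2,QC-LOW,50,150
A3,SUBJ-1042,50,150
"""

steps = []
for row in csv.DictReader(io.StringIO(worklist_csv)):
    steps.append(("aspirate_plasma", row["well"], int(row["plasma_ul"])))
    steps.append(("add_precipitant", row["well"], int(row["acetonitrile_ul"])))
steps.append(("shake_plate", "all", 60))    # seconds
steps.append(("apply_vacuum", "all", 120))  # seconds

for s in steps:
    print(s)
```

Driving every transfer from the same work list that defines standards and controls is what removes the transcription step, and with it a major source of manual-preparation variability.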
Automated peak picking and peak integration in macromolecular NMR spectra using AUTOPSY.
Koradi, R; Billeter, M; Engeli, M; Güntert, P; Wüthrich, K
1998-12-01
A new approach for automated peak picking of multidimensional protein NMR spectra with strong overlap is introduced, which makes use of the program AUTOPSY (automated peak picking for NMR spectroscopy). The main elements of this program are a novel function for local noise level calculation, the use of symmetry considerations, and the use of lineshapes extracted from well-separated peaks for resolving groups of strongly overlapping peaks. The algorithm generates peak lists with precise chemical shift and integral intensities, and a reliability measure for the recognition of each peak. The results of automated peak picking of NOESY spectra with AUTOPSY were tested in combination with the combined automated NOESY cross peak assignment and structure calculation routine NOAH implemented in the program DYANA. The quality of the resulting structures was found to be comparable with those from corresponding data obtained with manual peak picking. Copyright 1998 Academic Press.
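A local noise level of the kind AUTOPSY relies on can be approximated with a windowed median absolute deviation, which then sets a per-region peak-picking threshold. The 1D trace is synthetic and this is not the AUTOPSY algorithm itself:

```python
import numpy as np

def local_noise(x, window=64):
    """Per-point noise level from the MAD in non-overlapping windows."""
    noise = np.empty_like(x)
    for start in range(0, len(x), window):
        seg = x[start:start + window]
        mad = np.median(np.abs(seg - np.median(seg)))
        noise[start:start + window] = 1.4826 * mad  # MAD -> sigma for Gaussian noise
    return noise

rng = np.random.default_rng(5)
trace = rng.normal(0, 1, 1024)
trace[512:] *= 4                        # noisier second half of the spectrum
trace[100] += 40                        # genuine peak in the quiet region
trace[700] += 40                        # genuine peak in the noisy region

peaks = np.flatnonzero(trace > 5 * local_noise(trace))
print(peaks)  # expected: both injected peaks, despite the varying noise floor
```

A single global threshold would either drown the quiet region's peaks in the noisy region's floor or flood the output with false picks; a local estimate avoids both failure modes.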
Automated X-ray image analysis for cargo security: Critical review and future promise.
Rogers, Thomas W; Jaccard, Nicolas; Morton, Edward J; Griffin, Lewis D
2017-01-01
We review the relatively immature field of automated image analysis for X-ray cargo imagery. There is increasing demand for automated analysis methods that can assist in the inspection and selection of containers, due to the ever-growing volumes of traded cargo and the increasing concerns that customs- and security-related threats are being smuggled across borders by organised crime and terrorist networks. We split the field into the classical pipeline of image preprocessing and image understanding. Preprocessing includes: image manipulation; quality improvement; Threat Image Projection (TIP); and material discrimination and segmentation. Image understanding includes: Automated Threat Detection (ATD); and Automated Contents Verification (ACV). We identify several gaps in the literature that need to be addressed and propose ideas for future research. Where the current literature is sparse we borrow from the single-view, multi-view, and CT X-ray baggage domains, which have some characteristics in common with X-ray cargo.
Gorgulho, Bartira Mendes; Fisberg, Regina Mara; Marchioni, Dirce Maria Lobo
2013-08-01
The objective of the study is to evaluate the nutritional quality of meals consumed away from home and its association with overall diet quality. Data was obtained from 834 participants of a Health Survey in São Paulo, Brazil. Food intake was measured by a 24-hour dietary recall applied telephonically using the Automated Multiple-Pass Method. Overall dietary quality was assessed by the Brazilian Healthy Eating Index Revised (B-HEIR) and the Meal Quality Index (MQI) was used to evaluate dietary quality of the main meals. The association between the B-HEIR and the MQI was assessed by linear regression analysis. The consumption of at least one of the three main meals away from home was reported for 32% of respondents (70 adolescents, 156 adults and 40 elderly). The average MQI score of lunch consumed away from home was lower than lunch consumed at home, with higher amounts of total and saturated fats. The average score of B-HEIR was 58 points and was associated with the MQI score, energy, meal consumption location and gender. Lunch consumed away from home presented the worst quality, being higher in total and saturated fat. However, the meals consumed at home also need improvement. Copyright © 2013 Elsevier Inc. All rights reserved.
A Validation of Remotely Sensed Fires Using Ground Reports
NASA Astrophysics Data System (ADS)
Ruminski, M. G.; Hanna, J.
2007-12-01
A satellite-based analysis of fire detections and smoke emissions for North America is produced daily by NOAA/NESDIS. The analysis incorporates data from the MODIS (Terra and Aqua) and AVHRR (NOAA-15/16/17) polar orbiting instruments and GOES East and West geostationary spacecraft, with nominal resolutions of 1 km and 4 km for the polar and geostationary platforms, respectively. Automated fire detection algorithms are utilized for each of the sensors. Analysts perform a quality control procedure on the automated detects by deleting points that are deemed to be false detects and adding points that the algorithms did not detect. A limited validation of the final quality-controlled product was performed using high resolution (30 m) ASTER data in the summer of 2006. Some limitations in using ASTER data are that each scene is only approximately 3600 square km, the data acquisition time is relatively constant at around 1030 local solar time, and ASTER is another remotely sensed data source. This study expands on the ASTER validation by using ground reports of prescribed burns in Montana and Idaho for 2003 and 2004, providing a non-remote-sensing data source for comparison. While the ground data do not have the limitations noted above for ASTER, there are still limitations. For example, even though the data set covers a much larger area (nearly 600,000 square km) than even several ASTER scenes, it still represents a single region of North America. And while the ground data are not restricted to a narrow time window, only a date is provided with each report, limiting the ability to make detailed conclusions about the detection capabilities of specific instruments, especially for the less temporally frequent polar orbiting MODIS and AVHRR sensors. Comparison of the ground data reports to the quality-controlled fire analysis revealed a low overall detection rate of 23.0% over the entire study period. Examination of the daily detection rates revealed a wide variation, with some days resulting in as few as 5 detections out of 107 reported fires while other days had as many as 84 detections out of 160 reports. Inspection of the satellite imagery from the days with very low detection rates revealed that extensive cloud cover prohibited satellite fire detection. On days when cloud cover was at a minimum, detection rates were substantially higher. An estimate of fire size was also provided with the ground data set. Statistics will be presented for days with minimal cloud cover, indicating the probability of detection for fires of various sizes.
Automated Video Quality Assessment for Deep-Sea Video
NASA Astrophysics Data System (ADS)
Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.
2015-12-01
Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): the rate of absorption of light in water varies by wavelength and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating these effects. These steps include filtering out unusable data, color and luminance balancing, and choosing the most appropriate image descriptors. We apply these techniques to generate automated quality assessment of video data and illustrate their utility with an example application where we perform vision-based substrate classification.
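A first-pass screening step like the "filtering out unusable data" described above can be sketched with global frame statistics, flagging frames that are too dark, too low-contrast, or dominated by bright particulate speckle. The thresholds are invented placeholders:

```python
import numpy as np

def frame_usable(gray, min_mean=0.05, min_contrast=0.02, max_speckle=0.10):
    """Crude usability test on a grayscale frame with values in [0, 1]."""
    mean = gray.mean()
    contrast = gray.std()
    # Fraction of pixels far brighter than the frame average (marine-snow proxy).
    speckle = (gray > mean + 4 * contrast).mean()
    return mean >= min_mean and contrast >= min_contrast and speckle <= max_speckle

rng = np.random.default_rng(6)
dark = rng.uniform(0, 0.02, (480, 640))                     # lights off
ok = np.clip(rng.normal(0.4, 0.1, (480, 640)), 0, 1)        # well-lit scene
print(frame_usable(dark), frame_usable(ok))                 # False True
```

Cheap whole-frame statistics like these let a pipeline discard obviously unusable footage before spending compute on color correction and descriptor extraction.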
Effective Materials Property Information Management for the 21st Century
NASA Technical Reports Server (NTRS)
Ren, Weiju; Cebon, David; Arnold, Steve
2009-01-01
This paper discusses key principles for the development of materials property information management software systems. There are growing needs for automated materials information management in various organizations. In part these are fueled by the demands for higher efficiency in material testing, product design and engineering analysis. But equally important, organizations are being driven by the need for consistency, quality and traceability of data, as well as control of access to sensitive information such as proprietary data. Further, the use of increasingly sophisticated nonlinear, anisotropic and multi-scale engineering analyses requires both processing of large volumes of test data for development of constitutive models and complex materials data input for Computer-Aided Engineering (CAE) software. And finally, the globalization of the economy often generates great needs for sharing a single "gold source" of materials information between members of global engineering teams in extended supply chains. Fortunately, material property management systems have kept pace with growing user demands and evolved into versatile data management systems that can be customized to specific user needs. The more sophisticated of these provide facilities for: (i) data management functions such as access, version, and quality controls; (ii) a wide range of data import, export and analysis capabilities; (iii) data "pedigree" traceability mechanisms; (iv) data searching, reporting and viewing tools; and (v) access to the information via a wide range of interfaces. In this paper the important requirements for advanced material data management systems, future challenges and opportunities such as automated error checking, data quality characterization, identification of gaps in datasets, as well as functionalities and business models to fuel database growth and maintenance are discussed.
Arab, Lenore; Hahn, Harry; Henry, Judith; Chacko, Sara; Winter, Ashley; Cambou, Mary C
2010-03-01
Screening and tracking subjects and data management in clinical trials require significant investments in manpower that can be reduced through the use of web-based systems. To support a validation trial of various dietary assessment tools that required multiple clinic visits and eight repeats of online assessments, we developed an interactive web-based system to automate all levels of management of a biomarker-based clinical trial. The "Energetics System" was developed to support 1) the work of the study coordinator in recruiting, screening and tracking subject flow, 2) the need of the principal investigator to review study progress, and 3) continuous data analysis. The system was designed to automate web-based self-screening into the trial. It supported scheduling tasks and triggered tailored messaging for late and non-responders. For the investigators, it provided real-time status overviews on all subjects, created electronic case reports, supported data queries and prepared analytic data files. Encryption and multi-level password protection were used to ensure data privacy. The system was programmed iteratively and required six months of a web programmer's time along with active team engagement. In this study, the enhancement in speed and efficiency of recruitment and quality of data collection as a result of this system outweighed the initial investment. Web-based systems have the potential to streamline the process of recruitment and day-to-day management of clinical trials in addition to improving efficiency and quality. Because of their added value, they should be considered for trials of moderate size or complexity. Copyright 2009 Elsevier Inc. All rights reserved.
Network-based production quality control
NASA Astrophysics Data System (ADS)
Kwon, Yongjin; Tseng, Bill; Chiou, Richard
2007-09-01
This study investigates the feasibility of remote quality control using a host of advanced automation equipment with Internet accessibility. Recent emphasis on product quality and reduction of waste stems from the dynamic, globalized and customer-driven market, which brings opportunities and threats to companies, depending on their response speed and production strategies. The current trends in industry also include a wide spread of distributed manufacturing systems, where design, production, and management facilities are geographically dispersed. This situation mandates not only accessibility to remotely located production equipment for monitoring and control, but also efficient means of responding to a changing environment to counter process variations and diverse customer demands. To compete under such an environment, companies are striving to achieve 100%, sensor-based, automated inspection for zero-defect manufacturing. In this study, the Internet-based quality control scheme is referred to as "E-Quality for Manufacturing" or "EQM" for short. By definition, EQM refers to a holistic approach to designing and embedding efficient quality control functions in the context of network-integrated manufacturing systems. Such a system lets designers located far away from the production facility monitor, control and adjust the quality inspection processes as the production design evolves.
Automated cockpits special report, part 1.
1995-01-30
Part one of this report includes the following articles: Accidents Direct Focus on Cockpit Automation; Modern Cockpit Complexity Challenges Pilot Interfaces; Airbus Seeks to Keep Pilot, New Technology in Harmony; NTSB: Mode Confusion Poses Safety Threat; and Certification Officials Grapple with Flight Deck Complexity.
Information Tailoring Enhancements for Large-Scale Social Data
2016-06-15
Intelligent Automation Incorporated, Progress Report No. 3: Information Tailoring Enhancements for Large-Scale Social Data. Work performed within this reporting period includes enhanced Named Entity Recognition (NER).
Alexander, Crispin G.; Wanner, Randy; Johnson, Christopher M.; Breitsprecher, Dennis; Winter, Gerhard; Duhr, Stefan; Baaske, Philipp; Ferguson, Neil
2014-01-01
Chemical denaturant titrations can be used to accurately determine protein stability. However, data acquisition is typically labour intensive, has low throughput and is difficult to automate. These factors, combined with high protein consumption, have limited the adoption of chemical denaturant titrations in commercial settings. Thermal denaturation assays can be automated, sometimes with very high throughput. However, thermal denaturation assays are incompatible with proteins that aggregate at high temperatures and large extrapolation of stability parameters to physiological temperatures can introduce significant uncertainties. We used capillary-based instruments to measure chemical denaturant titrations by intrinsic fluorescence and microscale thermophoresis. This allowed higher throughput, consumed several hundred-fold less protein than conventional, cuvette-based methods yet maintained the high quality of the conventional approaches. We also established efficient strategies for automated, direct determination of protein stability at a range of temperatures via chemical denaturation, which has utility for characterising stability for proteins that are difficult to purify in high yield. This approach may also have merit for proteins that irreversibly denature or aggregate in classical thermal denaturation assays. We also developed procedures for affinity ranking of protein–ligand interactions from ligand-induced changes in chemical denaturation data, and proved the principle for this by correctly ranking the affinity of previously unreported peptide–PDZ domain interactions. The increased throughput, automation and low protein consumption of protein stability determinations afforded by using capillary-based methods to measure denaturant titrations, can help to revolutionise protein research. We believe that the strategies reported are likely to find wide applications in academia, biotherapeutic formulation and drug discovery programmes. PMID:25262836
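The abstract does not give the fitting equations, but two-state chemical denaturation data are conventionally analyzed with the linear extrapolation model, in which ΔG(D) = ΔG(H2O) − m·[D] and the observed signal is a population-weighted average of the native and unfolded baselines. A sketch of such a fit with scipy follows; the data and starting values are synthetic, not from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

R, T = 8.314e-3, 298.15  # kJ/(mol*K), K

def two_state(D, dG_h2o, m, yN, yU):
    """Observed signal vs denaturant concentration D for a two-state unfolder.

    Linear extrapolation model: dG(D) = dG_h2o - m*D, giving a sigmoidal
    unfolded fraction fU in D.
    """
    K = np.exp(-(dG_h2o - m * D) / (R * T))  # unfolding equilibrium constant
    fU = K / (1.0 + K)
    return yN + (yU - yN) * fU

# Hypothetical titration: denaturant (M) vs normalized fluorescence signal.
D = np.linspace(0, 8, 25)
y = two_state(D, 20.0, 5.0, 1.0, 0.2) + 0.01 * np.random.randn(D.size)

popt, _ = curve_fit(two_state, D, y, p0=[15.0, 4.0, 1.0, 0.2])
dG_h2o, m = popt[0], popt[1]
print(f"dG(H2O) = {dG_h2o:.1f} kJ/mol, m = {m:.1f} kJ/mol/M, Cm = {dG_h2o/m:.2f} M")
```

The midpoint Cm = ΔG(H2O)/m is the quantity most directly constrained by the data; extrapolation to zero denaturant carries the larger uncertainty, which is one reason the isothermal chemical approach is attractive compared with thermal extrapolation.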
Ti, Lian Kah; Ang, Sophia Bee Leng; Saw, Sharon; Sethi, Sunil Kumar; Yip, James W L
2012-08-01
Timely reporting and acknowledgement are crucial steps in critical laboratory results (CLR) management. The authors previously showed that an automated pathway incorporating short messaging system (SMS) texts, auto-escalation, and manual telephone back-up improved the rate and speed of physician acknowledgement compared with manual telephone calling alone. This study investigated whether it also improved the rate and speed of physician intervention in response to CLR and whether utilising the manual back-up affected intervention rates. Data from seven audits between November 2007 and January 2011 were analysed. These audits were carried out to assess the robustness of the CLR reporting process in the authors' institution. Comparisons were made in the rate and speed of acknowledgement and intervention between the audits performed before and after automation. Using the automation audits, the authors compared intervention data between communication with SMS only and when manual intervention was required. 1680 CLR were reported during the audit periods. Automation improved the rate (100% vs 84.2%; p<0.001) and speed (median 12 min vs 23 min; p<0.001) of CLR acknowledgement. It also improved the rate (93.7% vs 84.0%, p<0.001) and speed (median 21 min vs 109 min; p<0.001) of CLR intervention. From the automation audits, the use of SMS only did not improve physician intervention rates. The automated communication pathway improved physician intervention rate and time in tandem with improved acknowledgement rate and time when compared with manual telephone calling. The use of manual intervention to augment automation did not adversely affect physician intervention rate, implying that an end-to-end pathway was more important than automation alone.
Defense Agencies Initiative Increment 2 (DAI Inc 2)
2016-03-01
2016 Major Automated Information System (MAIS) Annual Report for the Defense Agencies Initiative Increment 2 (DAI Inc 2), prepared under Defense Acquisition Management reporting. The program encompasses management systems supporting diverse operational functions and the warfighter in decision making and financial reporting.
Analysis of Content Shared in Online Cancer Communities: Systematic Review
van de Poll-Franse, Lonneke V; Krahmer, Emiel; Verberne, Suzan; Mols, Floortje
2018-01-01
Background: The content that cancer patients and their relatives (ie, posters) share in online cancer communities has been researched in various ways. In the past decade, researchers have used automated analysis methods in addition to manual coding methods. Patients, providers, researchers, and health care professionals can learn from experienced patients, provided that their experience is findable. Objective: The aim of this study was to systematically review all relevant literature that analyzes user-generated content shared within online cancer communities. We reviewed the quality of available research and the kind of content that posters share with each other on the internet. Methods: A computerized literature search was performed via PubMed (MEDLINE), PsycINFO (5 and 4 stars), Cochrane Central Register of Controlled Trials, and ScienceDirect. The last search was conducted in July 2017. Papers were selected if they included the following terms: (cancer patient) and (support group or health communities) and (online or internet). We selected 27 papers and then subjected them to a 14-item quality checklist independently scored by 2 investigators. Results: The methodological quality of the selected studies varied: 16 were of high quality and 11 were of adequate quality. Of those 27 studies, 15 used manual coding, 7 used automated methods, and 5 used a combination of both. The best results can be seen in the papers that combined both analytical methods. The number of analyzed posts ranged from 200 to 1,500,000; the number of analyzed posters ranged from 75 to 90,000. The studies analyzing large numbers of posts mainly related to breast cancer, whereas those analyzing small numbers were related to other types of cancers. A total of 12 studies involved partly or fully automated analysis of the user-generated content. All the authors referred to two main content categories: informational support and emotional support. In all, 15 studies reported only on the content, 6 studies explicitly reported on content and social aspects, and 6 studies focused on emotional changes. Conclusions: In the future, increasing amounts of user-generated content will become available on the internet. The results of content analysis, especially of the larger studies, give detailed insights into patients' concerns and worries, which can then be used to improve cancer care. To make the results of such analyses as usable as possible, automatic content analysis methods will need to be improved through interdisciplinary collaboration. PMID:29615384
Miroshnichenko, Iu V; Umarov, S Z
2012-12-01
One way to increase the effectiveness and safety of patients' medication supply is the use of automated distribution systems. These systems substantially increase the efficiency and safety of medication supply, achieve significant savings of material and financial resources spent on medication assistance, and make it possible to systematically improve its accessibility and quality.
Harvester-based sensing system for cotton fiber-quality mapping
USDA-ARS?s Scientific Manuscript database
Precision agriculture in cotton production attempts to maximize profitability by exploiting information on field spatial variability to optimize the fiber yield and quality. For precision agriculture to be economically viable, collection of spatial variability data within a field must be automated a...
Kirkpatrick, Sharon I; Subar, Amy F; Douglass, Deirdre; Zimmerman, Thea P; Thompson, Frances E; Kahle, Lisa L; George, Stephanie M; Dodd, Kevin W; Potischman, Nancy
2014-07-01
The Automated Self-Administered 24-hour Recall (ASA24), a freely available Web-based tool, was developed to enhance the feasibility of collecting high-quality dietary intake data from large samples. The purpose of this study was to assess the criterion validity of ASA24 through a feeding study in which the true intake for 3 meals was known. True intake and plate waste from 3 meals were ascertained for 81 adults by inconspicuously weighing foods and beverages offered at a buffet before and after each participant served him- or herself. Participants were randomly assigned to complete an ASA24 or an interviewer-administered Automated Multiple-Pass Method (AMPM) recall the following day. With the use of linear and Poisson regression analysis, we examined the associations between recall mode and 1) the proportions of items consumed for which a match was reported and that were excluded, 2) the number of intrusions (items reported but not consumed), and 3) differences between energy, nutrient, food group, and portion size estimates based on true and reported intakes. Respondents completing ASA24 reported 80% of items truly consumed compared with 83% in AMPM (P = 0.07). For both ASA24 and AMPM, additions to or ingredients in multicomponent foods and drinks were more frequently omitted than were main foods or drinks. The number of intrusions was higher in ASA24 (P < 0.01). Little evidence of differences by recall mode was found in the gap between true and reported energy, nutrient, and food group intakes or portion sizes. Although the interviewer-administered AMPM performed somewhat better relative to true intakes for matches, exclusions, and intrusions, ASA24 performed well. Given the substantial cost savings that ASA24 offers, it has the potential to make important contributions to research aimed at describing the diets of populations, assessing the effect of interventions on diet, and elucidating diet and health relations. This trial was registered at clinicaltrials.gov as NCT00978406. © 2014 American Society for Nutrition.
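The match/exclusion/intrusion metrics used in this validation reduce to simple set comparisons between the weighed (true) intake and the recalled items. A toy sketch with hypothetical food lists:

```python
def recall_metrics(true_items, reported_items):
    """Compare a dietary recall against known (weighed) intake.

    matches:    truly consumed items that were reported
    exclusions: truly consumed items omitted from the recall
    intrusions: reported items that were not actually consumed
    """
    true_set, reported_set = set(true_items), set(reported_items)
    matches = true_set & reported_set
    exclusions = true_set - reported_set
    intrusions = reported_set - true_set
    return {
        "match_rate": len(matches) / len(true_set),
        "exclusion_rate": len(exclusions) / len(true_set),
        "n_intrusions": len(intrusions),
    }

true_intake = ["chicken", "rice", "salad", "dressing", "cola"]
asa24_report = ["chicken", "rice", "salad", "coffee"]
print(recall_metrics(true_intake, asa24_report))
# {'match_rate': 0.6, 'exclusion_rate': 0.4, 'n_intrusions': 1}
```

Note how the omitted "dressing" illustrates the study's finding that additions and ingredients of multicomponent foods are the items most frequently excluded.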
ERIC Educational Resources Information Center
Zhang, Mo
2013-01-01
Many testing programs use automated scoring to grade essays. One issue in automated essay scoring that has not been examined adequately is population invariance and its causes. The primary purpose of this study was to investigate the impact of sampling in model calibration on population invariance of automated scores. This study analyzed scores…
Quantity is nothing without quality: automated QA/QC for streaming sensor networks
John L. Campbell; Lindsey E. Rustad; John H. Porter; Jeffrey R. Taylor; Ethan W. Dereszynski; James B. Shanley; Corinna Gries; Donald L. Henshaw; Mary E. Martin; Wade. M. Sheldon; Emery R. Boose
2013-01-01
Sensor networks are revolutionizing environmental monitoring by producing massive quantities of data that are being made publically available in near real time. These data streams pose a challenge for ecologists because traditional approaches to quality assurance and quality control are no longer practical when confronted with the size of these data sets and the...
The rate of cis-trans conformation errors is increasing in low-resolution crystal structures.
Croll, Tristan Ian
2015-03-01
Cis-peptide bonds (with the exception of X-Pro) are exceedingly rare in native protein structures, yet a check for these is not currently included in the standard workflow for some common crystallography packages nor in the automated quality checks that are applied during submission to the Protein Data Bank. This appears to be leading to a growing rate of inclusion of spurious cis-peptide bonds in low-resolution structures both in absolute terms and as a fraction of solved residues. Most concerningly, it is possible for structures to contain very large numbers (>1%) of spurious cis-peptide bonds while still achieving excellent quality reports from MolProbity, leading to concerns that ignoring such errors is allowing software to overfit maps without producing telltale errors in, for example, the Ramachandran plot.
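A check of this kind is straightforward to script against deposited coordinates. The sketch below flags putative cis-peptide bonds from the omega dihedral (CA-C-N'-CA') using Biopython; the 30-degree cutoff and the 1.7 Å chain-break heuristic are illustrative choices, not the paper's method.

```python
import math
from Bio.PDB import PDBParser
from Bio.PDB.vectors import calc_dihedral

def find_cis_peptides(pdb_path, cutoff_deg=30.0):
    """Flag peptide bonds whose omega dihedral is near 0 degrees (cis).

    Omega is the CA(i)-C(i)-N(i+1)-CA(i+1) dihedral; trans bonds sit
    near 180 degrees, cis bonds near 0.
    """
    structure = PDBParser(QUIET=True).get_structure("s", pdb_path)
    hits = []
    for chain in structure[0]:
        residues = [r for r in chain if "CA" in r and "C" in r and "N" in r]
        for prev, curr in zip(residues, residues[1:]):
            # Skip chain breaks: real peptide C-N bonds are ~1.33 A long.
            if prev["C"] - curr["N"] > 1.7:
                continue
            omega = math.degrees(calc_dihedral(
                prev["CA"].get_vector(), prev["C"].get_vector(),
                curr["N"].get_vector(), curr["CA"].get_vector()))
            if abs(omega) < cutoff_deg:
                hits.append((chain.id, curr.id[1], curr.get_resname(), omega))
    return hits

# Non-proline entries returned here would warrant manual inspection.
```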
Robotics/Automated Systems Technicians.
ERIC Educational Resources Information Center
Doty, Charles R.
Major resources exist that can be used to develop or upgrade programs in community colleges and technical institutes that educate robotics/automated systems technicians. The first category of resources is Economic, Social, and Education Issues. The Office of Technology Assessment (OTA) report, "Automation and the Workplace," presents analyses of…
Merrill, J; Phillips, A; Keeling, J; Kaushal, R; Senathirajah, Y
2013-01-01
Among the expected benefits of electronic health records (EHRs) is increased reporting of public health information, such as immunization status. State and local immunization registries aid control of vaccine-preventable diseases and help offset fragmentation in healthcare, but reporting is often slow and incomplete. The Primary Care Information Project (PCIP), an initiative of the NYC Department of Health and Mental Hygiene, has implemented EHRs with immunization reporting capability in community settings. To evaluate the effect of automated reporting via an EHR on use and efficiency of reporting to the NY Citywide Immunization Registry, we conducted a secondary analysis of 1.7 million de-identified records submitted between January 2007 and June 2011 by 217 primary care practices enrolled in PCIP, pre and post launch of automated reporting via an EHR. We examined differences in records submitted per day, lag time, and documentation of eligibility for subsidized vaccines. Mean submissions per day did not change. Automated submissions of new and historical records increased by 18% and 98%, respectively. Submissions within 14 days increased from 84% to 87%, and submissions within 2 days increased from 60% to 77%. Median lag time decreased from 13 to 10 days. Documentation of eligibility decreased. Results are significant at p<0.001. Significant improvements in registry use and efficiency of reporting were found after the launch of automated reporting via an EHR. A decrease in eligibility documentation was attributed to EHR workflow. The limitations to comprehensive evaluation found in these data, which were extracted from a registry initiated prior to widespread EHR implementation, suggest that reliable evaluation of immunization reporting via the EHR may require modifications to legacy registry databases.
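The lag-time statistics reported here (median lag, share of records submitted within 2 and 14 days) follow directly from administration and submission timestamps. A small pandas sketch with hypothetical records:

```python
import pandas as pd

# Hypothetical registry extract: one row per immunization record.
records = pd.DataFrame({
    "administered": pd.to_datetime(["2010-01-04", "2010-01-04", "2010-02-10"]),
    "submitted":    pd.to_datetime(["2010-01-06", "2010-01-20", "2010-02-12"]),
})

lag = (records["submitted"] - records["administered"]).dt.days
print(f"median lag: {lag.median():.0f} days")
print(f"within 2 days:  {(lag <= 2).mean():.0%}")
print(f"within 14 days: {(lag <= 14).mean():.0%}")
```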
Cockpit Automation Technology CSERIAC-CAT
1991-06-01
This final report (AL-TR-1991-0078, by Trudy S. Abrams and Cindy D. Martin) covers the Cockpit Automation Technology (CAT) CSERIAC task from July 1989 through December 1990. The task supported Boeing-developed CAT software tools and facilitated their use by the cockpit design community. A brief description of the overall task is given.
Design, Construction, Demonstration and Delivery of an Automated Narrow Gap Welding System.
1983-03-31
Phase 3 report on the design, construction, demonstration, and delivery of an automated Narrow Gap welding system (CRC Automatic Welding Co., Houston, TX; CRC Report No. NAV A/W 70, 31 March 1983). Using the combinational gas shielding assembly evaluated on the Narrow Gap welding system, it is now possible to reduce the gas flow rates.
Granato, G.E.; Smith, K.P.
1999-01-01
Robowell is an automated process for monitoring selected ground water quality properties and constituents by pumping a well or multilevel sampler. Robowell was developed and tested to provide a cost-effective monitoring system that meets protocols expected for manual sampling. The process uses commercially available electronics, instrumentation, and hardware, so it can be configured to monitor ground water quality using the equipment, purge protocol, and monitoring well design most appropriate for the monitoring site and the contaminants of interest. A Robowell prototype was installed on a sewage treatment plant infiltration bed that overlies a well-studied unconfined sand and gravel aquifer at the Massachusetts Military Reservation, Cape Cod, Massachusetts, during a time when two distinct plumes of constituents were released. The prototype was operated from May 10 to November 13, 1996, and quality-assurance/quality-control measurements demonstrated that the data obtained by the automated method were equivalent to data obtained by manual sampling methods using the same sampling protocols. Water level, specific conductance, pH, water temperature, dissolved oxygen, and dissolved ammonium were monitored by the prototype as the wells were purged according to U.S. Geological Survey (USGS) ground water sampling protocols. Remote access to the data record, via phone modem communications, indicated the arrival of each plume over a few days and the subsequent geochemical reactions over the following weeks. Real-time availability of the monitoring record provided the information needed to initiate manual sampling efforts in response to changes in measured ground water quality, which proved the method and characterized the screened portion of the plume in detail through time. The methods and the case study described are presented to document the process for future use.
Automated analysis of brachial ultrasound time series
NASA Astrophysics Data System (ADS)
Liang, Weidong; Browning, Roger L.; Lauer, Ronald M.; Sonka, Milan
1998-07-01
Atherosclerosis begins in childhood with the accumulation of lipid in the intima of arteries to form fatty streaks, and advances through adult life, when occlusive vascular disease may result in coronary heart disease, stroke and peripheral vascular disease. Non-invasive B-mode ultrasound has been found useful in studying risk factors in the symptom-free population. Large amounts of data are acquired from continuous imaging of the vessels in a large study population. A high-quality brachial vessel diameter measurement method is necessary so that accurate diameters can be measured consistently in all frames of a sequence and across different observers. Though a human expert has the advantage over automated computer methods in recognizing noise during diameter measurement, manual measurement suffers from inter- and intra-observer variability. It is also time-consuming. An automated measurement method is presented in this paper which utilizes quality assurance approaches to adapt to specific image features and to recognize and minimize noise effects. Experimental results showed the method's potential for clinical usage in epidemiological studies.
Using machine learning for sequence-level automated MRI protocol selection in neuroradiology.
Brown, Andrew D; Marotta, Thomas R
2018-05-01
Incorrect imaging protocol selection can lead to important clinical findings being missed, contributing to both wasted health care resources and patient harm. We present a machine learning method for analyzing the unstructured text of clinical indications and patient demographics from magnetic resonance imaging (MRI) orders to automatically protocol MRI procedures at the sequence level. We compared 3 machine learning models - support vector machine, gradient boosting machine, and random forest - to a baseline model that predicted the most common protocol for all observations in our test set. The gradient boosting machine model significantly outperformed the baseline and demonstrated the best performance of the 3 models in terms of accuracy (95%), precision (86%), recall (80%), and Hamming loss (0.0487). This demonstrates the feasibility of automating sequence selection by applying machine learning to MRI orders. Automated sequence selection has important safety, quality, and financial implications and may facilitate improvements in the quality and safety of medical imaging service delivery.
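The abstract does not describe the feature engineering, and the paper's task is multi-label (sequence-level); purely as an illustration of the general approach, the sketch below trains a simplified single-label TF-IDF plus gradient-boosting pipeline in scikit-learn. All order texts and protocol names are hypothetical.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical MRI orders: free-text indication -> protocol label.
orders = [
    "left sided hearing loss, r/o acoustic neuroma",
    "asymmetric hearing loss, evaluate IACs",
    "new onset seizure disorder",
    "recurrent seizures, r/o epileptogenic focus",
    "follow-up glioma, post resection",
    "known GBM, assess treatment response",
    "chronic headache, r/o intracranial mass",
    "headache with nausea, screen for pathology",
]
protocols = ["iac", "iac", "seizure", "seizure",
             "tumor_followup", "tumor_followup", "routine", "routine"]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word and bigram features
    ("gbm", GradientBoostingClassifier(n_estimators=100)),
])
model.fit(orders, protocols)
print(model.predict(["first seizure, rule out structural cause"]))
```

A realistic system would train one such classifier per candidate sequence (or a multi-label wrapper), which is where the Hamming loss reported in the study becomes the natural metric.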
A Fully Automated Approach to Spike Sorting.
Chung, Jason E; Magland, Jeremy F; Barnett, Alex H; Tolosa, Vanessa M; Tooker, Angela C; Lee, Kye Y; Shah, Kedar G; Felix, Sarah H; Frank, Loren M; Greengard, Leslie F
2017-09-13
Understanding the detailed dynamics of neuronal networks will require the simultaneous measurement of spike trains from hundreds of neurons (or more). Currently, approaches to extracting spike times and labels from raw data are time consuming, lack standardization, and involve manual intervention, making it difficult to maintain data provenance and assess the quality of scientific results. Here, we describe an automated clustering approach and associated software package that addresses these problems and provides novel cluster quality metrics. We show that our approach has accuracy comparable to or exceeding that achieved using manual or semi-manual techniques with desktop central processing unit (CPU) runtimes faster than acquisition time for up to hundreds of electrodes. Moreover, a single choice of parameters in the algorithm is effective for a variety of electrode geometries and across multiple brain regions. This algorithm has the potential to enable reproducible and automated spike sorting of larger scale recordings than is currently possible. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
1982-01-01
An automated water quality monitoring system was developed by Langley Research Center to meet a need of the Environmental Protection Agency (EPA). Designed for unattended operation in water depths up to 100 feet, the system consists of a subsurface buoy anchored in the water, a surface control unit (SCU) and a hydrophone link for acoustic communication between buoy and SCU. Primary functional unit is the subsurface buoy. It incorporates 16 cells for water sampling, plus sensors for eight water quality measurements. Buoy contains all the electronic equipment needed for collecting and storing sensor data, including a microcomputer and a memory unit. Power for the electronics is supplied by a rechargeable nickel cadmium battery that is designed to operate for about two weeks. Through hydrophone link the subsurface buoy reports its data to the SCU, which relays it to land stations. Link allows two-way communications. If system encounters a problem, it automatically shuts down and sends alert signal. Sequence of commands sent via hydrophone link causes buoy to release from anchor and float to the surface for recovery.
Mueller, David S.
2016-05-12
The software program QRev computes the discharge from moving-boat acoustic Doppler current profiler (ADCP) measurements using data collected with any of the Teledyne RD Instruments or SonTek bottom-tracking acoustic Doppler current profilers. The computation of discharge is independent of the manufacturer of the ADCP because QRev applies consistent algorithms independent of the data source. In addition, QRev automates filtering and quality checking of the collected data and provides feedback to the user on potential quality issues with the measurement. Various statistics and characteristics of the measurement, along with a simple uncertainty assessment, are provided to the user to assist in properly rating the measurement. QRev saves an extensible markup language (XML) file that can be imported into databases or electronic field-notes software. The user interacts with QRev through a tablet-friendly graphical user interface. This report is the manual for version 2.8 of QRev.
Narrative writing: Effective ways and best practices
Ledade, Samir D.; Jain, Shishir N.; Darji, Ankit A.; Gupta, Vinodkumar H.
2017-01-01
A narrative is a brief summary of specific events experienced by patients, during the course of a clinical trial. Narrative writing involves multiple activities such as generation of patient profiles, review of data sources, and identification of events for which narratives are required. A sponsor outsources narrative writing activities to leverage the expertise of service providers which in turn requires effective management of resources, cost, time, quality, and overall project management. Narratives are included as an appendix to the clinical study report and are submitted to the regulatory authorities as a part of dossier. Narratives aid in the evaluation of the safety profile of the investigational drug under study. To deliver high-quality narratives within the specified timeframe to the sponsor can be achieved by standardizing processes, increasing efficiency, optimizing working capacity, implementing automation, and reducing cost. This paper focuses on effective ways to design narrative writing process and suggested best practices, which enable timely delivery of high-quality narratives to fulfill the regulatory requirement. PMID:28447014
Kovner, Christine; Harrington, Charlene; Greene, William; Mezey, Mathy
2009-01-01
Objective To examine the relationships between nursing staffing levels and nursing home deficiencies. Methods This panel data analysis employed random-effect models that adjusted for unobserved, nursing home–specific heterogeneity over time. Data were obtained from California's long-term care annual cost report data and the Automated Certification and Licensing Administrative Information and Management Systems data from 1999 to 2003, linked with other secondary data sources. Results Both total nursing staffing and registered nurse (RN) staffing levels were negatively related to total deficiencies, quality of care deficiencies, and serious deficiencies that may cause harm or jeopardy to nursing home residents. Nursing homes that met the state staffing standard received fewer total deficiencies and quality of care deficiencies than nursing homes that failed to meet the standard. Meeting the state staffing standard was not related to receiving serious deficiencies. Conclusions Total nursing staffing and RN staffing levels were predictors of nursing home quality. Further research is needed on the effectiveness of state minimum staffing standards. PMID:19181692
The application of automated operations at the Institutional Processing Center
NASA Technical Reports Server (NTRS)
Barr, Thomas H.
1993-01-01
The JPL Institutional and Mission Computing Division, Communications, Computing and Network Services Section, with its mission contractor, OAO Corporation, has for some time been applying automation to the operation of JPL's Information Processing Center (IPC). Automation does not come in one easy-to-use package. Automation for a data processing center is made up of many different software and hardware products supported by trained personnel. The IPC automation effort formally began with console automation, and has since spiraled out to include production scheduling, data entry, report distribution, online reporting, failure reporting and resolution, documentation, library storage, and operator and user education, while requiring the interaction of multi-vendor and locally developed software. To begin the process, automation goals are determined. Then a team including operations personnel is formed to research and evaluate available options. By acquiring knowledge of current products and those in development, taking an active role in industry organizations, and learning of other data centers' experiences, a forecast can be developed as to what direction technology is moving. With IPC management's approval, an implementation plan is developed and resources are identified to test or implement new systems. As an example, IPC's new automated data entry system was researched by Data Entry, Production Control, and Advance Planning personnel. A proposal was then submitted to management for review. A determination to implement the new system was made, and the elements/personnel involved with the initial planning performed the implementation. The final steps of the implementation were educating data entry personnel in the areas affected and making the procedural changes necessary to the successful operation of the new system.
Advancing automation and robotics technology for the Space Station Freedom and for the U.S. Economy
NASA Technical Reports Server (NTRS)
1991-01-01
In April 1985, as required by Public Law 98-371, the NASA Advanced Technology Advisory Committee (ATAC) reported to Congress the results of its studies on advanced automation and robotics technology for use on Space Station Freedom. This material was documented in the initial report (NASA Technical Memorandum 87566). A further requirement of the law was that ATAC follow NASA's progress in this area and report to Congress semiannually. This report is the thirteenth in a series of progress updates and covers the period between 14 Feb. - 15 Aug. 1991. The progress made by Levels 1, 2, and 3 of the Space Station Freedom in developing and applying advanced automation and robotics technology is described. Emphasis was placed upon the Space Station Freedom Program responses to specific recommendations made in ATAC Progress Report 12, and issues of A&R implementation into Ground Mission Operations and A&R enhancement of science productivity. Assessments are presented for these and other areas as they apply to the advancement of automation and robotics technology for Space Station Freedom.
Advancing automation and robotics technology for the space station and for the US economy
NASA Technical Reports Server (NTRS)
Nunamaker, Robert
1988-01-01
In April 1985, as required by Public Law 98-371, the NASA Advanced Technology Advisory Committee (ATAC) reported to Congress the results of its studies on advanced automation and robotics technology for use on the Space Station. This material was documented in the initial report (NASA Technical Memo 87566). A further requirement of the law was that ATAC follow NASA's progress in this area and report to Congress semiannually. This report is the sixth in a series of progress updates and covers the period between October 1, 1987 and March 1, 1988. NASA has accepted the basic recommendations of ATAC for its Space Station efforts. ATAC and NASA agree that the thrust of Congress is to build an advanced automation and robotics technology base that will support an evolutionary Space Station program and serve as a highly visible stimulator affecting the U.S. long-term economy. The progress report identifies the work of NASA and the Space Station study contractors, research in progress, and issues connected with the advancement of automation and robotics technology on the Space Station.
Advancing automation and robotics technology for the space station and for the US economy
NASA Technical Reports Server (NTRS)
1986-01-01
In April 1985, as required by Public Law 98-371, the NASA Advanced Technology Advisory Committee (ATAC) reported to Congress the results of its studies on advanced automation and robotics technology for use on the Space Station. This material was documented in the initial report (NASA Technical Memorandum 87566). A further requirement of the law was that ATAC follow NASA's progress in this area and report to Congress semiannually. This report is the second in a series of progress updates and covers the period between October 4, 1985, and March 31, 1986. NASA has accepted the basic recommendations of ATAC for its Space Station efforts. ATAC and NASA agree that the thrust of Congress is to build an advanced automation and robotics technology base that will support an evolutionary Space Station Program and serve as a highly visible stimulator affecting the U.S. long-term economy. The progress report identifies the work of NASA and the Space Station study contractors, research in progress, and issues connected with the advancement of automation and robotics technology on the Space Station.
MO-PIS-Exhibit Hall-01: Tools for TG-142 Linac Imaging QA I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clements, M; Wiesmeyer, M
2014-06-15
Partners in Solutions is an exciting new program in which AAPM partners with our vendors to present practical "hands-on" information about the equipment and software systems that we use in our clinics. The therapy topic this year is solutions for TG-142 recommendations for linear accelerator imaging QA. Note that the sessions are being held in a special purpose room built on the Exhibit Hall floor, to encourage further interaction with the vendors.

Automated Imaging QA for TG-142 with RIT (Presentation Time: 2:45 - 3:15 PM). This presentation will discuss software tools for automated imaging QA and phantom analysis for TG-142. All modalities used in radiation oncology will be discussed, including CBCT, planar kV imaging, planar MV imaging, and imaging and treatment coordinate coincidence. Vendor-supplied phantoms as well as a variety of third-party phantoms will be shown, along with appropriate analyses, proper phantom setup procedures and scanning settings, and a discussion of image quality metrics. Tools for process automation will be discussed, including RIT Cognition (machine learning for phantom image identification), RIT Cerberus (automated file system monitoring and searching), and RunQueueC (batch processing of multiple images). In addition to phantom analysis, tools for statistical tracking, trending, and reporting will be discussed. This discussion will include an introduction to statistical process control, a valuable tool in analyzing data and determining appropriate tolerances.

An Introduction to TG-142 Imaging QA Using Standard Imaging Products (Presentation Time: 3:15 - 3:45 PM). Medical physicists want to understand the logic behind TG-142 imaging QA. What is often missing is a firm understanding of the connections between the EPID and OBI phantom imaging, the software "algorithms" that calculate the QA metrics, the establishment of baselines, and the analysis and interpretation of the results. The goal of our brief presentation will be to establish and solidify these connections. Our talk will be motivated by the Standard Imaging, Inc. phantom and software solutions. We will present and explain each of the image quality metrics in TG-142 in terms of the theory, mathematics, and algorithms used to implement them in the Standard Imaging PIPSpro software. In the process, we will identify the regions of phantom images that are analyzed by each algorithm. We will then discuss the process of the creation of baselines and typical ranges of acceptable values for each imaging quality metric.
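As a small illustration of the statistical process control idea mentioned in the first presentation, the sketch below derives Shewhart individuals-chart limits for a tracked imaging QA metric. The data and the metric are hypothetical and not taken from either vendor's software.

```python
import numpy as np

def individuals_control_limits(x):
    """Shewhart individuals (I-MR) chart limits from a QA metric series.

    Sigma is estimated from the average moving range (d2 = 1.128 for
    subgroups of size 2); points outside mean +/- 3 sigma warrant review.
    """
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))        # moving ranges between consecutive points
    sigma = mr.mean() / 1.128
    center = x.mean()
    return center - 3 * sigma, center, center + 3 * sigma

# Hypothetical daily kV-imaging spatial-resolution scores (lp/mm).
scores = [1.42, 1.40, 1.44, 1.41, 1.43, 1.39, 1.45, 1.18]
lcl, mean, ucl = individuals_control_limits(scores)
flagged = [s for s in scores if not lcl <= s <= ucl]
print(f"LCL={lcl:.3f}, mean={mean:.3f}, UCL={ucl:.3f}, flagged={flagged}")
```

Deriving tolerances from the process itself, rather than from fixed vendor defaults, is the core benefit TG-142 programs get from this kind of trending.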
Automated branching pattern report generation for laparoscopic surgery assistance
NASA Astrophysics Data System (ADS)
Oda, Masahiro; Matsuzaki, Tetsuro; Hayashi, Yuichiro; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku
2015-05-01
This paper presents a method for generating branching pattern reports of abdominal blood vessels for laparoscopic gastrectomy. In gastrectomy, it is very important to understand the branching structure of the abdominal arteries and veins, which feed and drain specific abdominal organs including the stomach, the liver and the pancreas. In the real clinical setting, a surgeon creates a diagnostic report of the patient's anatomy. This report summarizes the branching patterns of the blood vessels related to the stomach, and the surgeon decides the actual operative procedure. This paper shows an automated method to generate a branching pattern report for abdominal blood vessels based on automated anatomical labeling. The report contains a 3D rendering showing important blood vessels and descriptions of the branching patterns of each vessel. We have applied this method to fifty cases of 3D abdominal CT scans and confirmed that the proposed method can automatically generate branching pattern reports of abdominal arteries.
Jiang, Hui; Hanna, Eriny; Gatto, Cheryl L.; Page, Terry L.; Bhuva, Bharat; Broadie, Kendal
2016-01-01
Background Aversive olfactory classical conditioning has been the standard method to assess Drosophila learning and memory behavior for decades, yet training and testing are conducted manually under exceedingly labor-intensive conditions. To overcome this severe limitation, a fully automated, inexpensive system has been developed, which allows accurate and efficient Pavlovian associative learning/memory analyses for high-throughput pharmacological and genetic studies. New Method The automated system employs a linear actuator coupled to an odorant T-maze with airflow-mediated transfer of animals between training and testing stages. Odorant, airflow and electrical shock delivery are automatically administered and monitored during training trials. Control software allows operator-input variables to define parameters of Drosophila learning, short-term memory and long-term memory assays. Results The approach allows accurate learning/memory determinations with operational fail-safes. Automated learning indices (immediately post-training) and memory indices (after 24 hours) are comparable to traditional manual experiments, while minimizing experimenter involvement. Comparison with Existing Methods The automated system provides vast improvements over labor-intensive manual approaches with no experimenter involvement required during either training or testing phases. It provides quality control tracking of airflow rates, odorant delivery and electrical shock treatments, and an expanded platform for high-throughput studies of combinational drug tests and genetic screens. The design uses inexpensive hardware and software for a total cost of ~$500US, making it affordable to a wide range of investigators. Conclusions This study demonstrates the design, construction and testing of a fully automated Drosophila olfactory classical association apparatus to provide low-labor, high-fidelity, quality-monitored, high-throughput and inexpensive learning and memory behavioral assays. PMID:26703418
Ramakumar, Adarsh; Subramanian, Uma; Prasanna, Pataje G S
2015-11-01
High-throughput individual diagnostic dose assessment is essential for medical management of radiation-exposed subjects after a mass casualty. Cytogenetic assays such as the Dicentric Chromosome Assay (DCA) are recognized as the gold standard by international regulatory authorities. DCA is a multi-step and multi-day bioassay. DCA, as described in the IAEA manual, can be used to assess dose up to 4-6 weeks post-exposure quite accurately, but throughput is still a major issue and automation is essential. Throughput is limited both in terms of sample preparation and analysis of chromosome aberrations. Thus, there is a need to design and develop novel solutions that utilize extensive laboratory automation for sample preparation and bioinformatics approaches for chromosome-aberration analysis to overcome throughput issues. We have transitioned the bench-based cytogenetic DCA to a coherent process performing high-throughput automated biodosimetry for individual dose assessment, ensuring quality control (QC) and quality assurance (QA) aspects in accordance with international harmonized protocols. A Laboratory Information Management System (LIMS) was designed, implemented and adapted to manage increased sample processing capacity, develop and maintain standard operating procedures (SOPs) for robotic instruments, avoid data transcription errors during processing, and automate analysis of chromosome aberrations using an image analysis platform. Our efforts described in this paper intend to bridge the current technological gaps and enhance the potential application of DCA for dose-based stratification of subjects following a mass casualty. This paper describes one such potential integrated automated laboratory system and the functional evolution of the classical DCA towards increasing critically needed throughput. Published by Elsevier B.V.
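The dose-estimation step is not spelled out in the abstract, but dicentric yields are conventionally fit to a linear-quadratic calibration curve, Y = c + αD + βD², which is then inverted for the absorbed dose. A sketch with illustrative, not laboratory-calibrated, coefficients:

```python
import math

def dose_from_dicentrics(dics, cells, c=0.001, alpha=0.03, beta=0.06):
    """Invert the linear-quadratic dicentric yield curve Y = c + a*D + b*D^2.

    Coefficients here are illustrative; a laboratory would substitute its
    own calibration curve (per the IAEA cytogenetic dosimetry manual).
    """
    Y = dics / cells  # observed dicentrics per scored cell
    if Y <= c:
        return 0.0
    # Positive root of beta*D^2 + alpha*D + (c - Y) = 0
    return (-alpha + math.sqrt(alpha**2 + 4 * beta * (Y - c))) / (2 * beta)

print(f"estimated dose: {dose_from_dicentrics(46, 500):.2f} Gy")
```

In an automated pipeline this inversion is the cheap step; the throughput bottlenecks the paper addresses are the upstream sample preparation and aberration scoring.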
Devine, Emily Beth; Capurro, Daniel; van Eaton, Erik; Alfonso-Cristancho, Rafael; Devlin, Allison; Yanez, N. David; Yetisgen-Yildiz, Meliha; Flum, David R.; Tarczy-Hornoch, Peter
2013-01-01
Background: The field of clinical research informatics includes creation of clinical data repositories (CDRs) used to conduct quality improvement (QI) activities and comparative effectiveness research (CER). Ideally, CDR data are accurately and directly abstracted from disparate electronic health records (EHRs), across diverse health-systems. Objective: Investigators from Washington State's Surgical Care Outcomes and Assessment Program (SCOAP) Comparative Effectiveness Research Translation Network (CERTAIN) are creating such a CDR. This manuscript describes the automation and validation methods used to create this digital infrastructure. Methods: SCOAP is a QI benchmarking initiative. Data are manually abstracted from EHRs and entered into a data management system. CERTAIN investigators are now deploying Caradigm's Amalga™ tool to facilitate automated abstraction of data from multiple, disparate EHRs. Concordance is calculated to compare automatically abstracted data with manually abstracted data. Performance measures are calculated between Amalga and each parent EHR. Validation takes place in repeated loops, with improvements made over time. When automated abstraction reaches the current benchmark for abstraction accuracy - 95% - it will 'go live' at each site. Progress to Date: A technical analysis was completed at 14 sites. Five sites are contributing; the remaining sites prioritized meeting Meaningful Use criteria. Participating sites are contributing 15-18 unique data feeds, totaling 13 surgical registry use cases. Common feeds are registration, laboratory, transcription/dictation, radiology, and medications. Approximately 50% of 1,320 designated data elements are being automatically abstracted: 25% from structured data and 25% from text mining. Conclusion: In semi-automating data abstraction and conducting a rigorous validation, CERTAIN investigators will semi-automate data collection to conduct QI and CER, while advancing the Learning Healthcare System. PMID:25848565
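The concordance check described here amounts to element-by-element agreement between automated and manual abstraction. A toy sketch (field names and values hypothetical):

```python
def concordance(manual, automated):
    """Fraction of data elements where automated abstraction matches
    the manually abstracted gold standard."""
    assert manual.keys() == automated.keys()
    agree = sum(manual[k] == automated[k] for k in manual)
    return agree / len(manual)

manual_record = {"age": 64, "asa_class": 3, "ssi": False, "los_days": 5}
amalga_record = {"age": 64, "asa_class": 3, "ssi": False, "los_days": 6}
rate = concordance(manual_record, amalga_record)
print(f"concordance = {rate:.0%}  (go-live threshold in the study: 95%)")
```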
Fan, Jiawei; Wang, Jiazhou; Zhang, Zhen; Hu, Weigang
2017-06-01
To develop a new automated treatment planning solution for breast and rectal cancer radiotherapy. The automated treatment planning solution developed in this study includes selection of the iteratively optimized training dataset, dose volume histogram (DVH) prediction for the organs at risk (OARs), and automatic generation of clinically acceptable treatment plans. The iteratively optimized training dataset is selected by iterative optimization from 40 treatment plans for left-breast and rectal cancer patients who received radiation therapy. A two-dimensional kernel density estimation algorithm (referred to as the two-parameter KDE), which incorporates two predictive features, was implemented to produce the predicted DVHs. Finally, 10 additional new left-breast treatment plans were re-planned using the Pinnacle 3 Auto-Planning (AP) module (version 9.10, Philips Medical Systems) with the objective functions derived from the predicted DVH curves. Automatically generated re-optimized treatment plans were compared with the original manually optimized plans. By combining the iteratively optimized training dataset methodology with the two-parameter KDE prediction algorithm, our proposed automated planning strategy improves the accuracy of the DVH prediction. The automatically generated treatment plans using the dose derived from the predicted DVHs can achieve better dose sparing for some OARs without compromising other metrics of plan quality. The proposed new automated treatment planning solution can be used to efficiently evaluate and improve the quality and consistency of the treatment plans for intensity-modulated breast and rectal cancer radiation therapy. © 2017 American Association of Physicists in Medicine.
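The paper's two-parameter KDE is not fully specified in the abstract. As a rough illustration of the idea, the sketch below builds a joint two-feature kernel density estimate with scipy and reads off a conditional dose prediction; the distance-to-target feature and the synthetic data are stand-ins for the paper's actual predictive features.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical training pairs from prior plans:
# (normalized OAR-to-target distance, fractional dose at that distance).
dist = rng.uniform(0.0, 1.0, 2000)
dose = np.clip(np.exp(-3.0 * dist) + 0.05 * rng.normal(size=2000), 0, None)

kde = gaussian_kde(np.vstack([dist, dose]))  # joint density p(dist, dose)

def expected_dose(d_query, grid=np.linspace(0, 1.2, 240)):
    """Conditional mean dose at a given distance, via the joint KDE."""
    pts = np.vstack([np.full_like(grid, d_query), grid])
    p = kde(pts)
    return np.sum(grid * p) / np.sum(p)

print(f"predicted fractional dose at d=0.3: {expected_dose(0.3):.2f}")
```

Sweeping the query feature over the OAR volume yields a predicted dose distribution, from which a DVH-like curve and per-OAR objective functions can be derived.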
Managing laboratory automation in a changing pharmaceutical industry
Rutherford, Michael L.
1995-01-01
The health care reform movement in the USA and increased requirements by regulatory agencies continue to have a major impact on the pharmaceutical industry and the laboratory. Laboratory management is expected to improve efficiency by providing more analytical results at a lower cost, increasing customer service, and reducing cycle time, while ensuring accurate results and more effective use of their staff. To achieve these expectations, many laboratories are using robotics and automated work stations. Establishing automated systems presents many challenges for laboratory management, including project and hardware selection, budget justification, implementation, validation, training, and support. To address these management challenges, the rationale for project selection and implementation, the obstacles encountered, project outcomes, and learning points for several automated systems recently implemented in the Quality Control Laboratories at Eli Lilly are presented. PMID:18925014
Automated Training Evaluation (ATE). Final Report.
ERIC Educational Resources Information Center
Charles, John P.; Johnson, Robert M.
The automation of weapons system training presents the potential for significant savings in training costs in terms of manpower, time, and money. The demonstration of the technical feasibility of automated training through the application of advanced digital computer techniques and advanced training techniques is essential before the application…
A new web-based system to improve the monitoring of snow avalanche hazard in France
NASA Astrophysics Data System (ADS)
Bourova, Ekaterina; Maldonado, Eric; Leroy, Jean-Baptiste; Alouani, Rachid; Eckert, Nicolas; Bonnefoy-Demongeot, Mylene; Deschatres, Michael
2016-05-01
Snow avalanche data in the French Alps and Pyrenees have been recorded for more than 100 years in several databases. The increasing amount of observed data required a more integrative and automated service. Here we report the comprehensive web-based Snow Avalanche Information System newly developed to this end for three important data sets: an avalanche chronicle (Enquête Permanente sur les Avalanches, EPA), an avalanche map (Carte de Localisation des Phénomènes d'Avalanche, CLPA) and a compilation of hazard and vulnerability data recorded on selected paths endangering human settlements (Sites Habités Sensibles aux Avalanches, SSA). These data sets are now integrated into a common database, enabling full interoperability between all different types of snow avalanche records: digitized geographic data, avalanche descriptive parameters, eyewitness reports, photographs, hazard and risk levels, etc. The new information system is implemented through modular components using Java-based web technologies with Spring and Hibernate frameworks. It automates the manual data entry and improves the process of information collection and sharing, enhancing user experience and data quality, and offering new outlooks to explore and exploit the huge amount of snow avalanche data available for fundamental research and more applied risk assessment.
Saha, Sajib Kumar; Fernando, Basura; Cuadros, Jorge; Xiao, Di; Kanagasingam, Yogesan
2018-04-27
Fundus images obtained in a telemedicine program are acquired at different sites and captured by people with varying levels of experience. This results in a relatively high percentage of images being later marked as unreadable by graders. Unreadable images require a recapture, which is time- and cost-intensive. An automated method that determines image quality during acquisition is an effective alternative. Here we describe such an automated method for the assessment of image quality in the context of diabetic retinopathy (DR). The method applies machine learning techniques to assess an image and assign it to an 'accept' or 'reject' category; a 'reject' image requires a recapture. A deep convolutional neural network is trained to grade the images automatically. A large representative set of 7000 colour fundus images, obtained from EyePACS and made available by the California Healthcare Foundation, was used for the experiment. Three retinal image analysis experts were employed to categorise these images into 'accept' and 'reject' classes based on the precise definition of image quality in the context of DR. The network was trained using 3428 images. The method categorises 'accept' and 'reject' images with an accuracy of 100%, which is about 2% higher than the traditional machine learning method. In a clinical trial, the proposed method showed 97% agreement with a human grader. The method can be easily incorporated into the fundus image capturing system at the acquisition centre and can guide the photographer on whether a recapture is necessary.
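The network architecture is not described in the abstract; purely as an illustration of the approach, a small binary accept/reject CNN in Keras (not the authors' model) might look like this:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_quality_classifier(input_shape=(256, 256, 3)):
    """Binary accept/reject classifier for fundus image quality.

    A deliberately small stand-in architecture; layer sizes, input
    resolution, and training settings are illustrative assumptions.
    """
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(accept)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_quality_classifier()
model.summary()
# model.fit(train_images, train_labels, validation_split=0.1, epochs=20)
```

Running such a model at the point of capture lets the system prompt for a recapture while the patient is still present, which is the workflow advantage the paper emphasizes.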
Reiner, Bruce I
2018-02-01
One method for addressing existing peer review limitations is the assignment of peer review cases on a completely blinded basis, in which the peer reviewer would create an independent report which can then be cross-referenced with the primary reader report of record. By leveraging existing computerized data mining techniques, one could in theory automate and objectify the process of report data extraction, classification, and analysis, while reducing time and resource requirements intrinsic to manual peer review report analysis. Once inter-report analysis has been performed, resulting inter-report discrepancies can be presented to the radiologist of record for review, along with the option to directly communicate with the peer reviewer through an electronic data reconciliation tool aimed at collaboratively resolving inter-report discrepancies and improving report accuracy. All associated report and reconciled data could in turn be recorded in a referenceable peer review database, which provides opportunity for context and user-specific education and decision support.
Improving NIR model for the prediction of cotton fiber strength
USDA-ARS?s Scientific Manuscript database
Cotton fiber strength is an important quality characteristic that is directly related to the manufacturing of quality consumer goods. Currently, two types of instruments have been implemented to assess cotton fiber strength, namely, the automation-oriented high volume instrument (HVI) and the labora...
Correlation of HVI vs. Stelometer fiber strength and its application
USDA-ARS?s Scientific Manuscript database
Cotton fiber strength is an important quality characteristic that is directly related to the manufacturing of quality consumer goods. Currently, two types of instruments have been implemented to assess cotton fiber strength, namely, the automation-oriented HVI and the laboratory-based Stelometer. Ea...
ECONOMICS OF SAMPLE COMPOSITING AS A SCREENING TOOL IN GROUND WATER QUALITY MONITORING
Recent advances in high-throughput/automated compositing with robotics and field-screening methods offer seldom-tapped opportunities for achieving cost reduction in ground water quality monitoring programs. An economic framework is presented in this paper for the evaluation of sample ...
Peterson, Kevin J.; Pathak, Jyotishman
2014-01-01
Automated execution of electronic Clinical Quality Measures (eCQMs) from electronic health records (EHRs) on large patient populations remains a significant challenge, and the testability, interoperability, and scalability of measure execution are critical. The High Throughput Phenotyping (HTP; http://phenotypeportal.org) project aligns with these goals by using the standards-based HL7 Health Quality Measures Format (HQMF) and Quality Data Model (QDM) for measure specification, as well as Common Terminology Services 2 (CTS2) for semantic interpretation. The HQMF/QDM representation is automatically transformed into a JBoss® Drools workflow, enabling horizontal scalability via clustering and MapReduce algorithms. Using Project Cypress, automated verification metrics can then be produced. Our results show linear scalability for nine executed 2014 Center for Medicare and Medicaid Services (CMS) eCQMs for eligible professionals and hospitals for >1,000,000 patients, and verified execution correctness of 96.4% based on Project Cypress test data of 58 eCQMs. PMID:25954459
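The execution pattern behind the horizontal scalability claim can be sketched as a map/reduce over patients. The eligibility and quality criteria below are hypothetical stand-ins for a real eCQM; the HTP project itself compiles HQMF/QDM into JBoss Drools rules rather than Python.

```python
# Toy map/reduce sketch of population-scale measure execution; the
# criteria are hypothetical, not an actual CMS eCQM specification.
from functools import reduce

def in_denominator(patient: dict) -> bool:
    # Hypothetical eligibility criterion (e.g., diabetic, age 18-75).
    return bool(patient.get("diabetic")) and 18 <= patient.get("age", 0) <= 75

def in_numerator(patient: dict) -> bool:
    # Hypothetical quality criterion (e.g., HbA1c recorded in period).
    return patient.get("hba1c_measured", False)

def map_patient(patient: dict):
    d = in_denominator(patient)
    return (int(d and in_numerator(patient)), int(d))

def reduce_counts(a, b):
    return (a[0] + b[0], a[1] + b[1])

patients = [{"diabetic": True, "age": 50, "hba1c_measured": True},
            {"diabetic": True, "age": 60, "hba1c_measured": False},
            {"diabetic": False, "age": 40}]
num, den = reduce(reduce_counts, map(map_patient, patients))
print(f"measure rate: {num}/{den}")  # each map task can run on a separate node
```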
Refining Automatically Extracted Knowledge Bases Using Crowdsourcing
Xian, Xuefeng; Cui, Zhiming
2017-01-01
Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how to use limited human resources to maximize the quality improvement of a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and perform inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods at a reasonable crowdsourcing cost. PMID:28588611
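A minimal sketch of rank-based question selection under a crowdsourcing budget; the benefit score below, combining a fact's uncertainty with the number of semantic constraints touching it, is an assumed stand-in for the paper's selection criterion.

```python
# Illustrative rank-based selection of candidate facts for crowdsourcing;
# the scoring function is an assumption, not the paper's exact criterion.
def expected_benefit(fact):
    """Prefer facts that are uncertain and that many constraints depend on."""
    uncertainty = 1.0 - abs(2.0 * fact["confidence"] - 1.0)  # peak at 0.5
    return uncertainty * fact["constraint_degree"]

candidate_facts = [
    {"triple": ("Paris", "capitalOf", "France"), "confidence": 0.97, "constraint_degree": 1},
    {"triple": ("Sydney", "capitalOf", "Australia"), "confidence": 0.55, "constraint_degree": 4},
    {"triple": ("Oslo", "capitalOf", "Norway"), "confidence": 0.90, "constraint_degree": 2},
]

budget = 1  # number of crowd questions we can afford
to_ask = sorted(candidate_facts, key=expected_benefit, reverse=True)[:budget]
print([f["triple"] for f in to_ask])  # the fact most worth verifying
```

Verifying the highest-benefit fact also lets the semantic constraints propagate the crowd's answer to related candidate facts, which is why pruning unnecessary questions is possible.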
Tomperi, Jani; Leiviskä, Kauko
2018-06-01
Traditionally, modelling in an activated sludge process has been based solely on process measurements, but as interest in optically monitoring wastewater samples to characterize floc morphology has grown, the results of image analyses have in recent years been utilized more frequently to predict wastewater characteristics. This study shows that neither the traditional process measurements nor the automated optical monitoring variables alone can produce the best predictive models for treated wastewater quality in a full-scale wastewater treatment plant; the optimal models, which show the level of and changes in treated wastewater quality, are achieved only by utilizing these variables together. With this early warning, process operation can be optimized to avoid environmental damage and economic losses. The study also shows that specific optical monitoring variables are important in modelling a certain quality parameter, regardless of the other input variables available.
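The modelling comparison can be sketched as follows, using synthetic data and a plain linear model; the feature names, data-generating process, and model class are assumptions chosen only to show why combining the two variable groups outperforms either alone.

```python
# Sketch comparing process-only, optical-only, and combined feature sets;
# all data are synthetic and the linear model is an illustrative choice.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
process = rng.normal(size=(n, 3))   # e.g., flow, aeration, temperature
optical = rng.normal(size=(n, 2))   # e.g., floc size, filament count
# Treated-water quality depends on both groups plus noise.
y = process @ [0.5, -0.2, 0.1] + optical @ [0.8, 0.3] + rng.normal(0, 0.1, n)

for name, X in [("process only", process),
                ("optical only", optical),
                ("combined", np.hstack([process, optical]))]:
    r2 = LinearRegression().fit(X, y).score(X, y)
    print(f"{name}: R^2 = {r2:.2f}")  # combined set fits best
```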
NASA Technical Reports Server (NTRS)
Velden, Christopher
1995-01-01
The research objectives in this proposal were part of a continuing program at UW-CIMSS to develop and refine an automated geostationary satellite winds processing system which can be utilized in both research and operational environments. The majority of the originally proposed tasks were successfully accomplished, and in some cases the progress exceeded the original goals. Much of the research and development supported by this grant resulted in upgrades and modifications to the existing automated satellite wind tracking algorithm. These modifications were put to the test through case study demonstrations and numerical model impact studies. After being successfully demonstrated, the modifications and upgrades were implemented into the NESDIS algorithms in Washington DC, and have become part of the operational support. A major focus of the research supported under this grant was the continued development of water vapor tracked winds from geostationary observations. The fully automated UW-CIMSS tracking algorithm has been tuned to provide complete upper-tropospheric coverage from this data source, with data set quality close to that of operational cloud motion winds. Multispectral water vapor observations were collected and processed from several different geostationary satellites. The tracking and quality control algorithms were tuned and refined based on ground-truth comparisons and case studies involving impact on numerical model analyses and forecasts. The results have shown that the water vapor motion winds are of good quality, complement the cloud motion wind data, and can have a positive impact on NWP on many meteorological scales.
NASA Astrophysics Data System (ADS)
Rausch, Kameron; Houchin, Scott; Cardema, Jason; Moy, Gabriel; Haas, Evan; De Luccia, Frank J.
2013-12-01
Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) reflective bands are currently calibrated via weekly updates to look-up tables (LUTs) utilized by operational ground processing in the Joint Polar Satellite System Interface Data Processing Segment (IDPS). The parameters in these LUTs must be predicted 2 weeks ahead and cannot adequately track the dynamically varying response characteristics of the instrument. As a result, spurious "predict-ahead" calibration errors of the order of 0.1% or greater are routinely introduced into the calibrated reflectances and radiances produced by IDPS in sensor data records (SDRs). Spurious calibration errors of this magnitude adversely impact the quality of downstream environmental data records (EDRs) derived from VIIRS SDRs, such as Ocean Color/Chlorophyll, and cause increased striping and band-to-band radiometric calibration uncertainty in SDR products. A novel algorithm that fully automates reflective band calibration has been developed for implementation in IDPS in late 2013. Automating the reflective solar band (RSB) calibration is extremely challenging and represents a significant advancement over the manner in which RSB calibration has traditionally been performed in heritage instruments such as the Moderate Resolution Imaging Spectroradiometer. The automated algorithm applies calibration data almost immediately after their acquisition by the instrument from views of space and onboard calibration sources, thereby eliminating the predict-ahead errors associated with the current offline calibration process. This new algorithm, when implemented, will significantly improve the quality of VIIRS reflective band SDRs and consequently the quality of EDRs produced from these SDRs.
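Schematically, the difference between the two calibration modes reduces to the following toy contrast; the drift model, numbers, and function names are illustrative assumptions, not the VIIRS algorithm.

```python
# Toy contrast: a gain extrapolated weeks ahead vs. a gain measured from
# the latest onboard calibrator view. All values here are illustrative.
def predicted_gain(days_ahead: float, g0: float = 1.000,
                   drift_per_day: float = -1e-4) -> float:
    """LUT value extrapolated from a fit made up to two weeks earlier."""
    return g0 + drift_per_day * days_ahead

def measured_gain(calibrator_counts: float,
                  reference_counts: float = 1000.0) -> float:
    """Gain inferred directly from the most recent calibration view."""
    return reference_counts / calibrator_counts

# If the instrument response changes faster than the fitted drift, the
# predicted value is biased; the measured value tracks it immediately.
print(predicted_gain(14.0), measured_gain(1003.0))
```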
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulsh, M.; Wheeler, D.; Protopappas, P.
The U.S. Department of Energy (DOE) is interested in supporting manufacturing research and development (R&D) for fuel cell systems in the 10-1,000 kilowatt (kW) power range relevant to stationary and distributed combined heat and power applications, with the intent to reduce manufacturing costs and increase production throughput. To assist in future decision-making, DOE requested that the National Renewable Energy Laboratory (NREL) provide a baseline understanding of the current levels of adoption of automation in manufacturing processes and flow, as well as of continuous processes. NREL identified and visited or interviewed key manufacturers, universities, and laboratories relevant to the study using a standard questionnaire. The questionnaire covered the current level of vertical integration, the importance of quality control developments for automation, the current level of automation and source of automation design, critical balance of plant issues, potential for continuous cell manufacturing, key manufacturing steps or processes that would benefit from DOE support for manufacturing R&D, the potential for cell or stack design changes to support automation, and the relationship between production volume and decisions on automation.
Automated Test Case Generation for an Autopilot Requirement Prototype
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Rungta, Neha; Feary, Michael
2011-01-01
Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution, which allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component of the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.
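A toy sketch of symbolic-execution-based test generation using the z3 SMT solver; the autopilot rule, variable names, and two-path enumeration are hypothetical simplifications, not the ADEPT component's actual model or algorithm.

```python
# Toy symbolic-execution test generation: solve each path condition to
# obtain a concrete user input plus an oracle for the expected output.
from z3 import Solver, Int, Bool, Implies, Not, sat

altitude = Int("altitude")              # assumed input variable
capture_armed = Bool("capture_armed")   # assumed automation output

# Hypothetical design rule: altitude capture arms only below 10,000 ft.
design = Implies(altitude < 10000, capture_armed)

def test_for(path_condition):
    """Solve one path condition for a concrete test input and its oracle."""
    s = Solver()
    s.add(design, path_condition)
    if s.check() == sat:
        m = s.model()
        # An unconstrained oracle value here would flag under-specified
        # behavior, itself a useful finding for HAI vulnerability analysis.
        return {"altitude": m[altitude], "expect_armed": m[capture_armed]}
    return None  # infeasible path: no test needed

# One generated test per symbolic path through the rule.
for pc in (altitude < 10000, Not(altitude < 10000)):
    print(test_for(pc))
```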
Volpe Center Report on Advanced Automation System Benefit-Cost Study: Final Report
DOT National Transportation Integrated Search
1993-10-25
The Volpe Center study of the benefits and costs of the AAS approached the analysis by segments rather than as a whole system. The study concentrated on the automation aspects of the ATC system and applied conservative assumptions to the estimation ...