Sample records for identification automation study

  1. A Unique Automation Platform for Measuring Low Level Radioactivity in Metabolite Identification Studies

    PubMed Central

    Krauser, Joel; Walles, Markus; Wolf, Thierry; Graf, Daniel; Swart, Piet

    2012-01-01

    Generation and interpretation of biotransformation data on drugs, i.e., identification of physiologically relevant metabolites, definition of metabolic pathways and elucidation of metabolite structures, have become increasingly important to the drug development process. Profiling using a 14C or 3H radiolabel is defined as the chromatographic separation and quantification of drug-related material in a given biological sample derived from an in vitro, preclinical in vivo or clinical study. Metabolite profiling is a very time-intensive activity, particularly for preclinical in vivo or clinical studies, which have defined limitations on radiation burden and exposure levels. A clear gap exists for certain studies which do not require specialized high-volume automation technologies, yet would still clearly benefit from automation. The use of radiolabeled compounds in preclinical and clinical ADME studies, specifically for metabolite profiling and identification, is a very good example. The current lack of automation for measuring low level radioactivity in metabolite profiling requires substantial capacity, personal attention and resources from laboratory scientists. To help address these challenges and improve efficiency, we have designed, developed and implemented a novel and flexible automation platform that integrates a robotic plate-handling platform, an HPLC or UPLC system, a mass spectrometer and an automated fraction collector. PMID:22723932

  2. How automated image analysis techniques help scientists in species identification and classification?

    PubMed

    Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder

    2017-09-04

    Identification of taxa at a specific level is time consuming and reliant upon expert ecologists; hence the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focussed on images, and incorporating and analysing image data has recently become easier owing to developments in computational technology. Research efforts in species identification include processing of specimen images and extraction of identifying features, followed by classification into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared different methods in a step-by-step scheme of automated identification and classification systems for species images. The selection of methods is influenced by many variables such as the level of classification, the amount of training data and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on the pattern recognition techniques used in building such systems for biodiversity studies.

  3. Automated Drug Identification for Urban Hospitals

    NASA Technical Reports Server (NTRS)

    Shirley, Donna L.

    1971-01-01

    Many urban hospitals are becoming overloaded with drug abuse cases requiring chemical analysis for identification of drugs. In this paper, the requirements for chemical analysis of body fluids for drugs are determined and a system model for automated drug analysis is selected. The system, as modeled, would perform chemical preparation of samples, gas-liquid chromatographic separation of drugs in the chemically prepared samples, and infrared spectrophotometric analysis of the drugs, and would utilize automatic data processing and control for drug identification. Requirements of cost, maintainability, reliability, flexibility, and operability are considered.

  4. Real-time bioacoustics monitoring and automated species identification.

    PubMed

    Aide, T Mitchell; Corrada-Bravo, Carlos; Campos-Cerqueira, Marconi; Milan, Carlos; Vega, Giovany; Alvarez, Rafael

    2013-01-01

    Traditionally, animal species diversity and abundance are assessed using a variety of methods that are generally costly, limited in space and time, and, most importantly, rarely include a permanent record. Given the urgency of climate change and habitat loss, it is vital that we use new technologies to improve and expand global biodiversity monitoring to thousands of sites around the world. In this article, we describe the acoustical component of the Automated Remote Biodiversity Monitoring Network (ARBIMON), a novel combination of hardware and software for automating data acquisition, data management, and species identification based on audio recordings. The major components of the cyberinfrastructure include a solar-powered remote monitoring station that sends 1-min recordings every 10 min to a base station, which relays the recordings in real time to the project server, where the recordings are processed and uploaded to the project website (arbimon.net). Along with a module for viewing, listening to, and annotating recordings, the website includes a species identification interface to help users create machine learning algorithms to automate species identification. To demonstrate the system, we present data on the vocal activity patterns of birds, frogs, insects, and mammals from Puerto Rico and Costa Rica.

  5. FBI fingerprint identification automation study: AIDS 3 evaluation report. Volume 8: Measures of effectiveness

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.

    1980-01-01

    The development of quantitative criteria used to evaluate conceptual systems for automating the functions of the FBI Identification Division is described. Specific alternative systems for automation were compared using these criteria, defined as Measures of Effectiveness (MOE), to gauge each system's performance in attempting to achieve certain goals. The MOE, essentially measurement tools developed by combining suitable parameters, pertain to each conceivable area of system operation. The methods and approaches used, both in selecting the parameters and in applying the resulting MOE, are described.

  6. Performance evaluation of three automated identification systems in detecting carbapenem-resistant Enterobacteriaceae.

    PubMed

    He, Qingwen; Chen, Weiyuan; Huang, Liya; Lin, Qili; Zhang, Jingling; Liu, Rui; Li, Bin

    2016-06-21

    Carbapenem-resistant Enterobacteriaceae (CRE) are prevalent around the world. Rapid and accurate detection of CRE is urgently needed to guide effective treatment. Automated identification systems have been widely used in clinical microbiology laboratories for rapid and highly efficient identification of pathogenic bacteria. However, critical evaluation and comparison are needed to determine the specificity and accuracy of different systems. The aim of this study was to evaluate the performance of three commonly used automated identification systems in the detection of CRE. A total of 81 non-repetitive clinical CRE isolates were collected from August 2011 to August 2012 in a Chinese university hospital, and all the isolates were confirmed to be resistant to carbapenems by the agar dilution method. The potential presence of carbapenemase genotypes in the 81 isolates was detected by PCR and sequencing. Using these 81 clinical CRE isolates, we evaluated and compared the performance of three automated identification systems, MicroScan WalkAway 96 Plus, Phoenix 100, and Vitek 2 Compact, which are commonly used in China. The agar dilution method served as the comparator for identifying CRE, while PCR and sequencing served as the comparator for identifying carbapenemase-producing Enterobacteriaceae (CPE). PCR and sequencing analysis showed that 48 of the 81 CRE isolates carried carbapenemase genes, including 23 (28.4 %) IMP-4, 14 (17.3 %) IMP-8, 5 (6.2 %) NDM-1, and 8 (9.9 %) KPC-2. Notably, one Klebsiella pneumoniae isolate produced both IMP-4 and NDM-1, and one Klebsiella oxytoca isolate produced both KPC-2 and IMP-8. Of the 81 clinical CRE isolates, 56 (69.1 %), 33 (40.7 %) and 77 (95.1 %) were identified as CRE by MicroScan WalkAway 96 Plus, Phoenix 100, and Vitek 2 Compact, respectively. The sensitivities/specificities of MicroScan WalkAway, Phoenix 100 and Vitek 2 were 93.8/42.4 %, 54.2/66.7 %, and 75.0/36.4 %, respectively. The MicroScan WalkAway and Vitek 2 systems are more reliable for clinical identification of CRE.
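    As an illustration of how such sensitivities and specificities are computed, the sketch below back-calculates the reported figures from the 48 carbapenemase-positive and 33 carbapenemase-negative isolates; the per-system true-positive/true-negative counts are assumptions inferred from the rounded percentages, not numbers given in the abstract.

    ```python
    # Hedged sketch: reconstruct sensitivity/specificity for the three systems.
    # The TP/TN counts below are back-calculated assumptions, not reported data.
    positives, negatives = 48, 33  # carbapenemase-positive / -negative isolates (from the abstract)

    assumed_counts = {
        "MicroScan WalkAway": {"tp": 45, "tn": 14},   # ~93.8 % / ~42.4 %
        "Phoenix 100":        {"tp": 26, "tn": 22},   # ~54.2 % / ~66.7 %
        "Vitek 2 Compact":    {"tp": 36, "tn": 12},   # ~75.0 % / ~36.4 %
    }

    for system, c in assumed_counts.items():
        sensitivity = c["tp"] / positives   # TP / (TP + FN)
        specificity = c["tn"] / negatives   # TN / (TN + FP)
        print(f"{system}: sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
    ```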

  7. FBI fingerprint identification automation study: AIDS 3 evaluation report. Volume 7: Top down functional analysis

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.

    1980-01-01

    The functions are identified and described in chart form as a tree in which the basic function, to 'Provide National Identification Service,' is shown at the top. The lower levels of the tree branch out to indicate functions and sub-functions. Symbols are used to indicate whether a function was automated in the AIDS 1 or 2 system or is planned to be automated in the AIDS 3 system. The tree chart is shown in detail.

  8. Automated identification of Monogeneans using digital image processing and K-nearest neighbour approaches.

    PubMed

    Yousef Kalafi, Elham; Tan, Wooi Boon; Town, Christopher; Dhillon, Sarinder Kaur

    2016-12-22

    Monogeneans are flatworms (Platyhelminthes) primarily found on the gills and skin of fishes. Monogenean parasites have attachment appendages at their haptoral regions that help them move about the body surface and feed on skin and gill debris. Haptoral attachment organs consist of sclerotized hard parts such as hooks, anchors and marginal hooks. Monogenean species are differentiated based on the morphological characters of their haptoral bars, anchors, marginal hooks and reproductive parts (male and female copulatory organs), as well as soft anatomical parts. The complex structure of these diagnostic organs, and their overlap in microscopic digital images, are impediments to developing a fully automated identification system for monogeneans (LNCS 7666:256-263, 2012), (ISDA; 457-462, 2011), (J Zoolog Syst Evol Res 52(2): 95-99, 2013). In this study, images of hard parts of the haptoral organs, such as bars and anchors, were used to develop a fully automated technique for monogenean species identification by applying image processing techniques and machine learning methods. Images of four monogenean species, namely Sinodiplectanotrema malayanus, Trianchoratus pahangensis, Metahaliotrema mizellei and Metahaliotrema sp. (undescribed), were used to develop the automated technique. K-nearest neighbour (KNN) classification was applied to classify the monogenean specimens based on the extracted features. Half of the dataset was used for training and the other half for testing in the system evaluation. Our approach demonstrated an overall classification accuracy of 90%; with leave-one-out (LOO) cross-validation the accuracy was 91.25%. The methods presented in this study facilitate fast and accurate, fully automated classification of monogeneans at the species level. In future studies, more classes will be included in the model, the time needed to capture the monogenean images will be reduced, and further improvements will be made.
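    A minimal sketch of the classification step described above, using scikit-learn's K-nearest neighbour classifier with a 50/50 split and leave-one-out cross-validation; the feature matrix and species labels here are synthetic placeholders standing in for the shape features extracted from the bar and anchor images.

    ```python
    # Hedged sketch of KNN classification with a 50/50 split and LOO validation.
    # X and y are synthetic placeholders; real features come from haptoral hard-part images.
    import numpy as np
    from sklearn.model_selection import train_test_split, LeaveOneOut, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    y = np.repeat(np.arange(4), 20)                     # 4 pretend species, 20 specimens each
    X = rng.normal(size=(80, 10)) + y[:, None]          # 10 shape features, shifted per species

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
    print("hold-out accuracy:", knn.score(X_test, y_test))

    loo_scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=LeaveOneOut())
    print("leave-one-out accuracy:", loo_scores.mean())
    ```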

  9. Automated Microbiological Detection/Identification System

    PubMed Central

    Aldridge, C.; Jones, P. W.; Gibson, S.; Lanham, J.; Meyer, M.; Vannest, R.; Charles, R.

    1977-01-01

    An automated, computerized system, the AutoMicrobic System, has been developed for the detection, enumeration, and identification of bacteria and yeasts in clinical specimens. The biological basis for the system resides in lyophilized, highly selective and specific media enclosed in wells of a disposable plastic cuvette; introduction of a suitable specimen rehydrates and inoculates the media in the wells. An automated optical system monitors, and the computer interprets, changes in the media, with enumeration and identification results automatically obtained in 13 h. Sixteen different selective media were developed and tested with a variety of seeded (simulated) and clinical specimens. The AutoMicrobic System has been extensively tested with urine specimens, using a urine test kit (Identi-Pak) that contains selective media for Escherichia coli, Proteus species, Pseudomonas aeruginosa, Klebsiella-Enterobacter species, Serratia species, Citrobacter freundii, group D enterococci, Staphylococcus aureus, and yeasts (Candida species and Torulopsis glabrata). The system has been tested with 3,370 seeded urine specimens and 1,486 clinical urines. Agreement with simultaneous conventional (manual) cultures, at levels of 70,000 colony-forming units per ml (or more), was 92% or better for seeded specimens; clinical specimens yielded results of 93% or better for all organisms except P. aeruginosa, where agreement was 86%. System expansion in progress includes antibiotic susceptibility testing and compatibility with most types of clinical specimens. PMID:334798

  10. Department of Defense (DOD) Automated Biometric Identification System (ABIS) Version 1.2: Initial Operational Test and Evaluation Report

    DTIC Science & Technology

    2015-05-01

    Director, Operational Test and Evaluation. This report documents the Initial Operational Test and Evaluation of the Department of Defense (DOD) Automated Biometric Identification System (ABIS) Version 1.2, May 2015.

  11. Going deeper in the automated identification of Herbarium specimens.

    PubMed

    Carranza-Rojas, Jose; Goeau, Herve; Bonnet, Pierre; Mata-Montero, Erick; Joly, Alexis

    2017-08-11

    Hundreds of herbarium collections have accumulated a valuable heritage and knowledge of plants over several centuries. Recent initiatives have started ambitious preservation plans to digitize this information and make it available to botanists and the general public through web portals. However, thousands of sheets are still unidentified at the species level, while numerous sheets should be reviewed and updated following more recent taxonomic knowledge. These annotations and revisions require an unrealistic amount of work for botanists to carry out in a reasonable time. Computer vision and machine learning approaches applied to herbarium sheets are promising but are still not well studied compared to automated species identification from leaf scans or pictures of plants in the field. In this work, we propose to study and evaluate the accuracy with which herbarium images can be exploited for species identification with deep learning technology. In addition, we study whether the combination of herbarium sheets with photos of plants in the field improves accuracy, and finally, we explore whether herbarium images from one region, with its specific flora, can be used for transfer learning to another region with other species, for example a region under-represented in terms of collected data. This is, to our knowledge, the first study that uses deep learning to analyze a large dataset with thousands of species from herbaria. Results show the potential of deep learning for herbarium species identification, particularly by training and testing across different datasets from different herbaria. This could potentially lead to the creation of a semi- or even fully automated system to help taxonomists and experts with their annotation, classification, and revision work.
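    The transfer-learning idea discussed above can be sketched with PyTorch/torchvision (assuming a recent torchvision with the `weights` argument); the number of species, the frozen-backbone strategy, and the dummy batch are illustrative assumptions, not the authors' actual training setup.

    ```python
    # Hedged sketch: fine-tune an ImageNet-pretrained CNN on herbarium sheet images.
    # Class count and training details are placeholders; not the authors' exact pipeline.
    import torch
    import torch.nn as nn
    from torchvision import models

    num_species = 1000                       # placeholder for the number of species
    model = models.resnet18(weights="IMAGENET1K_V1")

    for param in model.parameters():         # freeze the pretrained backbone
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_species)  # new classification head

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch of 224x224 RGB crops.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, num_species, (8,))
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print("dummy batch loss:", float(loss))
    ```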

  12. BoB, a best-of-breed automated text de-identification system for VHA clinical documents.

    PubMed

    Ferrández, Oscar; South, Brett R; Shen, Shuying; Friedlin, F Jeffrey; Samore, Matthew H; Meystre, Stéphane M

    2013-01-01

    De-identification allows faster and more collaborative clinical research while protecting patient confidentiality. Clinical narrative de-identification is a tedious process that can be alleviated by automated natural language processing methods. The goal of this research is the development of an automated text de-identification system for Veterans Health Administration (VHA) clinical documents. We devised a novel stepwise hybrid approach designed to improve the current strategies used for text de-identification. The proposed system is based on a previous study on the best de-identification methods for VHA documents. This best-of-breed automated clinical text de-identification system (aka BoB) tackles the problem as two separate tasks: (1) maximize patient confidentiality by redacting as much protected health information (PHI) as possible; and (2) leave de-identified documents in a usable state preserving as much clinical information as possible. We evaluated BoB with a manually annotated corpus of a variety of VHA clinical notes, as well as with the 2006 i2b2 de-identification challenge corpus. We present evaluations at the instance- and token-level, with detailed results for BoB's main components. Moreover, an existing text de-identification system was also included in our evaluation. BoB's design efficiently takes advantage of the methods implemented in its pipeline, resulting in high sensitivity values (especially for sensitive PHI categories) and a limited number of false positives. Our system successfully addressed VHA clinical document de-identification, and its hybrid stepwise design demonstrates robustness and efficiency, prioritizing patient confidentiality while leaving most clinical information intact.
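    BoB itself is a hybrid stepwise NLP pipeline; as a much simpler illustration of the redaction task it addresses, the sketch below uses regular expressions to flag a few obvious PHI patterns (dates, phone numbers, record numbers). The patterns and the sample note are assumptions for illustration and do not reflect BoB's actual methods.

    ```python
    # Hedged sketch: naive regex-based redaction of a few PHI patterns.
    # Illustrates the task only; BoB uses a far richer hybrid NLP pipeline.
    import re

    PHI_PATTERNS = {
        "DATE":  r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
        "PHONE": r"\b\d{3}[-.]\d{3}[-.]\d{4}\b",
        "MRN":   r"\bMRN[:\s]*\d{6,10}\b",
    }

    def redact(text: str) -> str:
        for label, pattern in PHI_PATTERNS.items():
            text = re.sub(pattern, f"[{label}]", text, flags=re.IGNORECASE)
        return text

    note = "Seen on 03/14/2012, MRN: 00123456, call 555-867-5309 with results."
    print(redact(note))  # Seen on [DATE], [MRN], call [PHONE] with results.
    ```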

  13. Semi-Automated Identification of Rocks in Images

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin; Castano, Andres; Anderson, Robert

    2006-01-01

    Rock Identification Toolkit Suite is a computer program that assists users in identifying and characterizing rocks shown in images returned by the Mars Exploration Rover mission. Included in the program are components for automated finding of rocks, interactive adjustment of rock outlines, active contouring of rocks, and automated analysis of shapes in two dimensions. The program assists users in evaluating the surface properties of rocks and soil and reports basic properties of rocks. The program requires either the Mac OS X operating system running on a G4 (or more capable) processor or a Linux operating system running on a Pentium (or more capable) processor, plus at least 128 MB of random-access memory.

  14. Comparison of the techniques for the identification of the epidural space using the loss-of-resistance technique or an automated syringe - results of a randomized double-blind study.

    PubMed

    Duniec, Larysa; Nowakowski, Piotr; Sieczko, Jakub; Chlebus, Marcin; Łazowski, Tomasz

    2016-01-01

    The conventional loss-of-resistance technique for identification of the epidural space is highly dependent on the anaesthetist's personal experience and is susceptible to technical errors. Therefore, an alternative, automated technique was devised to overcome the drawbacks of the traditional method. The aim of the study was to compare the efficacy of epidural space identification and the complication rate between two groups: the automated syringe and the conventional loss-of-resistance method. Forty-seven patients scheduled for orthopaedic and gynaecology procedures under epidural anaesthesia were enrolled into the study. The number of attempts, ease of epidural space identification, complication rate and the patients' acceptance of the two techniques were evaluated. The majority of blocks were performed by trainee anaesthetists (91.5%). No statistically significant difference was found between the groups in the number of needle insertion attempts (1 vs. 2), the efficacy of epidural anaesthesia or the number of complications. The ease of epidural space identification, as assessed by the anaesthetist, was significantly better (P = 0.011) in the automated group (87.5% vs. 52.4%). A similar proportion of patients (92% vs. 94%) in both groups stated they would accept epidural anaesthesia in the future. The automated and loss-of-resistance methods of epidural space identification proved to be equivalent in terms of efficacy and safety. Since the use of the automated technique may facilitate epidural space identification, it may be regarded as a useful technique for anaesthetists inexperienced in epidural anaesthesia, or for trainees.

  15. AUTOMATED BIOCHEMICAL IDENTIFICATION OF BACTERIAL FISH PATHOGENS USING THE ABBOTT QUANTUM II

    EPA Science Inventory

    The Quantum II, originally designed by Abbott Diagnostics for automated rapid identification of members of the Enterobacteriaceae, was adapted for the identification of bacterial fish pathogens. The instrument operates as a spectrophotometer at a wavelength of 492.600 nm. Sample cartri...

  16. Reliability of automated biochemical identification of Burkholderia pseudomallei is regionally dependent.

    PubMed

    Podin, Yuwana; Kaestli, Mirjam; McMahon, Nicole; Hennessy, Jann; Ngian, Hie Ung; Wong, Jin Shyan; Mohana, Anand; Wong, See Chang; William, Timothy; Mayo, Mark; Baird, Robert W; Currie, Bart J

    2013-09-01

    Misidentifications of Burkholderia pseudomallei as Burkholderia cepacia by Vitek 2 have occurred. Multidimensional scaling ordination of biochemical profiles of 217 Malaysian and Australian B. pseudomallei isolates found clustering of misidentified B. pseudomallei isolates from Malaysian Borneo. Specificity of B. pseudomallei identification in Vitek 2 and potentially other automated identification systems is regionally dependent.

  17. Automated Photoreceptor Cell Identification on Nonconfocal Adaptive Optics Images Using Multiscale Circular Voting.

    PubMed

    Liu, Jianfei; Jung, HaeWon; Dubra, Alfredo; Tam, Johnny

    2017-09-01

    Adaptive optics scanning light ophthalmoscopy (AOSLO) has enabled quantification of the photoreceptor mosaic in the living human eye using metrics such as cell density and average spacing. These rely on the identification of individual cells. Here, we demonstrate a novel approach for computer-aided identification of cone photoreceptors on nonconfocal split detection AOSLO images. Algorithms for identification of cone photoreceptors were developed, based on multiscale circular voting (MSCV) in combination with a priori knowledge that split detection images resemble Nomarski differential interference contrast images, in which dark and bright regions are present on the two sides of each cell. The proposed algorithm locates dark and bright region pairs, iteratively refining the identification across multiple scales. Identification accuracy was assessed in data from 10 subjects by comparing automated identifications with manual labeling, followed by computation of density and spacing metrics for comparison to histology and published data. There was good agreement between manual and automated cone identifications with overall recall, precision, and F1 score of 92.9%, 90.8%, and 91.8%, respectively. On average, computed density and spacing values using automated identification were within 10.7% and 11.2% of the expected histology values across eccentricities ranging from 0.5 to 6.2 mm. There was no statistically significant difference between MSCV-based and histology-based density measurements (P = 0.96, Kolmogorov-Smirnov 2-sample test). MSCV can accurately detect cone photoreceptors on split detection images across a range of eccentricities, enabling quick, objective estimation of photoreceptor mosaic metrics, which will be important for future clinical trials utilizing adaptive optics.

  18. Reliability of Automated Biochemical Identification of Burkholderia pseudomallei Is Regionally Dependent

    PubMed Central

    Podin, Yuwana; Kaestli, Mirjam; McMahon, Nicole; Hennessy, Jann; Ngian, Hie Ung; Wong, Jin Shyan; Mohana, Anand; Wong, See Chang; William, Timothy; Mayo, Mark; Baird, Robert W.

    2013-01-01

    Misidentifications of Burkholderia pseudomallei as Burkholderia cepacia by Vitek 2 have occurred. Multidimensional scaling ordination of biochemical profiles of 217 Malaysian and Australian B. pseudomallei isolates found clustering of misidentified B. pseudomallei isolates from Malaysian Borneo. Specificity of B. pseudomallei identification in Vitek 2 and potentially other automated identification systems is regionally dependent. PMID:23784129

  19. Automated Photoreceptor Cell Identification on Nonconfocal Adaptive Optics Images Using Multiscale Circular Voting

    PubMed Central

    Liu, Jianfei; Jung, HaeWon; Dubra, Alfredo; Tam, Johnny

    2017-01-01

    Purpose Adaptive optics scanning light ophthalmoscopy (AOSLO) has enabled quantification of the photoreceptor mosaic in the living human eye using metrics such as cell density and average spacing. These rely on the identification of individual cells. Here, we demonstrate a novel approach for computer-aided identification of cone photoreceptors on nonconfocal split detection AOSLO images. Methods Algorithms for identification of cone photoreceptors were developed, based on multiscale circular voting (MSCV) in combination with a priori knowledge that split detection images resemble Nomarski differential interference contrast images, in which dark and bright regions are present on the two sides of each cell. The proposed algorithm locates dark and bright region pairs, iteratively refining the identification across multiple scales. Identification accuracy was assessed in data from 10 subjects by comparing automated identifications with manual labeling, followed by computation of density and spacing metrics for comparison to histology and published data. Results There was good agreement between manual and automated cone identifications with overall recall, precision, and F1 score of 92.9%, 90.8%, and 91.8%, respectively. On average, computed density and spacing values using automated identification were within 10.7% and 11.2% of the expected histology values across eccentricities ranging from 0.5 to 6.2 mm. There was no statistically significant difference between MSCV-based and histology-based density measurements (P = 0.96, Kolmogorov-Smirnov 2-sample test). Conclusions MSCV can accurately detect cone photoreceptors on split detection images across a range of eccentricities, enabling quick, objective estimation of photoreceptor mosaic metrics, which will be important for future clinical trials utilizing adaptive optics. PMID:28873173

  20. Critical Assessment of Small Molecule Identification 2016: automated methods.

    PubMed

    Schymanski, Emma L; Ruttkies, Christoph; Krauss, Martin; Brouard, Céline; Kind, Tobias; Dührkop, Kai; Allen, Felicity; Vaniya, Arpana; Verdegem, Dries; Böcker, Sebastian; Rousu, Juho; Shen, Huibin; Tsugawa, Hiroshi; Sajed, Tanvir; Fiehn, Oliver; Ghesquière, Bart; Neumann, Steffen

    2017-03-27

    The fourth round of the Critical Assessment of Small Molecule Identification (CASMI) Contest (www.casmi-contest.org) was held in 2016, with two new categories for automated methods. This article covers the 208 challenges in Categories 2 and 3, without and with metadata, from organization, participation, results and post-contest evaluation of CASMI 2016 through to perspectives for future contests and small molecule annotation/identification. The Input Output Kernel Regression (CSI:IOKR) machine learning approach performed best in "Category 2: Best Automatic Structural Identification-In Silico Fragmentation Only", won by Team Brouard with 41% challenge wins. The winner of "Category 3: Best Automatic Structural Identification-Full Information" was Team Kind (MS-FINDER), with 76% challenge wins. The best methods were able to achieve over 30% Top 1 ranks in Category 2, with all methods ranking the correct candidate in the Top 10 in around 50% of challenges. This success rate rose to 70% Top 1 ranks in Category 3, with candidates in the Top 10 in over 80% of the challenges. The machine learning and chemistry-based approaches are shown to perform in complementary ways. The improvement in (semi-)automated fragmentation methods for small molecule identification has been substantial. The achieved high rates of correct candidates in the Top 1 and Top 10, despite large candidate numbers, open up great possibilities for high-throughput annotation of untargeted analysis for "known unknowns". As more high quality training data becomes available, the improvements in machine learning methods will likely continue, but the alternative approaches still provide valuable complementary information. Improved integration of experimental context will also improve identification success further for "real life" annotations. The true "unknown unknowns" remain to be evaluated in future CASMI contests.
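    The Top 1 / Top 10 figures above are rank-based success rates over the set of challenges; a minimal sketch of that bookkeeping is shown below, with made-up candidate rankings as placeholders rather than CASMI 2016 results.

    ```python
    # Hedged sketch: compute Top-1 and Top-10 success rates from candidate rankings.
    # The ranks below are placeholders, not CASMI 2016 data.
    ranks = [1, 3, 1, 12, 7, 1, 2, 25, 1, 9]   # rank of the correct structure per challenge

    def top_k_rate(ranks, k):
        return sum(r <= k for r in ranks) / len(ranks)

    print("Top-1 :", top_k_rate(ranks, 1))
    print("Top-10:", top_k_rate(ranks, 10))
    ```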

  21. Automated species-level identification and segmentation of planktonic foraminifera using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Marchitto, T. M., Jr.; Mitra, R.; Zhong, B.; Ge, Q.; Kanakiya, B.; Lobaton, E.

    2017-12-01

    Identification and picking of foraminifera from sediment samples is often a laborious and repetitive task. Previous attempts to automate this process have met with limited success, but we show that recent advances in machine learning can be brought to bear on the problem. As a proof of concept, we have developed a system that is capable of recognizing six species of extant planktonic foraminifera that are commonly used in paleoceanographic studies. Our pipeline begins with digital photographs taken under 16 different illuminations using an LED ring, which are then fused into a single 3D image. Labeled image sets were used to train various types of image classification algorithms, and performance on unlabeled image sets was measured in terms of precision (whether IDs are correct) and recall (what fraction of the target species are found). We find that convolutional neural network (CNN) approaches achieve precision and recall values between 80 and 90%, which is similar in precision to, and better in recall than, human expert performance on the same type of photographs. We have also trained a CNN to segment the 3D images into individual chambers and apertures, which can not only improve identification performance but also automate the measurement of foraminifera for morphometric studies. Given that there are only 35 species of extant planktonic foraminifera larger than 150 μm, we suggest that a fully automated characterization of this assemblage is attainable. This is the first step toward the realization of a foram-picking robot.
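    A minimal PyTorch sketch of a six-class image classifier of the kind described above; the architecture, the 128x128 single-channel input, and the dummy batch are illustrative assumptions, not the authors' network.

    ```python
    # Hedged sketch: a small CNN for 6-class foraminifera images (illustrative only).
    import torch
    import torch.nn as nn

    class SmallForamCNN(nn.Module):
        def __init__(self, num_classes: int = 6):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 32 * 32, num_classes)  # assumes 128x128 input

        def forward(self, x):
            x = self.features(x)
            return self.classifier(torch.flatten(x, 1))

    model = SmallForamCNN()
    logits = model(torch.randn(4, 1, 128, 128))   # batch of 4 fused grayscale images
    print(logits.shape)                           # torch.Size([4, 6])
    ```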

  22. Improvement of Automated Identification of the Heart Wall in Echocardiography by Suppressing Clutter Component

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi

    2013-07-01

    For the facilitation of analysis and the elimination of operator dependence in estimating myocardial function in echocardiography, we previously developed a method for automated identification of the heart wall. However, misclassified regions remain because the magnitude-squared coherence (MSC) function of echo signals, which is one of the features in the previous method, is strongly affected by clutter components such as multiple reflections and off-axis echoes from external tissue or the nearby myocardium. The objective of the present study is to improve the performance of automated identification of the heart wall. For this purpose, we proposed a method to suppress the effect of the clutter components on the MSC of echo signals by applying an adaptive moving target indicator (MTI) filter to the echo signals. In vivo experimental results showed that the misclassified regions were significantly reduced using the proposed method in the longitudinal-axis view of the heart.
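    The magnitude-squared coherence feature mentioned above can be computed with SciPy; the sketch below uses two synthetic echo signals and an assumed sampling rate as placeholders for the actual ultrasonic RF data.

    ```python
    # Hedged sketch: magnitude-squared coherence (MSC) between two synthetic echo signals.
    import numpy as np
    from scipy.signal import coherence

    fs = 20_000_000                    # assumed sampling rate (Hz), placeholder
    t = np.arange(0, 1e-3, 1 / fs)
    common = np.sin(2 * np.pi * 5e6 * t)
    x = common + 0.3 * np.random.randn(t.size)   # echo at one beam position
    y = common + 0.3 * np.random.randn(t.size)   # echo at a neighbouring position

    f, msc = coherence(x, y, fs=fs, nperseg=256)  # MSC = |Pxy|^2 / (Pxx * Pyy)
    print("mean MSC:", msc.mean())
    ```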

  23. [A comparative study between the Vitek YBC and Microscan Walk Away RYID automated systems with conventional phenotypic methods for the identification of yeasts of clinical interest].

    PubMed

    Ferrara, Giuseppe; Mercedes Panizol, Maria; Mazzone, Marja; Delia Pequeneze, Maria; Reviakina, Vera

    2014-12-01

    The aim of this study was to compare the identification of clinically relevant yeasts by the Vitek YBC and MicroScan WalkAway RYID automated methods with conventional phenotypic methods. One hundred and ninety-three yeast strains isolated from clinical samples and five control strains were used. All the yeasts were identified by the automated methods mentioned above and by conventional phenotypic methods such as carbohydrate assimilation, visualization of microscopic morphology on corn meal agar and the use of chromogenic agar. Variables were assessed by 2 x 2 contingency tables, McNemar's chi-square test and the kappa index, and concordance values were calculated, as well as major and minor errors for the automated methods. Yeasts were divided into two groups: (1) frequently isolated and (2) rarely isolated. The Vitek YBC and MicroScan WalkAway RYID systems were concordant with the conventional phenotypic methods in 88.4% and 85.9% of cases, respectively. Although both automated systems can be used for yeast identification, the presence of major and minor errors indicates the possibility of misidentifications; therefore, the operator of this equipment must use, in parallel, phenotypic tests such as visualization of microscopic morphology on corn meal agar and chromogenic agar, especially for infrequently isolated yeasts. Automated systems are a valuable tool; however, the expertise and judgment of the microbiologist remain an important strength to ensure the quality of the results.
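    A short sketch of the agreement statistics mentioned above (the kappa index and McNemar's test) using scikit-learn and statsmodels; the paired identification outcomes are placeholders, not the study data.

    ```python
    # Hedged sketch: Cohen's kappa and McNemar's test for paired method agreement (placeholder data).
    import numpy as np
    from sklearn.metrics import cohen_kappa_score
    from statsmodels.stats.contingency_tables import mcnemar

    conventional = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])  # 1 = correct identification
    automated    = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 1])

    print("kappa:", cohen_kappa_score(conventional, automated))

    # 2x2 table of paired outcomes: rows = conventional correct/incorrect, cols = automated.
    table = [[sum((conventional == 1) & (automated == 1)), sum((conventional == 1) & (automated == 0))],
             [sum((conventional == 0) & (automated == 1)), sum((conventional == 0) & (automated == 0))]]
    print("McNemar p-value:", mcnemar(table, exact=True).pvalue)
    ```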

  24. FBI fingerprint identification automation study. AIDS 3 evaluation report. Volume 1: Compendium

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.

    1980-01-01

    The primary features of the overall study are encompassed and an evaluation of an automation system is presented. Objectives of the study are described, methods of evaluation are summarized and conclusions about the system's feasibility are presented. Also included is a brief history of fingerprint automation activities within the FBI, the organization of the FBI, a bibliography of documents and records, a data dictionary and a reference set of all of the transparencies presented throughout the study.

  25. Automated Firearms Identification System (AFIDS), phase 1

    NASA Technical Reports Server (NTRS)

    Blackwell, R. J.; Framan, E. P.

    1974-01-01

    Items critical to the future development of an automated firearms identification system (AFIDS) have been examined, with the following specific results: (1) Types of objective data, that can be utilized to help establish a more factual basis for determining identity and nonidentity between pairs of fired bullets, have been identified. (2) A simulation study has indicated that randomly produced lines, similar in nature to the individual striations on a fired bullet, can be modeled and that random sequences, when compared to each other, have predictable relationships. (3) A schematic diagram of the general concept for AFIDS has been developed and individual elements of this system have been briefly tested for feasibility. Future implementation of such a proposed system will depend on such factors as speed, utility, projected total cost and user requirements for growth. The success of the proposed system, when operational, would depend heavily on existing firearms examiners.
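    The simulation idea in point (2) can be illustrated with a toy model: random striation patterns are generated as 0/1 line sequences and compared by correlation, so that unrelated random patterns score near zero while a pattern compared with a noisy copy of itself scores much higher. The model below is an assumption for illustration, not the study's actual simulation.

    ```python
    # Hedged toy model: compare random striation (line) sequences by correlation.
    import numpy as np

    rng = np.random.default_rng(1)
    n_lines = 200

    pattern_a = rng.integers(0, 2, n_lines)                 # random striation pattern
    pattern_b = rng.integers(0, 2, n_lines)                 # unrelated random pattern
    noisy_copy = pattern_a.copy()
    flip = rng.random(n_lines) < 0.1                        # perturb 10% of positions
    noisy_copy[flip] = 1 - noisy_copy[flip]

    print("unrelated patterns r =", np.corrcoef(pattern_a, pattern_b)[0, 1])
    print("same pattern (noisy) r =", np.corrcoef(pattern_a, noisy_copy)[0, 1])
    ```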

  26. COMPARISON BETWEEN AUTOMATED SYSTEM AND PCR-BASED METHOD FOR IDENTIFICATION AND ANTIMICROBIAL SUSCEPTIBILITY PROFILE OF CLINICAL Enterococcus spp

    PubMed Central

    Furlaneto-Maia, Luciana; Rocha, Kátia Real; Siqueira, Vera Lúcia Dias; Furlaneto, Márcia Cristina

    2014-01-01

    Enterococci are increasingly responsible for nosocomial infections worldwide. This study was undertaken to compare the identification and susceptibility profiles of clinical Enterococcus spp. obtained with an automated MicroScan system, a PCR-based assay and the disk diffusion assay. We evaluated 30 clinical isolates of Enterococcus spp. Isolates were identified by the MicroScan system and by the PCR-based assay. The detection of antibiotic resistance genes (vancomycin, gentamicin, tetracycline and erythromycin) was also determined by PCR. Antimicrobial susceptibilities to vancomycin (30 µg), gentamicin (120 µg), tetracycline (30 µg) and erythromycin (15 µg) were tested by the automated system and the disk diffusion method, and were interpreted according to the criteria recommended in the CLSI guidelines. Concerning Enterococcus identification, the overall agreement between the PCR method and the automated system was 90.0% (27/30). For all isolates of E. faecium and E. faecalis we observed 100% agreement. Resistance frequencies were higher in E. faecium than in E. faecalis. The resistance rates obtained were highest for erythromycin (86.7%), followed by vancomycin (80.0%), tetracycline (43.3%) and gentamicin (33.3%). The correlation between disk diffusion and automation revealed agreement for the majority of the antibiotics, with category agreement rates of > 80%. In the PCR-based assay, the vanA gene was detected in 100% of vancomycin-resistant enterococci. This assay is simple to conduct and reliable for the identification of clinically relevant enterococci. The data obtained reinforce the need for improvement of the automated system to identify some enterococci. PMID:24626409

  27. Manta Matcher: automated photographic identification of manta rays using keypoint features.

    PubMed

    Town, Christopher; Marshall, Andrea; Sethasathien, Nutthaporn

    2013-07-01

    For species which bear unique markings, such as natural spot patterning, field work has become increasingly more reliant on visual identification to recognize and catalog particular specimens or to monitor individuals within populations. While many species of interest exhibit characteristic markings that in principle allow individuals to be identified from photographs, scientists are often faced with the task of matching observations against databases of hundreds or thousands of images. We present a novel technique for automated identification of manta rays (Manta alfredi and Manta birostris) by means of a pattern-matching algorithm applied to images of their ventral surface area. Automated visual identification has recently been developed for several species. However, such methods are typically limited to animals that can be photographed above water, or whose markings exhibit high contrast and appear in regular constellations. While manta rays bear natural patterning across their ventral surface, these patterns vary greatly in their size, shape, contrast, and spatial distribution. Our method is the first to have proven successful at achieving high matching accuracies on a large corpus of manta ray images taken under challenging underwater conditions. Our method is based on automated extraction and matching of keypoint features using the Scale-Invariant Feature Transform (SIFT) algorithm. In order to cope with the considerable variation in quality of underwater photographs, we also incorporate preprocessing and image enhancement steps. Furthermore, we use a novel pattern-matching approach that results in better accuracy than the standard SIFT approach and other alternative methods. We present quantitative evaluation results on a data set of 720 images of manta rays taken under widely different conditions. We describe a novel automated pattern representation and matching method that can be used to identify individual manta rays from photographs. The method has been
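    A minimal OpenCV sketch of the keypoint-matching core (SIFT features plus a ratio test), assuming OpenCV 4.4 or later where SIFT lives in the main module; the image paths are placeholders, and the authors' preprocessing, enhancement, and custom pattern-matching refinements are omitted.

    ```python
    # Hedged sketch: SIFT keypoint matching between two ventral-pattern photographs.
    # Image paths are placeholders; the authors' enhancement and custom matching are omitted.
    import cv2

    img1 = cv2.imread("manta_query.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("manta_reference.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)

    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print("good keypoint matches:", len(good))
    ```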

  28. Manta Matcher: automated photographic identification of manta rays using keypoint features

    PubMed Central

    Town, Christopher; Marshall, Andrea; Sethasathien, Nutthaporn

    2013-01-01

    For species which bear unique markings, such as natural spot patterning, field work has become increasingly more reliant on visual identification to recognize and catalog particular specimens or to monitor individuals within populations. While many species of interest exhibit characteristic markings that in principle allow individuals to be identified from photographs, scientists are often faced with the task of matching observations against databases of hundreds or thousands of images. We present a novel technique for automated identification of manta rays (Manta alfredi and Manta birostris) by means of a pattern-matching algorithm applied to images of their ventral surface area. Automated visual identification has recently been developed for several species. However, such methods are typically limited to animals that can be photographed above water, or whose markings exhibit high contrast and appear in regular constellations. While manta rays bear natural patterning across their ventral surface, these patterns vary greatly in their size, shape, contrast, and spatial distribution. Our method is the first to have proven successful at achieving high matching accuracies on a large corpus of manta ray images taken under challenging underwater conditions. Our method is based on automated extraction and matching of keypoint features using the Scale-Invariant Feature Transform (SIFT) algorithm. In order to cope with the considerable variation in quality of underwater photographs, we also incorporate preprocessing and image enhancement steps. Furthermore, we use a novel pattern-matching approach that results in better accuracy than the standard SIFT approach and other alternative methods. We present quantitative evaluation results on a data set of 720 images of manta rays taken under widely different conditions. We describe a novel automated pattern representation and matching method that can be used to identify individual manta rays from photographs. The method has been

  29. galaxie--CGI scripts for sequence identification through automated phylogenetic analysis.

    PubMed

    Nilsson, R Henrik; Larsson, Karl-Henrik; Ursing, Björn M

    2004-06-12

    The prevalent use of similarity searches such as BLAST to identify sequences and species implicitly assumes that the reference database has extensive sequence sampling. This is often not the case, limiting the reliability of the outcome as a basis for sequence identification. Phylogenetic inference outperforms similarity searches in retrieving correct phylogenies and, consequently, sequence identities, and a project was initiated to design a freely available script package for sequence identification through automated Web-based phylogenetic analysis. Three CGI scripts were designed to facilitate qualified sequence identification from a Web interface. Query sequences are aligned to pre-made alignments or to alignments made by ClustalW with entries retrieved from a BLAST search. The subsequent phylogenetic analysis is based on the PHYLIP package for inferring neighbor-joining and parsimony trees. The scripts are highly configurable. A service installation and a version for local use are found at http://andromeda.botany.gu.se/galaxiewelcome.html and http://galaxie.cgb.ki.se
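    The neighbour-joining step of such a pipeline can be sketched with Biopython; the alignment file name is a placeholder, and the galaxie scripts themselves wrap ClustalW and PHYLIP rather than Biopython, so this only illustrates the tree-building idea.

    ```python
    # Hedged sketch: neighbour-joining tree from a pre-made alignment using Biopython.
    # galaxie uses ClustalW/PHYLIP; this only illustrates the NJ step on a placeholder file.
    from Bio import AlignIO, Phylo
    from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

    alignment = AlignIO.read("query_plus_references.fasta", "fasta")  # placeholder file
    distances = DistanceCalculator("identity").get_distance(alignment)
    tree = DistanceTreeConstructor().nj(distances)
    Phylo.draw_ascii(tree)   # the query's placement among references suggests its identity
    ```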

  30. Fully-automated identification of fish species based on otolith contour: using short-time Fourier transform and discriminant analysis (STFT-DA).

    PubMed

    Salimi, Nima; Loh, Kar Hoe; Kaur Dhillon, Sarinder; Chong, Ving Ching

    2016-01-01

    Background. Fish species may be identified based on their unique otolith shape or contour. Several pattern recognition methods have been proposed to classify fish species through morphological features of the otolith contours. However, there has been no fully automated species identification model with an accuracy higher than 80%. The purpose of the current study is to develop a fully automated model, based on the otolith contours, to identify fish species with high classification accuracy. Methods. Images of the right sagittal otoliths of 14 fish species from three families, namely Sciaenidae, Ariidae, and Engraulidae, were used to develop the proposed identification model. Short-time Fourier transform (STFT) was used, for the first time in the area of otolith shape analysis, to extract important features of the otolith contours. Discriminant analysis (DA) was used as the classification technique to train and test the model on the extracted features. Results. Performance of the model was demonstrated using species from the three families separately, as well as all species combined. Overall classification accuracy of the model was greater than 90% in all cases. In addition, the effects of STFT variables on the performance of the identification model were explored in this study. Conclusions. The short-time Fourier transform could determine important features of the otolith outlines. The fully automated model proposed in this study (STFT-DA) could predict the species of an unknown specimen with acceptable identification accuracy. The model code can be accessed at http://mybiodiversityontologies.um.edu.my/Otolith/ and https://peerj.com/preprints/1517/. The current model has the flexibility to be used for more species and families in future studies.
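    A minimal sketch of the STFT-plus-discriminant-analysis idea: the otolith contour is treated as a one-dimensional signal, STFT magnitudes serve as features, and linear discriminant analysis classifies species. The contour signals here are synthetic placeholders, and the exact feature construction differs from the published model.

    ```python
    # Hedged sketch: STFT features from contour signals + linear discriminant analysis.
    # Contours are synthetic placeholders; the published STFT-DA feature set differs.
    import numpy as np
    from scipy.signal import stft
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    def contour_signal(freq, n=256):
        """Radius-vs-angle signal standing in for a digitized otolith outline."""
        theta = np.linspace(0, 2 * np.pi, n)
        return 1 + 0.1 * np.sin(freq * theta) + 0.02 * rng.normal(size=n)

    X, y = [], []
    for label, freq in enumerate([3, 5, 8]):          # three pretend species
        for _ in range(30):
            _, _, Z = stft(contour_signal(freq), nperseg=64)
            X.append(np.abs(Z).ravel())               # STFT magnitudes as features
            y.append(label)

    lda = LinearDiscriminantAnalysis().fit(X[::2], y[::2])   # train on half the data
    print("test accuracy:", lda.score(X[1::2], y[1::2]))
    ```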

  31. Automated protein identification by the combination of MALDI MS and MS/MS spectra from different instruments.

    PubMed

    Levander, Fredrik; James, Peter

    2005-01-01

    The identification of proteins separated on two-dimensional gels is most commonly performed by trypsin digestion and subsequent matrix-assisted laser desorption ionization (MALDI) with time-of-flight (TOF). Recently, atmospheric pressure (AP) MALDI coupled to an ion trap (IT) has emerged as a convenient method to obtain tandem mass spectra (MS/MS) from samples on MALDI target plates. In the present work, we investigated the feasibility of using the two methodologies in line as a standard method for protein identification. In this setup, the high mass accuracy MALDI-TOF spectra are used to calibrate the peptide precursor masses in the lower mass accuracy AP-MALDI-IT MS/MS spectra. Several software tools were developed to automate the analysis process. Two sets of MALDI samples, consisting of 142 and 421 gel spots, respectively, were analyzed in a highly automated manner. In the first set, the protein identification rate increased from 61% for MALDI-TOF only to 85% for MALDI-TOF combined with AP-MALDI-IT. In the second data set the increase in protein identification rate was from 44% to 58%. AP-MALDI-IT MS/MS spectra were in general less effective than the MALDI-TOF spectra for protein identification, but the combination of the two methods clearly enhanced the confidence in protein identification.

  32. Automating concept identification in the electronic medical record: an experiment in extracting dosage information.

    PubMed Central

    Evans, D. A.; Brownlow, N. D.; Hersh, W. R.; Campbell, E. M.

    1996-01-01

    We discuss the development and evaluation of an automated procedure for extracting drug-dosage information from clinical narratives. The process was developed rapidly using existing technology and resources, including categories of terms from UMLS96. Evaluations over a large training set and a smaller test set of medical records demonstrate an approximately 80% rate of exact and partial matches on target phrases, with few false positives and a modest rate of false negatives. The results suggest a strategy for automating general concept identification in electronic medical records. PMID:8947694
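    A toy illustration of the extraction task described above: a regular expression that pulls dosage phrases (drug name, dose, unit, frequency) from narrative text. The pattern, vocabulary, and sample note are assumptions for illustration, not the authors' UMLS-based procedure.

    ```python
    # Hedged sketch: toy regex extraction of dosage phrases (not the UMLS-based method).
    import re

    DOSAGE = re.compile(
        r"(?P<drug>[A-Za-z]+)\s+(?P<dose>\d+(?:\.\d+)?)\s*(?P<unit>mg|mcg|g|units?)"
        r"(?:\s+(?P<freq>once daily|twice daily|q\d+h|bid|tid|prn))?",
        re.IGNORECASE,
    )

    note = "Started metoprolol 25 mg twice daily; continue aspirin 81 mg once daily."
    for m in DOSAGE.finditer(note):
        print(m.groupdict())
    ```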

  33. Automated identification of basalt spectra in Clementine lunar data

    NASA Astrophysics Data System (ADS)

    Antonenko, I.; Osinski, G. R.

    2011-06-01

    The identification of fresh basalt spectra plays an important role in lunar stratigraphic studies; however, the process can be time consuming and labor intensive. Thus motivated, we developed an empirically derived algorithm for the automated identification of fresh basalt spectra from Clementine UVVIS data. This algorithm has the following four parameters and limits: BC Ratio = 3(R950-R900)/(R900-R750) < 1.1; CD Delta = (R1000-R950)/R750 - 1.09(R950-R900)/R750 > 0.003 and < 0.06; B Slope = (R900-R750)/(3R750) < -0.012; and Band Depth = (R750-R950)/(R750-R415) > 0.1, where R750 represents the unnormalized reflectance of the 750 nm Clementine band, and so on. Algorithm results were found to be accurate to within an error of 4.5% with respect to visual classification, though olivine spectra may be under-represented. Overall, fresh basalts identified by the algorithm are consistent with expectations and previous work in the Mare Humorum area, though accuracy in other areas has not yet been tested. Great potential exists in using this algorithm for identifying craters that have excavated basalts, estimating the thickness of mare and cryptomare deposits, and other applications.
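    The four-parameter screen can be written directly as code; the sketch below mirrors the thresholds quoted in the abstract, with the band reflectances supplied as function arguments. The sample spectrum values are placeholders for illustration only.

    ```python
    # Hedged sketch: the four-parameter fresh-basalt screen, mirroring the quoted thresholds.
    # R415..R1000 are unnormalized Clementine UVVIS band reflectances; sample values are placeholders.
    def is_fresh_basalt(R415, R750, R900, R950, R1000):
        bc_ratio   = 3 * (R950 - R900) / (R900 - R750)
        cd_delta   = (R1000 - R950) / R750 - 1.09 * (R950 - R900) / R750
        b_slope    = (R900 - R750) / (3 * R750)
        band_depth = (R750 - R950) / (R750 - R415)
        return (bc_ratio < 1.1
                and 0.003 < cd_delta < 0.06
                and b_slope < -0.012
                and band_depth > 0.1)

    # Placeholder spectrum (illustrative values only); evaluates to True for this example.
    print(is_fresh_basalt(R415=0.060, R750=0.110, R900=0.100, R950=0.097, R1000=0.098))
    ```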

  34. Performance of optimized McRAPD in identification of 9 yeast species frequently isolated from patient samples: potential for automation.

    PubMed

    Trtkova, Jitka; Pavlicek, Petr; Ruskova, Lenka; Hamal, Petr; Koukalova, Dagmar; Raclavsky, Vladislav

    2009-11-10

    Rapid, easy, economical and accurate species identification of yeasts isolated from clinical samples remains an important challenge for routine microbiological laboratories, because susceptibility to antifungal agents, the probability of developing resistance and the ability to cause disease vary among species. To overcome the drawbacks of the currently available techniques, we have recently proposed an innovative approach to yeast species identification based on RAPD genotyping, termed McRAPD (Melting curve of RAPD). Here we have evaluated its performance on a broader spectrum of clinically relevant yeast species and also examined the potential of automated and semi-automated interpretation of McRAPD data for yeast species identification. A simple, fully automated algorithm based on normalized melting data identified 80% of the isolates correctly. When this algorithm was supplemented by semi-automated matching of decisive peaks in first-derivative plots, 87% of the isolates were identified correctly. However, computer-aided visual matching of derivative plots showed the best performance, with an average of 98.3% of isolates identified accurately, almost matching the 99.4% performance of traditional RAPD fingerprinting. Since the McRAPD technique omits gel electrophoresis and can be performed in a rapid, economical and convenient way, we believe that it can find its place in the routine identification of medically important yeasts in advanced diagnostic laboratories that are able to adopt this technique. It can also serve as a broad-range high-throughput technique for epidemiological surveillance.
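    The derivative-plot step mentioned above boils down to computing -dF/dT from the normalized melting data and locating its peaks; a minimal sketch with a synthetic melting curve follows (the curve is a placeholder, not McRAPD output).

    ```python
    # Hedged sketch: derivative melting curve (-dF/dT) and peak picking on synthetic data.
    import numpy as np
    from scipy.signal import find_peaks

    temps = np.linspace(70, 95, 251)                       # degrees C
    fluor = 1 / (1 + np.exp((temps - 84.0) / 0.8))         # synthetic normalized melting curve

    neg_dF_dT = -np.gradient(fluor, temps)                 # derivative plot
    peaks, _ = find_peaks(neg_dF_dT, height=0.05)
    print("melting peak(s) at", temps[peaks], "degrees C")  # ~84 C for this synthetic curve
    ```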

  35. Semi-automated De-identification of German Content Sensitive Reports for Big Data Analytics.

    PubMed

    Seuss, Hannes; Dankerl, Peter; Ihle, Matthias; Grandjean, Andrea; Hammon, Rebecca; Kaestle, Nicola; Fasching, Peter A; Maier, Christian; Christoph, Jan; Sedlmayr, Martin; Uder, Michael; Cavallaro, Alexander; Hammon, Matthias

    2017-07-01

    Purpose Projects involving collaborations between different institutions require data security via selective de-identification of words or phrases. A semi-automated de-identification tool was developed and evaluated on different types of medical reports, both natively and after adapting the algorithm to the text structure. Materials and Methods The tool was evaluated for its sensitivity and specificity in detecting sensitive content in written reports. Data from 4671 pathology reports (4105 + 566 in two different formats), 2804 medical reports, 1008 operation reports, and 6223 radiology reports of 1167 patients suffering from breast cancer were de-identified. The content was itemized into four categories: direct identifiers (name, address), indirect identifiers (date of birth/operation, medical ID, etc.), medical terms, and filler words. The software was tested natively (without training) in order to establish a baseline. The reports were then manually edited and the model re-trained for the next test set; re-training was applied after manually editing 25, 50, 100, 250, 500 and, if applicable, 1000 reports of each type. Results In the native test, 61.3% of direct and 80.8% of indirect identifiers were detected. The performance (P) increased to 91.4% (P25), 96.7% (P50), 99.5% (P100), 99.6% (P250), 99.7% (P500) and 100% (P1000) for direct identifiers and to 93.2% (P25), 97.9% (P50), 97.2% (P100), 98.9% (P250), 99.0% (P500) and 99.3% (P1000) for indirect identifiers. Without training, 5.3% of medical terms were falsely flagged as critical data. After training, this rate changed to 4.0% (P25), 3.6% (P50), 4.0% (P100), 3.7% (P250), 4.3% (P500), and 3.1% (P1000). Roughly 0.1% of filler words were falsely flagged. Conclusion Training of the developed de-identification tool continuously improved its performance; training with roughly 100 edited reports of each type was sufficient to approach optimal detection rates.

  36. Software automation tools for increased throughput metabolic soft-spot identification in early drug discovery.

    PubMed

    Zelesky, Veronica; Schneider, Richard; Janiszewski, John; Zamora, Ismael; Ferguson, James; Troutman, Matthew

    2013-05-01

    The ability to supplement high-throughput metabolic clearance data with structural information defining the site of metabolism should allow design teams to streamline their synthetic decisions. However, broad application of metabolite identification in early drug discovery has been limited, largely due to the time required for data review and structural assignment. The advent of mass defect filtering and its application toward metabolite scouting paved the way for the development of software automation tools capable of rapidly identifying drug-related material in complex biological matrices. Two semi-automated commercial software applications, MetabolitePilot™ and Mass-MetaSite™, were evaluated to assess the relative speed and accuracy of structural assignments using data generated on a high-resolution MS platform. Review of these applications has demonstrated their utility in providing accurate results in a time-efficient manner, leading to acceleration of metabolite identification initiatives while highlighting the continued need for biotransformation expertise in the interpretation of more complex metabolic reactions.
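    Mass defect filtering, mentioned above as the enabler for these tools, can be illustrated in a few lines: ions whose fractional mass (mass defect) falls close to that of the parent drug are retained as likely drug-related material. The ±40 mDa window and the example masses are assumptions for illustration only.

    ```python
    # Hedged sketch: simple mass defect filter (window and masses are illustrative only).
    parent_mz = 310.1437                      # hypothetical parent drug [M+H]+
    window_mda = 40                           # +/- 40 mDa around the parent's mass defect

    def mass_defect(mz):
        return mz - int(mz)

    observed = [310.1437, 326.1385, 294.1489, 311.0150, 413.2662]
    drug_related = [mz for mz in observed
                    if abs(mass_defect(mz) - mass_defect(parent_mz)) * 1000 <= window_mda]
    print(drug_related)   # 311.0150 and 413.2662 fall outside the defect window
    ```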

  37. Semi-automated identification of cones in the human retina using circle Hough transform

    PubMed Central

    Bukowska, Danuta M.; Chew, Avenell L.; Huynh, Emily; Kashani, Irwin; Wan, Sue Ling; Wan, Pak Ming; Chen, Fred K

    2015-01-01

    A large number of human retinal diseases are characterized by a progressive loss of cones, the photoreceptors critical for visual acuity and color perception. Adaptive optics (AO) imaging presents a potential method to study these cells in vivo. However, AO imaging in ophthalmology is a relatively new phenomenon, and quantitative analysis of these images remains difficult and tedious using manual methods. This paper illustrates a novel semi-automated quantitative technique enabling registration of AO images to macular landmarks, cone counting, and quantification of cone radius at specified distances from the foveal center. The new cone counting approach employs the circle Hough transform (cHT) and is compared to automated counting methods, as well as arbitrated manual cone identification. We explore the impact of varying the circle detection parameter on the validity of cHT cone counting and discuss the potential role of using this algorithm in detecting both cones and rods separately. PMID:26713186
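    The circle Hough transform step can be sketched with OpenCV's HoughCircles; the image path and the radius/sensitivity parameters below are placeholders to be tuned, which echoes the paper's point that the circle detection parameter strongly affects the counts.

    ```python
    # Hedged sketch: cone detection with the circle Hough transform (parameters are placeholders).
    import cv2

    img = cv2.imread("ao_retina_patch.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
    img = cv2.medianBlur(img, 3)

    circles = cv2.HoughCircles(
        img, cv2.HOUGH_GRADIENT, dp=1, minDist=6,
        param1=60,      # Canny high threshold
        param2=12,      # accumulator threshold; lower values detect more (possibly spurious) cones
        minRadius=2, maxRadius=8,
    )
    n_cones = 0 if circles is None else circles.shape[1]
    print("detected cones:", n_cones)
    ```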

  38. Automated identification of molecular effects of drugs (AIMED)

    PubMed Central

    Fathiamini, Safa; Johnson, Amber M; Zeng, Jia; Araya, Alejandro; Holla, Vijaykumar; Bailey, Ann M; Litzenburger, Beate C; Sanchez, Nora S; Khotskaya, Yekaterina; Xu, Hua; Meric-Bernstam, Funda; Bernstam, Elmer V

    2016-01-01

    Introduction Genomic profiling information is frequently available to oncologists, enabling targeted cancer therapy. Because clinically relevant information is rapidly emerging in the literature and elsewhere, there is a need for informatics technologies to support targeted therapies. To this end, we have developed a system for Automated Identification of Molecular Effects of Drugs, to help biomedical scientists curate this literature to facilitate decision support. Objectives To create an automated system to identify assertions in the literature concerning drugs targeting genes with therapeutic implications and characterize the challenges inherent in automating this process in rapidly evolving domains. Methods We used subject-predicate-object triples (semantic predications) and co-occurrence relations generated by applying the SemRep Natural Language Processing system to MEDLINE abstracts and ClinicalTrials.gov descriptions. We applied customized semantic queries to find drugs targeting genes of interest. The results were manually reviewed by a team of experts. Results Compared to a manually curated set of relationships, recall, precision, and F2 were 0.39, 0.21, and 0.33, respectively, which represents a 3- to 4-fold improvement over a publicly available set of predications (SemMedDB) alone. Upon review of ostensibly false positive results, 26% were considered relevant additions to the reference set, and an additional 61% were considered to be relevant for review. Adding co-occurrence data improved results for drugs in early development, but not their better-established counterparts. Conclusions Precision medicine poses unique challenges for biomedical informatics systems that help domain experts find answers to their research questions. Further research is required to improve the performance of such systems, particularly for drugs in development. PMID:27107438
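    The F2 figure quoted above weights recall twice as heavily as precision; the short check below reproduces it from the reported precision and recall using the standard F-beta formula.

    ```python
    # F-beta score check: with beta = 2, recall is weighted more heavily than precision.
    def f_beta(precision, recall, beta=2.0):
        return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

    print(round(f_beta(0.21, 0.39), 2))   # ~0.33, matching the reported F2
    ```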

  19. Radio Frequency Identification and Motion-sensitive Video Efficiently Automate Recording of Unrewarded Choice Behavior by Bumblebees

    PubMed Central

    Orbán, Levente L.; Plowright, Catherine M.S.

    2014-01-01

    We present two methods for observing bumblebee choice behavior in an enclosed testing space. The first method consists of Radio Frequency Identification (RFID) readers built into artificial flowers that display various visual cues, and RFID tags (i.e., passive transponders) glued to the thorax of bumblebee workers. The novelty in our implementation is that RFID readers are built directly into artificial flowers that are capable of displaying several distinct visual properties such as color, pattern type, spatial frequency (i.e., “busyness” of the pattern), and symmetry (spatial frequency and symmetry were not manipulated in this experiment). Additionally, these visual displays in conjunction with the automated systems are capable of recording unrewarded and untrained choice behavior. The second method consists of recording choice behavior at artificial flowers using motion-sensitive high-definition camcorders. Bumblebees have number tags glued to their thoraces for unique identification. The advantage in this implementation over RFID is that in addition to observing landing behavior, alternate measures of preference such as hovering and antennation may also be observed. Both automation methods increase experimental control and internal validity by allowing larger-scale studies that take into account individual differences. External validity is also improved because bees can freely enter and exit the testing environment without constraints such as the availability of a research assistant on-site. Compared to human observation in real time, the automated methods are more cost-effective and possibly less error-prone. PMID:25489677

  20. Radio Frequency Identification and motion-sensitive video efficiently automate recording of unrewarded choice behavior by bumblebees.

    PubMed

    Orbán, Levente L; Plowright, Catherine M S

    2014-11-15

    We present two methods for observing bumblebee choice behavior in an enclosed testing space. The first method consists of Radio Frequency Identification (RFID) readers built into artificial flowers that display various visual cues, and RFID tags (i.e., passive transponders) glued to the thorax of bumblebee workers. The novelty in our implementation is that RFID readers are built directly into artificial flowers that are capable of displaying several distinct visual properties such as color, pattern type, spatial frequency (i.e., "busyness" of the pattern), and symmetry (spatial frequency and symmetry were not manipulated in this experiment). Additionally, these visual displays in conjunction with the automated systems are capable of recording unrewarded and untrained choice behavior. The second method consists of recording choice behavior at artificial flowers using motion-sensitive high-definition camcorders. Bumblebees have number tags glued to their thoraces for unique identification. The advantage in this implementation over RFID is that in addition to observing landing behavior, alternate measures of preference such as hovering and antennation may also be observed. Both automation methods increase experimental control and internal validity by allowing larger-scale studies that take into account individual differences. External validity is also improved because bees can freely enter and exit the testing environment without constraints such as the availability of a research assistant on-site. Compared to human observation in real time, the automated methods are more cost-effective and possibly less error-prone.

  1. Cost effective raspberry pi-based radio frequency identification tagging of mice suitable for automated in vivo imaging.

    PubMed

    Bolaños, Federico; LeDue, Jeff M; Murphy, Timothy H

    2017-01-30

    Automation of animal experimentation improves consistency and reduces the potential for error, while decreasing animal stress and increasing well-being. Radio frequency identification (RFID) tagging can identify individual mice in group housing environments, enabling animal-specific tracking of physiological parameters. We describe a simple protocol to RFID-tag and detect mice. RFID tags were injected subcutaneously after brief isoflurane anesthesia and do not require surgical steps such as suturing or incisions. We employ glass-encapsulated 125 kHz tags that can be read within 30.2±2.4 mm of the antenna. A Raspberry Pi single-board computer and tag reader enable automated logging, and cross-platform support is possible through Python. We provide sample software written in Python to provide a flexible and cost-effective system for logging the weights of multiple mice in relation to pre-defined targets. The sample software can serve as the basis of any behavioral or physiological task where users will need to identify and track specific animals. Recently, we have applied this system of tagging to automated mouse brain imaging within home-cages. We provide a cost-effective solution employing open source software to facilitate adoption in applications such as automated imaging or tracking individual animal weights during tasks where food or water restriction is employed as motivation for a specific behavior. Copyright © 2016 Elsevier B.V. All rights reserved.
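    The authors' sample software is referenced above; as a loose illustration only, a tag-read-and-log loop of the kind such a system might use is sketched below. The serial port, 125 kHz reader frame format, scale interface and tag IDs are all assumptions, and this is not the published code.

```python
# Illustrative sketch (not the authors' released software) of an RFID
# read-and-log loop for a serial 125 kHz tag reader on a Raspberry Pi.
import csv
import time

import serial  # pyserial

TARGET_WEIGHTS = {"900068000123456": 25.0}  # hypothetical tag ID -> target weight (g)

def read_weight():
    """Stand-in for the scale interface; replace with the actual hardware read."""
    return None

def log_weights(port="/dev/ttyUSB0", logfile="weights.csv"):
    reader = serial.Serial(port, baudrate=9600, timeout=1)
    with open(logfile, "a", newline="") as fh:
        writer = csv.writer(fh)
        while True:
            frame = reader.readline().strip()          # one tag report per line (assumed)
            if not frame:
                continue
            tag_id = frame.decode("ascii", errors="ignore")
            writer.writerow([time.time(), tag_id, read_weight(),
                             TARGET_WEIGHTS.get(tag_id)])
```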

  2. Trust and reliance on an automated combat identification system.

    PubMed

    Wang, Lu; Jamieson, Greg A; Hollands, Justin G

    2009-06-01

    We examined the effects of aid reliability and reliability disclosure on human trust in and reliance on a combat identification (CID) aid. We tested whether trust acts as a mediating factor between belief in and reliance on a CID aid. Individual CID systems have been developed to reduce friendly fire incidents. However, these systems cannot positively identify a target that does not have a working transponder. Therefore, when the feedback is "unknown", the target could be hostile, neutral, or friendly. Soldiers have difficulty relying on this type of imperfect automation appropriately. In manual and aided conditions, 24 participants completed a simulated CID task. The reliability of the aid varied within participants, half of whom were told the aid reliability level. We used the difference in response bias values across conditions to measure automation reliance. Response bias varied more appropriately with the aid reliability level when it was disclosed than when not. Trust in aid feedback correlated with belief in aid reliability and reliance on aid feedback; however, belief was not correlated with reliance. To engender appropriate reliance on CID systems, users should be made aware of system reliability. The findings can be applied to the design of information displays for individual CID systems and soldier training.
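    For readers unfamiliar with the response bias measure used above, signal detection theory defines the criterion c from hit and false-alarm rates; a brief sketch follows, with the mapping of "hostile"/"friendly" responses onto hits and false alarms left as an illustrative assumption.

```python
# Sketch: signal-detection response bias c = -(z(H) + z(F)) / 2.
# How hits and false alarms are defined for the CID task is an assumption here.
from statistics import NormalDist

def response_bias(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(false_alarm_rate))

# Appropriate reliance would shift c as the disclosed aid reliability changes.
print(response_bias(0.85, 0.20))   # illustrative rates
```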

  3. Use of the MicroSeq 500 16S rRNA Gene-Based Sequencing for Identification of Bacterial Isolates That Commercial Automated Systems Failed To Identify Correctly

    PubMed Central

    Fontana, Carla; Favaro, Marco; Pelliccioni, Marco; Pistoia, Enrico Salvatore; Favalli, Cartesio

    2005-01-01

    Reliable automated identification and susceptibility testing of clinically relevant bacteria is an essential routine for microbiology laboratories, thus improving patient care. Examples of automated identification systems include the Phoenix (Becton Dickinson) and the VITEK 2 (bioMérieux). However, more and more frequently, microbiologists must isolate “difficult” strains that automated systems often fail to identify. An alternative approach could be the genetic identification of isolates; this is based on 16S rRNA gene sequencing and analysis. The aim of the present study was to evaluate the possible use of MicroSeq 500 (Applera) for sequencing the 16S rRNA gene to identify isolates whose identification is unobtainable by conventional systems. We analyzed 83 “difficult” clinical isolates: 25 gram-positive and 58 gram-negative strains that were contemporaneously identified by both systems—VITEK 2 and Phoenix—while genetic identification was performed by using the MicroSeq 500 system. The results showed that phenotypic identifications by VITEK 2 and Phoenix were remarkably similar: 74% for gram-negative strains (43 of 58) and 80% for gram-positive strains were concordant by both systems and also concordant with genetic characterization. The exceptions were the 15 gram-negative and 9 gram-positive isolates whose phenotypic identifications were contrasting or inconclusive. For these, the use of MicroSeq 500 was fundamental to achieving species identification. In clinical microbiology the use of MicroSeq 500, particularly for strains with ambiguous biochemical profiles (including slow-growing strains), identifies strains more easily than do conventional systems. Moreover, MicroSeq 500 is easy to use and cost-effective, making it applicable also in the clinical laboratory. PMID:15695654

  4. Automated Identification of Initial Storm Electrification and End-of-Storm Electrification Using Electric Field Mill Sensors

    NASA Technical Reports Server (NTRS)

    Maier, Launa M.; Huddleston, Lisa L.

    2017-01-01

    Kennedy Space Center (KSC) operations are located in a region which experiences one of the highest lightning densities across the United States. As a result, on average, KSC loses almost 30 minutes of operational availability each day for lightning sensitive activities. KSC is investigating using existing instrumentation and automated algorithms to improve the timeliness and accuracy of lightning warnings. Additionally, the automation routines will be warning on a grid to minimize under-warnings associated with not being located in the center of the warning area and over-warnings associated with encompassing too large an area. This study discusses utilization of electric field mill data to provide improved warning times. Specifically, this paper will demonstrate improved performance of an enveloping algorithm of the electric field mill data as compared with the electric field zero crossing to identify initial storm electrification. End-of-Storm-Oscillation (EOSO) identification algorithms will also be analyzed to identify performance improvement, if any, when compared with 30 minutes after the last lightning flash.
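    As a rough illustration of the enveloping idea (as opposed to a zero-crossing criterion), the sketch below flags electrification when a rolling max/min envelope of the field-mill signal widens beyond a threshold. The window length and threshold are assumptions, not values from the study.

```python
# Sketch of an envelope-based electrification flag for electric field mill data.
import numpy as np

def field_envelope(efield, window=60):
    """Rolling max/min envelope of a 1-D field-mill time series."""
    pad = window // 2
    padded = np.pad(np.asarray(efield, float), pad, mode="edge")
    upper = np.array([padded[i:i + window].max() for i in range(len(efield))])
    lower = np.array([padded[i:i + window].min() for i in range(len(efield))])
    return upper, lower

def electrified(efield, threshold=1.0, window=60):
    upper, lower = field_envelope(efield, window)
    return (upper - lower) > threshold   # True where the envelope has widened
```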

  5. Software architecture of the III/FBI segment of the FBI's integrated automated identification system

    NASA Astrophysics Data System (ADS)

    Booker, Brian T.

    1997-02-01

    This paper will describe the software architecture of the Interstate Identification Index (III/FBI) Segment of the FBI's Integrated Automated Fingerprint Identification System (IAFIS). IAFIS is currently under development, with deployment to begin in 1998. III/FBI will provide the repository of criminal history and photographs for criminal subjects, as well as identification data for military and civilian federal employees. Services provided by III/FBI include maintenance of the criminal and civil data, subject search of the criminal and civil data, and response generation services for IAFIS. III/FBI software will be comprised of both COTS and an estimated 250,000 lines of developed C code. This paper will describe the following: (1) the high-level requirements of the III/FBI software; (2) the decomposition of the III/FBI software into Computer Software Configuration Items (CSCIs); (3) the top-level design of the III/FBI CSCIs; and (4) the relationships among the developed CSCIs and the COTS products that will comprise the III/FBI software.

  6. Greater Buyer Effectiveness through Automation

    DTIC Science & Technology

    1989-01-01

    assignment to the buyer Coordination - automated routing of requirement package to technical, finance, transportation, packaging, small business ... security, data, safety, etc. Consolidation - automated identification of requirements for identical or similar items for potential consolidation

  7. Prospective, observational study comparing automated and visual point-of-care urinalysis in general practice

    PubMed Central

    van Delft, Sanne; Goedhart, Annelijn; Spigt, Mark; van Pinxteren, Bart; de Wit, Niek; Hopstaken, Rogier

    2016-01-01

    Objective Point-of-care testing (POCT) urinalysis might reduce errors in (subjective) reading, registration and communication of test results, and might also improve diagnostic outcome and optimise patient management. Evidence is lacking. In the present study, we have studied the analytical performance of automated urinalysis and visual urinalysis compared with a reference standard in routine general practice. Setting The study was performed in six general practitioner (GP) group practices in the Netherlands. Automated urinalysis was compared with visual urinalysis in these practices. Reference testing was performed in a primary care laboratory (Saltro, Utrecht, The Netherlands). Primary and secondary outcome measures Analytical performance of automated and visual urinalysis compared with the reference laboratory method was the primary outcome measure, analysed by calculating sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) and Cohen's κ coefficient for agreement. The secondary outcome measure was the user-friendliness of the POCT analyser. Results Automated urinalysis by experienced and routinely trained practice assistants in general practice performs as well as visual urinalysis for nitrite, leucocytes and erythrocytes. Agreement for nitrite is high for automated and visual urinalysis. κ's are 0.824 and 0.803 (ranked as very good and good, respectively). Agreement with the central laboratory reference standard for automated and visual urinalysis for leucocytes is rather poor (0.256 for POCT and 0.197 for visual, respectively, ranked as fair and poor). κ's for erythrocytes are higher: 0.517 (automated) and 0.416 (visual), both ranked as moderate. The Urisys 1100 analyser was easy to use and considered not to be prone to flaws. Conclusions Automated urinalysis performed as well as traditional visual urinalysis on reading of nitrite, leucocytes and erythrocytes in routine general practice. Implementation of automated
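    For reference, the Cohen's κ agreement statistic quoted above can be computed from a 2×2 table of paired readings; the counts in the sketch are hypothetical.

```python
# Sketch: Cohen's kappa from a 2x2 agreement table (e.g., POCT vs reference).
def cohens_kappa(a, b, c, d):
    """a = both positive, b = test+/ref-, c = test-/ref+, d = both negative."""
    n = a + b + c + d
    p_observed = (a + d) / n
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

print(round(cohens_kappa(40, 5, 4, 151), 3))   # hypothetical counts
```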

  8. 21 CFR 864.5200 - Automated cell counter.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated cell counter. 864.5200 Section 864.5200....5200 Automated cell counter. (a) Identification. An automated cell counter is a fully-automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the...

  9. Automated colour identification in melanocytic lesions.

    PubMed

    Sabbaghi, S; Aldeen, M; Garnavi, R; Varigos, G; Doliantis, C; Nicolopoulos, J

    2015-08-01

    Colour information plays an important role in classifying skin lesions. However, colour identification by dermatologists can be very subjective, leading to cases of misdiagnosis. Therefore, a computer-assisted system for quantitative colour identification is highly desirable for dermatologists to use. Although numerous colour detection systems have been developed, few studies have focused on imitating the human visual perception of colours in melanoma applications. In this paper we propose a new methodology based on the QuadTree decomposition technique for automatic colour identification in dermoscopy images. Our approach mimics the human perception of lesion colours. The proposed method is trained on a set of 47 images from the NIH dataset and applied to a test set of 190 skin lesions obtained from the PH2 dataset. The results of our proposed method are compared with a recently reported colour identification method using the same dataset. The effectiveness of our method in detecting colours in dermoscopy images is vindicated by obtaining approximately 93% accuracy when the CIELab colour space is used.

  10. Text de-identification for privacy protection: a study of its impact on clinical text information content.

    PubMed

    Meystre, Stéphane M; Ferrández, Óscar; Friedlin, F Jeffrey; South, Brett R; Shen, Shuying; Samore, Matthew H

    2014-08-01

    As more and more electronic clinical information is becoming easier to access for secondary uses such as clinical research, approaches that enable faster and more collaborative research while protecting patient privacy and confidentiality are becoming more important. Clinical text de-identification offers such advantages but is typically a tedious manual process. Automated Natural Language Processing (NLP) methods can alleviate this process, but their impact on subsequent uses of the automatically de-identified clinical narratives has only barely been investigated. In the context of a larger project to develop and investigate automated text de-identification for Veterans Health Administration (VHA) clinical notes, we studied the impact of automated text de-identification on clinical information in a stepwise manner. Our approach started with a high-level assessment of clinical notes informativeness and formatting, and ended with a detailed study of the overlap of select clinical information types and Protected Health Information (PHI). To investigate the informativeness (i.e., document type information, select clinical data types, and interpretation or conclusion) of VHA clinical notes, we used five different existing text de-identification systems. The informativeness was only minimally altered by these systems, while formatting was only modified by one system. To examine the impact of de-identification on clinical information extraction, we compared counts of SNOMED-CT concepts found by an open source information extraction application in the original (i.e., not de-identified) version of a corpus of VHA clinical notes, and in the same corpus after de-identification. Only about 1.2-3% fewer SNOMED-CT concepts were found in de-identified versions of our corpus, and many of these concepts were PHI that was erroneously identified as clinical information. To study this impact in more detail and assess how generalizable our findings were, we examined the overlap between

  11. Automated in vivo identification of fungal infection on human scalp using optical coherence tomography and machine learning

    NASA Astrophysics Data System (ADS)

    Dubey, Kavita; Srivastava, Vishal; Singh Mehta, Dalip

    2018-04-01

    Early identification of fungal infection on the human scalp is crucial for avoiding hair loss. The diagnosis of fungal infection on the human scalp is based on a visual assessment by trained experts or doctors. Optical coherence tomography (OCT) has the ability to capture fungal infection information from the human scalp with a high resolution. In this study, we present a fully automated, non-contact, non-invasive optical method for rapid detection of fungal infections based on the extracted features from A-line and B-scan images of OCT. A multilevel ensemble machine model is designed to perform automated classification and shows superiority over the best single classifier based on the features extracted from OCT images. In this study, 60 samples (30 fungal, 30 normal) were imaged by OCT and eight features were extracted. The classification algorithm had an average sensitivity, specificity and accuracy of 92.30%, 90.90% and 91.66%, respectively, for identifying fungal and normal human scalps. This remarkable classifying ability makes the proposed model readily applicable to classifying the human scalp.

  12. 21 CFR 864.5200 - Automated cell counter.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ....5200 Automated cell counter. (a) Identification. An automated cell counter is a fully-automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the patient's peripheral blood (blood circulating in one of the body's extremities, such as the arm). These...

  13. 21 CFR 864.5200 - Automated cell counter.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ....5200 Automated cell counter. (a) Identification. An automated cell counter is a fully-automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the patient's peripheral blood (blood circulating in one of the body's extremities, such as the arm). These...

  14. 21 CFR 864.5200 - Automated cell counter.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ....5200 Automated cell counter. (a) Identification. An automated cell counter is a fully-automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the patient's peripheral blood (blood circulating in one of the body's extremities, such as the arm). These...

  15. 21 CFR 864.5200 - Automated cell counter.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ....5200 Automated cell counter. (a) Identification. An automated cell counter is a fully-automated or semi-automated device used to count red blood cells, white blood cells, or blood platelets using a sample of the patient's peripheral blood (blood circulating in one of the body's extremities, such as the arm). These...

  16. Automated identification of brain tumors from single MR images based on segmentation with refined patient-specific priors

    PubMed Central

    Sanjuán, Ana; Price, Cathy J.; Mancini, Laura; Josse, Goulven; Grogan, Alice; Yamamoto, Adam K.; Geva, Sharon; Leff, Alex P.; Yousry, Tarek A.; Seghier, Mohamed L.

    2013-01-01

    Brain tumors can have different shapes or locations, making their identification very challenging. In functional MRI, it is not unusual that patients have only one anatomical image due to time and financial constraints. Here, we provide a modified automatic lesion identification (ALI) procedure which enables brain tumor identification from single MR images. Our method rests on (A) a modified segmentation-normalization procedure with an explicit “extra prior” for the tumor and (B) an outlier detection procedure for abnormal voxel (i.e., tumor) classification. To minimize tissue misclassification, the segmentation-normalization procedure requires prior information of the tumor location and extent. We therefore propose that ALI is run iteratively so that the output of Step B is used as a patient-specific prior in Step A. We test this procedure on real T1-weighted images from 18 patients, and the results were validated in comparison to two independent observers' manual tracings. The automated procedure identified the tumors successfully with an excellent agreement with the manual segmentation (area under the ROC curve = 0.97 ± 0.03). The proposed procedure increases the flexibility and robustness of the ALI tool and will be particularly useful for lesion-behavior mapping studies, or when lesion identification and/or spatial normalization are problematic. PMID:24381535

  17. Methods for Automated Identification of Informative Behaviors in Natural Bioptic Driving

    PubMed Central

    Luo, Gang; Peli, Eli

    2012-01-01

    Visually impaired people may legally drive if wearing bioptic telescopes in some developed countries. To address the controversial safety issue of the practice, we have developed a low cost in-car recording system that can be installed in study participants’ own vehicles to record their daily driving activities. We also developed a set of automated identification techniques of informative behaviors to facilitate efficient manual review of important segments submerged in the vast amount of uncontrolled data. Here we present the methods and quantitative results of the detection performance for six types of driving maneuvers and behaviors that are important for bioptic driving: bioptic telescope use, turns, curves, intersections, weaving, and rapid stops. The testing data were collected from one normally sighted and two visually impaired subjects across multiple days. The detection rates ranged from 82% up to 100%, and the false discovery rates ranged from 0% to 13%. In addition, two human observers were able to interpret about 80% of targets viewed through the telescope. These results indicate that with appropriate data processing the low-cost system is able to provide reliable data for natural bioptic driving studies. PMID:22514200

  18. Time frequency analysis for automated sleep stage identification in fullterm and preterm neonates.

    PubMed

    Fraiwan, Luay; Lweesy, Khaldon; Khasawneh, Natheer; Fraiwan, Mohammad; Wenz, Heinrich; Dickhaus, Hartmut

    2011-08-01

    This work presents a new methodology for automated sleep stage identification in neonates based on the time frequency distribution of a single electroencephalogram (EEG) recording and artificial neural networks (ANN). Wigner-Ville distribution (WVD), Hilbert-Huang spectrum (HHS) and continuous wavelet transform (CWT) time frequency distributions were used to represent the EEG signal, from which features were extracted using time frequency entropy. The classification of features was done using a feed-forward back-propagation ANN. The system was trained and tested using data taken from neonates of post-conceptual age of 40 weeks for both preterm (14 recordings) and fullterm (15 recordings). The identification of sleep stages was successfully implemented and the classification based on the WVD outperformed the approaches based on CWT and HHS. The accuracy and kappa coefficient were found to be 0.84 and 0.65, respectively, for the fullterm neonates' recordings and 0.74 and 0.50, respectively, for preterm neonates' recordings.
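    To make the feature-extraction step concrete, the sketch below computes a time-frequency entropy value for one EEG epoch. A plain STFT spectrogram is used as a stand-in for the WVD, HHS and CWT distributions named in the abstract; the sampling rate and window length are assumptions.

```python
# Sketch of a time-frequency entropy feature for a single EEG epoch.
import numpy as np
from scipy.signal import spectrogram

def tf_entropy(eeg_epoch, fs=256):
    f, t, Sxx = spectrogram(np.asarray(eeg_epoch, float), fs=fs, nperseg=fs)
    p = Sxx / (Sxx.sum(axis=0, keepdims=True) + 1e-12)    # normalise each time slice
    slice_entropy = -(p * np.log2(p + 1e-12)).sum(axis=0) # Shannon entropy per slice
    return slice_entropy.mean()                           # one feature per epoch
```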

  19. Prospective, observational study comparing automated and visual point-of-care urinalysis in general practice.

    PubMed

    van Delft, Sanne; Goedhart, Annelijn; Spigt, Mark; van Pinxteren, Bart; de Wit, Niek; Hopstaken, Rogier

    2016-08-08

    Point-of-care testing (POCT) urinalysis might reduce errors in (subjective) reading, registration and communication of test results, and might also improve diagnostic outcome and optimise patient management. Evidence is lacking. In the present study, we have studied the analytical performance of automated urinalysis and visual urinalysis compared with a reference standard in routine general practice. The study was performed in six general practitioner (GP) group practices in the Netherlands. Automated urinalysis was compared with visual urinalysis in these practices. Reference testing was performed in a primary care laboratory (Saltro, Utrecht, The Netherlands). Analytical performance of automated and visual urinalysis compared with the reference laboratory method was the primary outcome measure, analysed by calculating sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) and Cohen's κ coefficient for agreement. The secondary outcome measure was the user-friendliness of the POCT analyser. Automated urinalysis by experienced and routinely trained practice assistants in general practice performs as well as visual urinalysis for nitrite, leucocytes and erythrocytes. Agreement for nitrite is high for automated and visual urinalysis. κ's are 0.824 and 0.803 (ranked as very good and good, respectively). Agreement with the central laboratory reference standard for automated and visual urinalysis for leucocytes is rather poor (0.256 for POCT and 0.197 for visual, respectively, ranked as fair and poor). κ's for erythrocytes are higher: 0.517 (automated) and 0.416 (visual), both ranked as moderate. The Urisys 1100 analyser was easy to use and considered not to be prone to flaws. Automated urinalysis performed as well as traditional visual urinalysis on reading of nitrite, leucocytes and erythrocytes in routine general practice. Implementation of automated urinalysis in general practice is justified as automation is expected to reduce

  20. Automated segmentation of chronic stroke lesions using LINDA: Lesion Identification with Neighborhood Data Analysis

    PubMed Central

    Pustina, Dorian; Coslett, H. Branch; Turkeltaub, Peter E.; Tustison, Nicholas; Schwartz, Myrna F.; Avants, Brian

    2015-01-01

    The gold standard for identifying stroke lesions is manual tracing, a method that is known to be observer-dependent and time-consuming, thus impractical for big data studies. We propose LINDA (Lesion Identification with Neighborhood Data Analysis), an automated segmentation algorithm capable of learning the relationship between existing manual segmentations and a single T1-weighted MRI. A dataset of 60 left hemispheric chronic stroke patients is used to build the method and test it with k-fold and leave-one-out procedures. With respect to manual tracings, predicted lesion maps showed a mean Dice overlap of 0.696±0.16, a Hausdorff distance of 17.9±9.8 mm, and an average displacement of 2.54±1.38 mm. The manual and predicted lesion volumes correlated at r=0.961. An additional dataset of 45 patients was utilized to test LINDA with independent data, achieving high accuracy rates and confirming its cross-institutional applicability. To investigate the cost of moving from manual tracings to automated segmentation, we performed comparative lesion-to-symptom mapping (LSM) on five behavioral scores. Predicted and manual lesions produced similar neuro-cognitive maps, albeit with some discussed discrepancies. Of note, region-wise LSM was more robust to the prediction error than voxel-wise LSM. Our results show that, while several limitations exist, our current results compete with or exceed the state-of-the-art, producing consistent predictions, very low failure rates, and transferable knowledge between labs. This work also establishes a new viewpoint on evaluating automated methods not only with segmentation accuracy but also with brain-behavior relationships. LINDA is made available online with trained models from over 100 patients. PMID:26756101
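    The Dice overlap reported above is a standard volume-overlap measure; a short sketch of its computation from two binary lesion masks:

```python
# Sketch: Dice overlap between predicted and manually traced lesion masks.
import numpy as np

def dice(pred_mask, manual_mask):
    pred = np.asarray(pred_mask, bool)
    manual = np.asarray(manual_mask, bool)
    intersection = np.logical_and(pred, manual).sum()
    total = pred.sum() + manual.sum()
    return 2.0 * intersection / total if total else 1.0
```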

  1. Study of living single cells in culture: automated recognition of cell behavior.

    PubMed

    Bodin, P; Papin, S; Meyer, C; Travo, P

    1988-07-01

    An automated system capable of analyzing the behavior, in real time, of single living cells in culture, in a noninvasive and nondestructive way, has been developed. A large number of cell positions in single culture dishes were recorded using a computer-controlled, robotized microscope. During subsequent observations, binary images obtained from video image analysis of the microscope visual field allowed the identification of the recorded cells. These cells could be revisited automatically every few minutes. Long-term studies of the behavior of cells make possible the analysis of cellular locomotory and mitotic activities as well as determination of cell shape (chosen from a defined library) for several hours or days in a fully automated way with observations spaced up to 30 minutes. Short-term studies of the behavior of cells permit the study, in a semiautomatic way, of acute effects of drugs (5 to 15 minutes) on changes of surface area and length of cells.

  2. 21 CFR 864.5620 - Automated hemoglobin system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated hemoglobin system. 864.5620 Section 864....5620 Automated hemoglobin system. (a) Identification. An automated hemoglobin system is a fully... hemoglobin content of human blood. (b) Classification. Class II (performance standards). [45 FR 60601, Sept...

  3. 21 CFR 864.5620 - Automated hemoglobin system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automated hemoglobin system. 864.5620 Section 864....5620 Automated hemoglobin system. (a) Identification. An automated hemoglobin system is a fully... hemoglobin content of human blood. (b) Classification. Class II (performance standards). [45 FR 60601, Sept...

  4. Clinical Laboratory Automation: A Case Study

    PubMed Central

    Archetti, Claudia; Montanelli, Alessandro; Finazzi, Dario; Caimi, Luigi; Garrafa, Emirena

    2017-01-01

    Background This paper presents a case study of an automated clinical laboratory in a large urban academic teaching hospital in the North of Italy, the Spedali Civili in Brescia, where four laboratories were merged into a single laboratory through the introduction of laboratory automation. Materials and Methods The analysis compares the preautomation situation and the new setting from a cost perspective, by considering direct and indirect costs. It also presents an analysis of the turnaround time (TAT). The study considers equipment, staff and indirect costs. Results The introduction of automation led to a slight increase in equipment costs, which is highly compensated by a remarkable decrease in staff costs. Consequently, total costs decreased by 12.55%. The analysis of the TAT shows an improvement for nonemergency exams, while emergency exams are still validated within the maximum time imposed by the hospital. Conclusions The strategy adopted by the management, which was based on re-using the available equipment and staff when merging the pre-existing laboratories, has reached its goal: introducing automation while minimizing the costs. Significance for public health Automation is an emerging trend in modern clinical laboratories with a positive impact on service level to patients and on staff safety, as shown by different studies. In fact, it allows process standardization which, in turn, decreases the frequency of outliers and errors. In addition, it induces faster processing times, thus improving the service level. On the other hand, automation decreases staff exposure to accidents, strongly improving staff safety. In this study, we analyse a further potential benefit of automation, that is, economic convenience. We study the case of the automated laboratory of one of the biggest hospitals in Italy and compare the costs related to the pre- and post-automation situations. Introducing automation led to a cost decrease without affecting the service level to patients

  5. Automated Feature Identification and Classification Using Automated Feature Weighted Self Organizing Map (FWSOM)

    NASA Astrophysics Data System (ADS)

    Starkey, Andrew; Usman Ahmad, Aliyu; Hamdoun, Hassan

    2017-10-01

    This paper investigates a novel classification method called the Feature Weighted Self Organizing Map (FWSOM), which analyses the topology information of a converged standard Self Organizing Map (SOM) to automatically guide the selection of important inputs during training, thereby improving the classification of data with redundant inputs. The method is examined against two traditional approaches, namely neural networks and Support Vector Machines (SVM), for the classification of EEG data as presented in previous work. In particular, the novel method identifies the features that are important for classification automatically, and in this way the important features can be used to improve the diagnostic ability of any of the above methods. The paper presents the results and shows how the automated identification successfully found the important features in the dataset and how this improves the classification results for all methods apart from linear discriminatory methods, which cannot separate the underlying nonlinear relationship in the data. The FWSOM, in addition to achieving higher classification accuracy, has given insights into which features are important in the classification of each class (left- and right-hand movements), and these are corroborated by already published work in this area.

  6. 21 CFR 864.5240 - Automated blood cell diluting apparatus.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated blood cell diluting apparatus. 864.5240... § 864.5240 Automated blood cell diluting apparatus. (a) Identification. An automated blood cell diluting apparatus is a fully automated or semi-automated device used to make appropriate dilutions of a blood sample...

  7. 21 CFR 864.5240 - Automated blood cell diluting apparatus.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automated blood cell diluting apparatus. 864.5240... § 864.5240 Automated blood cell diluting apparatus. (a) Identification. An automated blood cell diluting apparatus is a fully automated or semi-automated device used to make appropriate dilutions of a blood sample...

  8. Automated Fast Screening Method for Cocaine Identification in Seized Drug Samples Using a Portable Fourier Transform Infrared (FT-IR) Instrument.

    PubMed

    Mainali, Dipak; Seelenbinder, John

    2016-05-01

    Quick and presumptive identification of seized drug samples without destroying evidence is necessary for law enforcement officials to control the trafficking and abuse of drugs. This work reports an automated screening method to detect the presence of cocaine in seized samples using portable Fourier transform infrared (FT-IR) spectrometers. The method is based on the identification of well-defined characteristic vibrational frequencies related to the functional group of the cocaine molecule and is fully automated through the use of an expert system. Traditionally, analysts look for key functional group bands in the infrared spectra and characterization of the molecules present is dependent on user interpretation. This implies the need for user expertise, especially in samples that likely are mixtures. As such, this approach is biased and also not suitable for non-experts. The method proposed in this work uses the well-established "center of gravity" peak picking mathematical algorithm and combines it with the conditional reporting feature in MicroLab software to provide an automated method that can be successfully employed by users with varied experience levels. The method reports the confidence level of cocaine present only when a certain number of cocaine related peaks are identified by the automated method. Unlike library search and chemometric methods that are dependent on the library database or the training set samples used to build the calibration model, the proposed method is relatively independent of adulterants and diluents present in the seized mixture. This automated method in combination with a portable FT-IR spectrometer provides law enforcement officials, criminal investigators, or forensic experts a quick field-based prescreening capability for the presence of cocaine in seized drug samples. © The Author(s) 2016.
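    As a rough illustration of the "center of gravity" peak picking step (the MicroLab conditional-reporting logic is not reproduced here), each local maximum above a threshold can be refined to an intensity-weighted mean wavenumber; the threshold and window below are assumptions.

```python
# Sketch of center-of-gravity peak picking on an IR spectrum.
import numpy as np

def cog_peaks(wavenumbers, absorbance, threshold=0.05, half_window=3):
    wavenumbers = np.asarray(wavenumbers, float)
    absorbance = np.asarray(absorbance, float)
    peaks = []
    for i in range(1, len(absorbance) - 1):
        if absorbance[i] > threshold and absorbance[i - 1] < absorbance[i] > absorbance[i + 1]:
            lo, hi = max(0, i - half_window), min(len(absorbance), i + half_window + 1)
            w = absorbance[lo:hi]
            peaks.append(float(np.dot(wavenumbers[lo:hi], w) / w.sum()))  # weighted mean position
    return peaks

# A target compound would then be reported only if enough characteristic peak
# positions are matched within a tolerance.
```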

  9. Intelligent Systems Approach for Automated Identification of Individual Control Behavior of a Human Operator

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Results have been obtained using conventional techniques to model the generic human operator's control behavior; however, little research has been done to identify an individual based on control behavior. The hypothesis investigated is that different operators exhibit different control behavior when performing a given control task. Two enhancements to existing human operator models, which allow personalization of the modeled control behavior, are presented. One enhancement accounts for the testing control signals, which are introduced by an operator for more accurate control of the system and/or to adjust the control strategy. This uses an Artificial Neural Network, which can be fine-tuned to model the testing control. Another enhancement takes the form of an equiripple filter, which conditions the control system power spectrum. A novel automated parameter identification technique was developed to facilitate the identification process of the parameters of the selected models. This utilizes a Genetic Algorithm-based optimization engine called the Bit-Climbing Algorithm. Enhancements were validated using experimental data obtained from three different sources: the Manual Control Laboratory software experiments, Unmanned Aerial Vehicle simulation, and NASA Langley Research Center Visual Motion Simulator studies. This manuscript also addresses applying human operator models to evaluate the effectiveness of motion feedback when simulating actual pilot control behavior in a flight simulator.
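    The Bit-Climbing Algorithm referenced above is, in essence, a hill climber over a bit-string encoding of model parameters. A generic sketch follows (not the authors' implementation); the objective function comparing model output with recorded operator data is assumed.

```python
# Generic sketch of a bit-climbing optimizer over a bit-string parameter encoding.
import random

def bit_climb(n_bits, objective, restarts=10):
    """Minimize `objective(bits)`; `objective` is an assumed model-vs-data error."""
    best_bits, best_err = None, float("inf")
    for _ in range(restarts):
        bits = [random.randint(0, 1) for _ in range(n_bits)]
        err = objective(bits)
        improved = True
        while improved:
            improved = False
            for i in range(n_bits):      # flip each bit, keep any improvement
                bits[i] ^= 1
                new_err = objective(bits)
                if new_err < err:
                    err, improved = new_err, True
                else:
                    bits[i] ^= 1         # undo the flip
        if err < best_err:
            best_bits, best_err = list(bits), err
    return best_bits, best_err
```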

  10. Emerging Microtechnologies and Automated Systems for Rapid Bacterial Identification and Antibiotic Susceptibility Testing

    PubMed Central

    Li, Yiyan; Yang, Xing; Zhao, Weian

    2018-01-01

    Rapid bacterial identification (ID) and antibiotic susceptibility testing (AST) are in great demand due to the rise of drug-resistant bacteria. Conventional culture-based AST methods suffer from a long turnaround time. By necessity, physicians often have to treat patients empirically with antibiotics, which has led to an inappropriate use of antibiotics, an elevated mortality rate and healthcare costs, and antibiotic resistance. Recent advances in miniaturization and automation provide promising solutions for rapid bacterial ID/AST profiling, which will potentially make a significant impact in the clinical management of infectious diseases and antibiotic stewardship in the coming years. In this review, we summarize and analyze representative emerging micro- and nanotechnologies, as well as automated systems for bacterial ID/AST, including both phenotypic (e.g., microfluidic-based bacterial culture, and digital imaging of single cells) and molecular (e.g., multiplex PCR, hybridization probes, nanoparticles, synthetic biology tools, mass spectrometry, and sequencing technologies) methods. We also discuss representative point-of-care (POC) systems that integrate sample processing, fluid handling, and detection for rapid bacterial ID/AST. Finally, we highlight major remaining challenges and discuss potential future endeavors toward improving clinical outcomes with rapid bacterial ID/AST technologies. PMID:28850804

  11. Emerging Microtechnologies and Automated Systems for Rapid Bacterial Identification and Antibiotic Susceptibility Testing.

    PubMed

    Li, Yiyan; Yang, Xing; Zhao, Weian

    2017-12-01

    Rapid bacterial identification (ID) and antibiotic susceptibility testing (AST) are in great demand due to the rise of drug-resistant bacteria. Conventional culture-based AST methods suffer from a long turnaround time. By necessity, physicians often have to treat patients empirically with antibiotics, which has led to an inappropriate use of antibiotics, an elevated mortality rate and healthcare costs, and antibiotic resistance. Recent advances in miniaturization and automation provide promising solutions for rapid bacterial ID/AST profiling, which will potentially make a significant impact in the clinical management of infectious diseases and antibiotic stewardship in the coming years. In this review, we summarize and analyze representative emerging micro- and nanotechnologies, as well as automated systems for bacterial ID/AST, including both phenotypic (e.g., microfluidic-based bacterial culture, and digital imaging of single cells) and molecular (e.g., multiplex PCR, hybridization probes, nanoparticles, synthetic biology tools, mass spectrometry, and sequencing technologies) methods. We also discuss representative point-of-care (POC) systems that integrate sample processing, fluid handling, and detection for rapid bacterial ID/AST. Finally, we highlight major remaining challenges and discuss potential future endeavors toward improving clinical outcomes with rapid bacterial ID/AST technologies.

  12. 21 CFR 864.5700 - Automated platelet aggregation system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... addition of an aggregating reagent to a platelet-rich plasma. (b) Classification. Class II (performance... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automated platelet aggregation system. 864.5700... § 864.5700 Automated platelet aggregation system. (a) Identification. An automated platelet aggregation...

  13. 21 CFR 864.5700 - Automated platelet aggregation system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... addition of an aggregating reagent to a platelet-rich plasma. (b) Classification. Class II (performance... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automated platelet aggregation system. 864.5700... § 864.5700 Automated platelet aggregation system. (a) Identification. An automated platelet aggregation...

  14. 21 CFR 864.5700 - Automated platelet aggregation system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... addition of an aggregating reagent to a platelet-rich plasma. (b) Classification. Class II (performance... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automated platelet aggregation system. 864.5700... § 864.5700 Automated platelet aggregation system. (a) Identification. An automated platelet aggregation...

  15. 21 CFR 864.5700 - Automated platelet aggregation system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... addition of an aggregating reagent to a platelet-rich plasma. (b) Classification. Class II (performance... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated platelet aggregation system. 864.5700... § 864.5700 Automated platelet aggregation system. (a) Identification. An automated platelet aggregation...

  16. 21 CFR 864.5700 - Automated platelet aggregation system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... addition of an aggregating reagent to a platelet-rich plasma. (b) Classification. Class II (performance... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automated platelet aggregation system. 864.5700... § 864.5700 Automated platelet aggregation system. (a) Identification. An automated platelet aggregation...

  17. 21 CFR 864.5220 - Automated differential cell counter.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automated differential cell counter. 864.5220... § 864.5220 Automated differential cell counter. (a) Identification. An automated differential cell... have the capability to flag, count, or classify immature or abnormal hematopoietic cells of the blood...

  18. 21 CFR 864.5220 - Automated differential cell counter.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automated differential cell counter. 864.5220... § 864.5220 Automated differential cell counter. (a) Identification. An automated differential cell... have the capability to flag, count, or classify immature or abnormal hematopoietic cells of the blood...

  19. 21 CFR 864.5220 - Automated differential cell counter.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automated differential cell counter. 864.5220... § 864.5220 Automated differential cell counter. (a) Identification. An automated differential cell... have the capability to flag, count, or classify immature or abnormal hematopoietic cells of the blood...

  20. 21 CFR 864.5220 - Automated differential cell counter.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automated differential cell counter. 864.5220... § 864.5220 Automated differential cell counter. (a) Identification. An automated differential cell... have the capability to flag, count, or classify immature or abnormal hematopoietic cells of the blood...

  1. 21 CFR 864.5220 - Automated differential cell counter.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated differential cell counter. 864.5220... § 864.5220 Automated differential cell counter. (a) Identification. An automated differential cell... have the capability to flag, count, or classify immature or abnormal hematopoietic cells of the blood...

  2. High-Throughput Analysis and Automation for Glycomics Studies.

    PubMed

    Shubhakar, Archana; Reiding, Karli R; Gardner, Richard A; Spencer, Daniel I R; Fernandes, Daryl L; Wuhrer, Manfred

    This review covers advances in analytical technologies for high-throughput (HTP) glycomics. Our focus is on structural studies of glycoprotein glycosylation to support biopharmaceutical realization and the discovery of glycan biomarkers for human disease. For biopharmaceuticals, there is increasing use of glycomics in Quality by Design studies to help optimize glycan profiles of drugs with a view to improving their clinical performance. Glycomics is also used in comparability studies to ensure consistency of glycosylation both throughout product development and between biosimilars and innovator drugs. In clinical studies there is also an expanding interest in the use of glycomics, for example in Genome Wide Association Studies, to follow changes in glycosylation patterns of biological tissues and fluids with the progress of certain diseases. These include cancers, neurodegenerative disorders and inflammatory conditions. Despite rising activity in this field, there are significant challenges in performing large-scale glycomics studies. The requirement is accurate identification and quantitation of individual glycan structures. However, glycoconjugate samples are often very complex and heterogeneous and contain many diverse branched glycan structures. In this article we cover HTP sample preparation and derivatization methods, sample purification, robotization, optimized glycan profiling by UHPLC, MS and multiplexed CE, as well as hyphenated techniques and automated data analysis tools. Throughout, we summarize the advantages and challenges with each of these technologies. The issues considered include reliability of the methods for glycan identification and quantitation, sample throughput, labor intensity, and affordability for large sample numbers.

  3. 21 CFR 864.9245 - Automated blood cell separator.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automated blood cell separator. 864.9245 Section... Blood and Blood Products § 864.9245 Automated blood cell separator. (a) Identification. An automated blood cell separator is a device that uses a centrifugal or filtration separation principle to...

  4. 21 CFR 864.9245 - Automated blood cell separator.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automated blood cell separator. 864.9245 Section... Blood and Blood Products § 864.9245 Automated blood cell separator. (a) Identification. An automated blood cell separator is a device that uses a centrifugal or filtration separation principle to...

  5. 21 CFR 864.9245 - Automated blood cell separator.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automated blood cell separator. 864.9245 Section... Blood and Blood Products § 864.9245 Automated blood cell separator. (a) Identification. An automated blood cell separator is a device that uses a centrifugal or filtration separation principle to...

  6. 21 CFR 864.9245 - Automated blood cell separator.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automated blood cell separator. 864.9245 Section... Blood and Blood Products § 864.9245 Automated blood cell separator. (a) Identification. An automated blood cell separator is a device that uses a centrifugal or filtration separation principle to...

  7. 21 CFR 864.9245 - Automated blood cell separator.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated blood cell separator. 864.9245 Section... Blood and Blood Products § 864.9245 Automated blood cell separator. (a) Identification. An automated blood cell separator is a device that uses a centrifugal or filtration separation principle to...

  8. Automated fault-management in a simulated spaceflight micro-world

    NASA Technical Reports Server (NTRS)

    Lorenz, Bernd; Di Nocera, Francesco; Rottger, Stefan; Parasuraman, Raja

    2002-01-01

    BACKGROUND: As human spaceflight missions extend in duration and distance from Earth, a self-sufficient crew will bear far greater onboard responsibility and authority for mission success. This will increase the need for automated fault management (FM). Human factors issues in the use of such systems include maintenance of cognitive skill, situational awareness (SA), trust in automation, and workload. This study examines the human performance consequences of operator use of intelligent FM support in interaction with an autonomous, space-related, atmospheric control system. METHODS: An expert system representing a model-based reasoning agent supported operators at a low level of automation (LOA) by a computerized fault-finding guide, at a medium LOA by an automated diagnosis and recovery advisory, and at a high LOA by automated diagnosis and recovery implementation, subject to operator approval or veto. Ten percent of the experimental trials involved complete failure of FM support. RESULTS: Benefits of automation were reflected in more accurate diagnoses, shorter fault identification time, and reduced subjective operator workload. Unexpectedly, fault identification times deteriorated more at the medium than at the high LOA during automation failure. Analyses of information sampling behavior showed that offloading operators from recovery implementation during reliable automation enabled operators at high LOA to engage in fault assessment activities. CONCLUSIONS: The potential threat to SA imposed by high-level automation, in which decision advisories are automatically generated, need not inevitably be counteracted by choosing a lower LOA. Instead, freeing operator cognitive resources by automatic implementation of recovery plans at a higher LOA can promote better fault comprehension, so long as the automation interface is designed to support efficient information sampling.

  9. 21 CFR 864.5260 - Automated cell-locating device.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automated cell-locating device. 864.5260 Section... § 864.5260 Automated cell-locating device. (a) Identification. An automated cell-locating device is a device used to locate blood cells on a peripheral blood smear, allowing the operator to identify and...

  10. 21 CFR 864.5260 - Automated cell-locating device.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automated cell-locating device. 864.5260 Section... § 864.5260 Automated cell-locating device. (a) Identification. An automated cell-locating device is a device used to locate blood cells on a peripheral blood smear, allowing the operator to identify and...

  11. 21 CFR 864.5260 - Automated cell-locating device.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automated cell-locating device. 864.5260 Section... § 864.5260 Automated cell-locating device. (a) Identification. An automated cell-locating device is a device used to locate blood cells on a peripheral blood smear, allowing the operator to identify and...

  12. 21 CFR 864.5260 - Automated cell-locating device.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automated cell-locating device. 864.5260 Section... § 864.5260 Automated cell-locating device. (a) Identification. An automated cell-locating device is a device used to locate blood cells on a peripheral blood smear, allowing the operator to identify and...

  13. 21 CFR 864.5260 - Automated cell-locating device.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated cell-locating device. 864.5260 Section... § 864.5260 Automated cell-locating device. (a) Identification. An automated cell-locating device is a device used to locate blood cells on a peripheral blood smear, allowing the operator to identify and...

  14. Automated identification of best-quality coronary artery segments from multiple-phase coronary CT angiography (cCTA) for vessel analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Wei, Jun; Kazerooni, Ella A.

    2016-03-01

    We are developing an automated method to identify the best-quality segment among the corresponding segments in multiple-phase cCTA. The coronary artery trees are automatically extracted from different cCTA phases using our multi-scale vessel segmentation and tracking method. An automated registration method is then used to align the multiple-phase artery trees. The corresponding coronary artery segments are identified in the registered vessel trees and are straightened by curved planar reformation (CPR). Four features are extracted from each segment in each phase as quality indicators in the original CT volume and the straightened CPR volume. Each quality indicator is used as a voting classifier to vote on the corresponding segments. A newly designed weighted voting ensemble (WVE) classifier is finally used to determine the best-quality coronary segment. An observer preference study is conducted with three readers who visually rank the quality of the vessels on a 1 to 6 scale. Six and 10 cCTA cases are used as the training and test sets, respectively, in this preliminary study. For the 10 test cases, the agreement between automatically identified best-quality (AI-BQ) segments and the radiologist's top 2 rankings is 79.7%, and the agreements between AI-BQ and the other two readers are 74.8% and 83.7%, respectively. The results demonstrated that the performance of our automated method was comparable to that of experienced readers for identification of the best-quality coronary segments.
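
    The weighted voting idea described above can be illustrated with a minimal sketch: each quality indicator casts a vote for the phase it ranks highest, and the votes are combined with per-indicator weights. The feature values and weights below are purely hypothetical; the paper's actual quality indicators, weighting scheme, and training procedure are not reproduced here.

```python
import numpy as np

def select_best_phase(quality, weights):
    """Pick the best-quality phase for one vessel segment.

    quality : (n_phases, n_indicators) array of quality-indicator values,
              where larger values are assumed to mean better image quality.
    weights : (n_indicators,) voting weights, e.g. tuned on a training set.
    """
    n_phases, n_indicators = quality.shape
    votes = np.zeros(n_phases)
    # Each quality indicator acts as a single voting classifier:
    # it casts its weighted vote for the phase it ranks highest.
    for j in range(n_indicators):
        best = np.argmax(quality[:, j])
        votes[best] += weights[j]
    return int(np.argmax(votes))

# Hypothetical example: 4 indicators measured in 3 cCTA phases.
quality = np.array([[0.6, 0.7, 0.5, 0.4],
                    [0.8, 0.6, 0.9, 0.7],
                    [0.5, 0.9, 0.4, 0.6]])
weights = np.array([0.3, 0.2, 0.3, 0.2])    # illustrative weights only
print(select_best_phase(quality, weights))  # -> 1 (second phase wins)
```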

  15. 21 CFR 864.5600 - Automated hematocrit instrument.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... measures the packed red cell volume of a blood sample to distinguish normal from abnormal states, such as anemia and erythrocytosis (an increase in the number of red cells). (b) Classification. Class II... § 864.5600 Automated hematocrit instrument. (a) Identification. An automated hematocrit instrument is a...

  16. 21 CFR 864.5600 - Automated hematocrit instrument.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... measures the packed red cell volume of a blood sample to distinguish normal from abnormal states, such as anemia and erythrocytosis (an increase in the number of red cells). (b) Classification. Class II... § 864.5600 Automated hematocrit instrument. (a) Identification. An automated hematocrit instrument is a...

  17. 21 CFR 864.5600 - Automated hematocrit instrument.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... measures the packed red cell volume of a blood sample to distinguish normal from abnormal states, such as anemia and erythrocytosis (an increase in the number of red cells). (b) Classification. Class II... § 864.5600 Automated hematocrit instrument. (a) Identification. An automated hematocrit instrument is a...

  18. 21 CFR 864.5600 - Automated hematocrit instrument.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... measures the packed red cell volume of a blood sample to distinguish normal from abnormal states, such as anemia and erythrocytosis (an increase in the number of red cells). (b) Classification. Class II... § 864.5600 Automated hematocrit instrument. (a) Identification. An automated hematocrit instrument is a...

  19. 21 CFR 864.5600 - Automated hematocrit instrument.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... measures the packed red cell volume of a blood sample to distinguish normal from abnormal states, such as anemia and erythrocytosis (an increase in the number of red cells). (b) Classification. Class II... § 864.5600 Automated hematocrit instrument. (a) Identification. An automated hematocrit instrument is a...

  20. 21 CFR 864.5240 - Automated blood cell diluting apparatus.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automated blood cell diluting apparatus. 864.5240 Section 864.5240 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... § 864.5240 Automated blood cell diluting apparatus. (a) Identification. An automated blood cell diluting...

  1. 21 CFR 864.5240 - Automated blood cell diluting apparatus.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automated blood cell diluting apparatus. 864.5240 Section 864.5240 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... § 864.5240 Automated blood cell diluting apparatus. (a) Identification. An automated blood cell diluting...

  2. 21 CFR 864.5240 - Automated blood cell diluting apparatus.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automated blood cell diluting apparatus. 864.5240 Section 864.5240 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... § 864.5240 Automated blood cell diluting apparatus. (a) Identification. An automated blood cell diluting...

  3. Automated Identification of Volcanic Plumes using the Ozone Monitoring Instrument (OMI)

    NASA Astrophysics Data System (ADS)

    Flower, V. J. B.; Oommen, T.; Carn, S. A.

    2015-12-01

    Volcanic eruptions are a global phenomenon that increasingly impacts human populations due to factors such as the extension of population centres into areas of higher risk, the expansion of agricultural sectors to accommodate increased production, and the growing impact of volcanic plumes on air travel. In areas where extensive monitoring is present these impacts can be moderated by ground-based monitoring and alert systems; however, many volcanoes have little or no monitoring capability. In many of these regions volcanic alerts are generated by local communities with limited resources or formal communication systems; however, additional eruption alerts can result from chance encounters with passing aircraft. In contrast, satellite-based remote sensing instruments possess the capability to provide near-global daily monitoring, facilitating automated volcanic eruption detection. One such system, MODVOLC, generates eruption alerts through the detection of thermal anomalies and is currently operational using moderate-resolution MODIS satellite data. Within this work we outline a method to distinguish SO2 eruptions from background levels recorded by the Ozone Monitoring Instrument (OMI) through the identification and classification of volcanic activity over a 5-year period. The incorporation of these data into a logistic regression model facilitated the classification of volcanic events with an overall accuracy of 80% whilst consistently identifying plumes with a mass of 400 tons or higher. The implementation of the developed model could facilitate the near-real-time identification of new and ongoing volcanic activity on a global scale.
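
    As a rough illustration of the classification step, the sketch below fits a logistic regression model to hypothetical per-scene features (SO2 mass, plume area, maximum column amount) and separates eruption scenes from background. The feature names, simulated values, and decision behaviour are assumptions for illustration only; they are not the study's actual OMI-derived predictors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical features per daily OMI scene: SO2 mass (tons), plume area,
# and maximum column amount. Labels: 1 = volcanic eruption, 0 = background.
rng = np.random.default_rng(0)
X_background = rng.normal([50, 5, 0.3], [30, 3, 0.2], size=(200, 3))
X_eruption   = rng.normal([600, 40, 3.0], [200, 15, 1.0], size=(200, 3))
X = np.vstack([X_background, X_eruption])
y = np.concatenate([np.zeros(200), np.ones(200)])

model = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", accuracy_score(y, model.predict(X)))
# Probability that a new scene with ~400 t of SO2 is classified as an eruption:
print(model.predict_proba([[400, 30, 2.0]])[0, 1])
```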

  4. Automated identification and predictive tools to help identify high-risk heart failure patients: pilot evaluation.

    PubMed

    Evans, R Scott; Benuzillo, Jose; Horne, Benjamin D; Lloyd, James F; Bradshaw, Alejandra; Budge, Deborah; Rasmusson, Kismet D; Roberts, Colleen; Buckway, Jason; Geer, Norma; Garrett, Teresa; Lappé, Donald L

    2016-09-01

    Develop and evaluate an automated identification and predictive risk report for hospitalized heart failure (HF) patients. Dictated free-text reports from the previous 24 h were analyzed each day with natural language processing (NLP) to help improve the early identification of hospitalized patients with HF. A second application, which uses an Intermountain Healthcare-developed predictive score to determine each HF patient's risk for 30-day hospital readmission and 30-day mortality, was also developed. That information was included in an identification and predictive risk report, which was evaluated at a 354-bed hospital that treats high-risk HF patients. The addition of NLP-identified HF patients increased the identification score's sensitivity from 82.6% to 95.3% and its specificity from 82.7% to 97.5%; the model's positive predictive value was 97.45%. Daily multidisciplinary discharge planning meetings are now based on the information provided by the HF identification and predictive report, and clinicians' review of potential HF admissions takes less time compared to the previously used manual methodology (10 vs 40 min). An evaluation of the use of the HF predictive report identified a significant reduction in 30-day mortality and a significant increase in patient discharges to home care instead of to a specialized nursing facility. Using clinical decision support to help identify HF patients and automatically calculating their 30-day all-cause readmission and 30-day mortality risks, coupled with a multidisciplinary care process pathway, was found to be an effective process to improve HF patient identification, significantly reduce 30-day mortality, and significantly increase patient discharges to home care. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. Input-output identification of controlled discrete manufacturing systems

    NASA Astrophysics Data System (ADS)

    Estrada-Vargas, Ana Paula; López-Mellado, Ernesto; Lesage, Jean-Jacques

    2014-03-01

    The automated construction of discrete event models from observations of a system's external behaviour is addressed. This problem, often referred to as system identification, allows obtaining models of ill-known (or even unknown) systems. In this article, an identification method for discrete event systems (DESs) controlled by a programmable logic controller is presented. The method allows processing a large quantity of observed long sequences of input/output signals generated by the controller and yields an interpreted Petri net model describing the closed-loop behaviour of the automated DES. The proposed technique allows the identification of actual complex systems because it is sufficiently efficient and well adapted to cope with both the technological characteristics of industrial controllers and data collection requirements. Based on polynomial-time algorithms, the method is implemented as an efficient software tool which constructs and draws the model automatically; an overview of this tool is given through a case study dealing with an automated manufacturing system.
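
    A greatly simplified sketch of the identification idea is given below: observed sequences of controller I/O vectors are scanned for events (changes between consecutive vectors), and an observed transition relation is accumulated. This is only a toy illustration of input-output identification, not the article's interpreted Petri net synthesis algorithm; the signal layout and example runs are hypothetical.

```python
from collections import defaultdict

def identify_transitions(sequences):
    """Build an observed-state transition relation from I/O vector sequences.

    sequences : list of runs, each run a list of I/O vectors (tuples of 0/1).
    Returns a dict mapping each observed state (I/O vector) to the set of
    (event, successor-state) pairs observed from that state.
    """
    transitions = defaultdict(set)
    for run in sequences:
        for current, nxt in zip(run, run[1:]):
            if current == nxt:
                continue  # no event fired between the two samples
            # The "event" is the set of signals that changed, with direction.
            event = tuple((i, nxt[i] - current[i])
                          for i in range(len(current)) if nxt[i] != current[i])
            transitions[current].add((event, nxt))
    return transitions

# Two short hypothetical runs of a controller with 3 binary I/O signals.
runs = [[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 0, 0)],
        [(0, 0, 0), (1, 0, 0), (1, 0, 1)]]
for state, outgoing in identify_transitions(runs).items():
    print(state, "->", outgoing)
```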

  6. Evaluation of Automated Yeast Identification System

    NASA Technical Reports Server (NTRS)

    McGinnis, M. R.

    1996-01-01

    One hundred and nine teleomorphic and anamorphic yeast isolates representing approximately 30 taxa were used to evaluate the accuracy of the Biolog yeast identification system. Isolates derived from nomenclatural types, environmental isolates, and clinical isolates of known identity were tested in the Biolog system. Of the isolates tested, 81 were in the Biolog database. The system correctly identified 40, incorrectly identified 29, and was unable to identify 12. Of the 28 isolates not in the database, 18 were given names, whereas 10 were not. The Biolog yeast identification system is inadequate for the identification of yeasts originating from the environment during space program activities.

  7. Evaluation of software tools for automated identification of neuroanatomical structures in quantitative β-amyloid PET imaging to diagnose Alzheimer's disease.

    PubMed

    Tuszynski, Tobias; Rullmann, Michael; Luthardt, Julia; Butzke, Daniel; Tiepolt, Solveig; Gertz, Hermann-Josef; Hesse, Swen; Seese, Anita; Lobsien, Donald; Sabri, Osama; Barthel, Henryk

    2016-06-01

    For regional quantification of nuclear brain imaging data, defining volumes of interest (VOIs) by hand is still the gold standard. As this procedure is time-consuming and operator-dependent, a variety of software tools for automated identification of neuroanatomical structures have been developed. As the quality and performance of those tools in analyzing amyloid PET data have so far been poorly investigated, in this project we compared four algorithms for automated VOI definition (HERMES Brass, two PMOD approaches, and FreeSurfer) against the conventional method. We systematically analyzed florbetaben brain PET and MRI data of ten patients with probable Alzheimer's dementia (AD) and ten age-matched healthy controls (HCs) collected in a previous clinical study. VOIs were manually defined on the data as well as through the four automated workflows. Standardized uptake value ratios (SUVRs) with the cerebellar cortex as a reference region were obtained for each VOI. SUVR comparisons between ADs and HCs were carried out using Mann-Whitney U tests, and effect sizes (Cohen's d) were calculated. SUVRs of automatically generated VOIs were correlated with SUVRs of conventionally derived VOIs (Pearson's tests). The composite neocortex SUVRs obtained by manually defined VOIs were significantly higher for ADs vs. HCs (p=0.010, d=1.53). This was also the case for the four tested automated approaches, which achieved effect sizes of d=1.38 to d=1.62. SUVRs of automatically generated VOIs correlated significantly with those of the hand-drawn VOIs in a number of brain regions, with regional differences in the degree of these correlations. The best overall correlation was observed in the lateral temporal VOI for all tested software tools (r=0.82 to r=0.95, p<0.001). Automated VOI definition by the software tools tested has great potential to substitute for the current standard procedure of manually defining VOIs in β-amyloid PET data analysis.
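
    The group comparison described above (Mann-Whitney U test plus Cohen's d, and Pearson correlation between automated and manual SUVRs) can be sketched with SciPy as follows. The SUVR values below are invented for illustration and do not come from the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu, pearsonr

# Hypothetical composite-neocortex SUVRs for 10 AD patients and 10 controls.
suvr_ad = np.array([1.8, 1.9, 1.7, 2.0, 1.6, 1.9, 1.8, 2.1, 1.7, 1.9])
suvr_hc = np.array([1.2, 1.3, 1.1, 1.4, 1.2, 1.3, 1.2, 1.1, 1.3, 1.2])

# Nonparametric group comparison.
u, p = mannwhitneyu(suvr_ad, suvr_hc, alternative="two-sided")

# Cohen's d with a pooled standard deviation.
pooled_sd = np.sqrt((suvr_ad.var(ddof=1) + suvr_hc.var(ddof=1)) / 2)
d = (suvr_ad.mean() - suvr_hc.mean()) / pooled_sd
print(f"U={u:.1f}, p={p:.4f}, d={d:.2f}")

# Agreement between automated and manually derived SUVRs (simulated values).
suvr_auto = suvr_ad + np.random.default_rng(1).normal(0, 0.05, 10)
r, p_r = pearsonr(suvr_ad, suvr_auto)
print(f"r={r:.2f}, p={p_r:.4f}")
```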

  8. Automated podosome identification and characterization in fluorescence microscopy images.

    PubMed

    Meddens, Marjolein B M; Rieger, Bernd; Figdor, Carl G; Cambi, Alessandra; van den Dries, Koen

    2013-02-01

    Podosomes are cellular adhesion structures involved in matrix degradation and invasion that comprise an actin core and a ring of cytoskeletal adaptor proteins. They are most often identified by staining with phalloidin, which binds F-actin and therefore visualizes the core. However, not only podosomes, but also many other cytoskeletal structures contain actin, which makes podosome segmentation by automated image processing difficult. Here, we have developed a quantitative image analysis algorithm that is optimized to identify podosome cores within a typical sample stained with phalloidin. By sequential local and global thresholding, our analysis identifies up to 76% of podosome cores excluding other F-actin-based structures. Based on the overlap in podosome identifications and quantification of podosome numbers, our algorithm performs equally well compared to three experts. Using our algorithm we show effects of actin polymerization and myosin II inhibition on the actin intensity in both podosome core and associated actin network. Furthermore, by expanding the core segmentations, we reveal a previously unappreciated differential distribution of cytoskeletal adaptor proteins within the podosome ring. These applications illustrate that our algorithm is a valuable tool for rapid and accurate large-scale analysis of podosomes to increase our understanding of these characteristic adhesion structures.
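
    The sequential local and global thresholding strategy described above can be sketched with scikit-image as follows. This is a minimal illustration under assumed parameter values (block size, minimum object area) and a synthetic image, not the authors' published algorithm.

```python
import numpy as np
from skimage import filters, measure, morphology

def segment_cores(actin_image, block_size=51, min_area=20):
    """Rough podosome-core segmentation combining global and local thresholds."""
    # Global threshold separates bright F-actin structures from background.
    global_mask = actin_image > filters.threshold_otsu(actin_image)
    # Local (adaptive) threshold keeps only objects that are bright relative
    # to their immediate neighbourhood, i.e. punctate core-like structures.
    local_mask = actin_image > filters.threshold_local(actin_image, block_size)
    mask = morphology.remove_small_objects(global_mask & local_mask, min_area)
    return measure.label(mask)

# Synthetic test image: noisy background plus two bright spots.
img = np.random.default_rng(0).normal(10, 1, (200, 200))
img[50:60, 50:60] += 30
img[120:130, 140:150] += 30
labels = segment_cores(img)
print("cores found:", labels.max())  # -> 2 for this synthetic image
```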

  9. Space station automation study-satellite servicing, volume 2

    NASA Technical Reports Server (NTRS)

    Meissinger, H. F.

    1984-01-01

    Technology requirements for automated satellite servicing operations aboard the NASA space station were studied. The three major tasks addressed: (1) servicing requirements (satellite and space station elements) and the role of automation; (2) assessment of automation technology; and (3) conceptual design of servicing facilities on the space station. It is found that many servicing functions could benefit from automation support and that certain research and development activities on automation technologies for servicing should start as soon as possible. Also, some advanced automation developments for orbital servicing could be effectively applied to U.S. industrial ground-based operations.

  10. Architecture Views Illustrating the Service Automation Aspect of SOA

    NASA Astrophysics Data System (ADS)

    Gu, Qing; Cuadrado, Félix; Lago, Patricia; Dueñas, Juan C.

    Earlier in this book, Chapter 8 provided a detailed analysis of service engineering, including a review of service engineering techniques and methodologies. This chapter is closely related to Chapter 8, as it shows how such approaches can be used to develop a service, with particular emphasis on the identification of three views (the automation decision view, the degree of service automation view, and the service automation related data view) that structure and ease the elicitation and documentation of stakeholders' concerns. This is carried out through two large case studies that capture industrial needs in illustrating service deployment and configuration automation. This set of views adds to more traditional notations like UML the visual power of drawing users' attention to the concerns being addressed, and assists them in their work. This is especially crucial in service-oriented architecting, where service automation is highly demanded.

  11. Automated identification of insect vectors of Chagas disease in Brazil and Mexico: the Virtual Vector Lab

    PubMed Central

    Gurgel-Gonçalves, Rodrigo; Komp, Ed; Campbell, Lindsay P.; Khalighifar, Ali; Mellenbruch, Jarrett; Mendonça, Vagner José; Owens, Hannah L.; de la Cruz Felix, Keynes; Ramsey, Janine M.

    2017-01-01

    Identification of arthropods important in disease transmission is a crucial, yet difficult, task that can demand considerable training and experience. An important case in point is that of the 150+ species of Triatominae, vectors of Trypanosoma cruzi, causative agent of Chagas disease across the Americas. We present a fully automated system that is able to identify triatomine bugs from Mexico and Brazil with an accuracy consistently above 80%, and with considerable potential for further improvement. The system processes digital photographs from a photo apparatus into landmarks, and uses ratios of measurements among those landmarks, as well as (in a preliminary exploration) two measurements that approximate aspects of coloration, as the basis for classification. This project has thus produced a working prototype that achieves reasonably robust correct identification rates, although many more developments can and will be added, and—more broadly—the project illustrates the value of multidisciplinary collaborations in resolving difficult and complex challenges. PMID:28439451

  12. Automated identification of insect vectors of Chagas disease in Brazil and Mexico: the Virtual Vector Lab.

    PubMed

    Gurgel-Gonçalves, Rodrigo; Komp, Ed; Campbell, Lindsay P; Khalighifar, Ali; Mellenbruch, Jarrett; Mendonça, Vagner José; Owens, Hannah L; de la Cruz Felix, Keynes; Peterson, A Townsend; Ramsey, Janine M

    2017-01-01

    Identification of arthropods important in disease transmission is a crucial, yet difficult, task that can demand considerable training and experience. An important case in point is that of the 150+ species of Triatominae, vectors of Trypanosoma cruzi, causative agent of Chagas disease across the Americas. We present a fully automated system that is able to identify triatomine bugs from Mexico and Brazil with an accuracy consistently above 80%, and with considerable potential for further improvement. The system processes digital photographs from a photo apparatus into landmarks, and uses ratios of measurements among those landmarks, as well as (in a preliminary exploration) two measurements that approximate aspects of coloration, as the basis for classification. This project has thus produced a working prototype that achieves reasonably robust correct identification rates, although many more developments can and will be added, and, more broadly, the project illustrates the value of multidisciplinary collaborations in resolving difficult and complex challenges.

  13. Demonstration of the feasibility of automated silicon solar cell fabrication

    NASA Technical Reports Server (NTRS)

    Taylor, W. E.; Schwartz, F. M.

    1975-01-01

    A study effort was undertaken to determine the process steps and design requirements of an automated silicon solar cell production facility. The key process steps were identified and a laboratory model was conceptually designed to demonstrate the feasibility of automating the silicon solar cell fabrication process. A detailed laboratory model was designed to demonstrate those functions most critical to the question of solar cell fabrication process automation feasibility. The study and conceptual design have established the technical feasibility of automating the solar cell manufacturing process to produce low-cost solar cells with improved performance. Estimates predict an automated process throughput of 21,973 kilograms of silicon a year on a three-shift, 49-week basis, producing 4,747,000 hexagonal cells (38 mm/side), a total of 3,373 kilowatts at an estimated manufacturing cost of $0.866 per cell or $1.22 per watt.

  14. Imaging mass spectrometry data reduction: automated feature identification and extraction.

    PubMed

    McDonnell, Liam A; van Remoortere, Alexandra; de Velde, Nico; van Zeijl, René J M; Deelder, André M

    2010-12-01

    Imaging MS now enables the parallel analysis of hundreds of biomolecules, spanning multiple molecular classes, which allows tissues to be described by their molecular content and distribution. When combined with advanced data analysis routines, tissues can be analyzed and classified based solely on their molecular content. Such molecular histology techniques have been used to distinguish regions with differential molecular signatures that could not be distinguished using established histologic tools. However, its potential to provide an independent, complementary analysis of clinical tissues has been limited by the very large file sizes and large number of discrete variables associated with imaging MS experiments. Here we demonstrate data reduction tools, based on automated feature identification and extraction, for peptide, protein, and lipid imaging MS, using multiple imaging MS technologies, that reduce data loads and the number of variables by >100×, and that highlight highly-localized features that can be missed using standard data analysis strategies. It is then demonstrated how these capabilities enable multivariate analysis on large imaging MS datasets spanning multiple tissues. Copyright © 2010 American Society for Mass Spectrometry. Published by Elsevier Inc. All rights reserved.

  15. Automated identification of cone photoreceptors in adaptive optics retinal images.

    PubMed

    Li, Kaccie Y; Roorda, Austin

    2007-05-01

    In making noninvasive measurements of the human cone mosaic, the task of labeling each individual cone is unavoidable. Manual labeling is a time-consuming process, setting the motivation for the development of an automated method. An automated algorithm for labeling cones in adaptive optics (AO) retinal images is implemented and tested on real data. The optical fiber properties of cones aided the design of the algorithm. Out of 2153 manually labeled cones from six different images, the automated method correctly identified 94.1% of them. The agreement between the automated and the manual labeling methods varied from 92.7% to 96.2% across the six images. Results between the two methods disagreed for 1.2% to 9.1% of the cones. Voronoi analysis of large montages of AO retinal images confirmed the general hexagonal-packing structure of retinal cones as well as the general cone density variability across portions of the retina. The consistency of our measurements demonstrates the reliability and practicality of having an automated solution to this problem.
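
    A minimal sketch of one common approach to this task, detecting cone centres as local intensity maxima after smoothing, is shown below. The spacing, threshold, and synthetic image are assumptions for illustration; the published algorithm additionally exploits the optical fiber properties of cones and is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def label_cones(image, spacing=5, threshold=0.5):
    """Return candidate cone centres as bright local maxima.

    spacing   : assumed minimum cone spacing in pixels (hypothetical value).
    threshold : minimum smoothed intensity for a maximum to count as a cone.
    """
    smoothed = ndimage.gaussian_filter(image, sigma=1.0)
    # A pixel is a candidate cone centre if it equals the maximum of its
    # neighbourhood and exceeds the intensity threshold.
    local_max = smoothed == ndimage.maximum_filter(smoothed, size=spacing)
    return np.argwhere(local_max & (smoothed > threshold))

# Synthetic AO-like image: bright Gaussian blobs on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0, 0.1, (100, 100))
yy, xx = np.mgrid[0:100, 0:100]
for y, x in [(20, 20), (20, 40), (60, 70)]:
    img += np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / 4.0)
print(len(label_cones(img)), "cones detected")
```

    On this synthetic image the three seeded blobs should be the only maxima exceeding the threshold; real AO images would of course require tuning of the smoothing, spacing, and threshold.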

  16. Clinical Laboratory Automation: A Case Study.

    PubMed

    Archetti, Claudia; Montanelli, Alessandro; Finazzi, Dario; Caimi, Luigi; Garrafa, Emirena

    2017-04-13

    This paper presents a case study of an automated clinical laboratory in a large urban academic teaching hospital in the North of Italy, the Spedali Civili in Brescia, where four laboratories were merged into a single laboratory through the introduction of laboratory automation. The analysis compares the preautomation situation and the new setting from a cost perspective, by considering direct and indirect costs. It also presents an analysis of the turnaround time (TAT). The study considers equipment, staff and indirect costs. The introduction of automation led to a slight increase in equipment costs, which is more than offset by a remarkable decrease in staff costs. Consequently, total costs decreased by 12.55%. The analysis of the TAT shows an improvement for nonemergency exams, while emergency exams are still validated within the maximum time imposed by the hospital. The strategy adopted by the management, which was based on re-using the available equipment and staff when merging the pre-existing laboratories, has reached its goal: introducing automation while minimizing the costs.

  17. Automated Identification and Shape Analysis of Chorus Elements in the Van Allen Radiation Belts

    NASA Astrophysics Data System (ADS)

    Sen Gupta, Ananya; Kletzing, Craig; Howk, Robin; Kurth, William; Matheny, Morgan

    2017-12-01

    An important goal of the Van Allen Probes mission is to understand wave-particle interaction by chorus emissions in terrestrial Van Allen radiation belts. To test models, statistical characterization of chorus properties, such as amplitude variation and sweep rates, is an important scientific goal. The Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) instrumentation suite provides measurements of wave electric and magnetic fields as well as DC magnetic fields for the Van Allen Probes mission. However, manual inspection across terabytes of EMFISIS data is not feasible and as such introduces human confirmation bias. We present signal processing techniques for automated identification, shape analysis, and sweep rate characterization of high-amplitude whistler-mode chorus elements in the Van Allen radiation belts. Specifically, we develop signal processing techniques based on the radon transform that disambiguate chorus elements with a dominant sweep rate against hiss-like chorus. We present representative results validating our techniques and also provide statistical characterization of detected chorus elements across a case study of a 6 s epoch.
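
    The radon-transform idea mentioned above can be illustrated on a synthetic spectrogram patch: a coherent sweeping element concentrates projected energy at one angle, whereas hiss-like emission spreads it over many angles. The patch, sweep rate, and concentration measure below are assumptions for illustration only, not the paper's detection pipeline.

```python
import numpy as np
from skimage.transform import radon

# Synthetic spectrogram patch (time x frequency) containing one rising
# chorus-like element: a bright line with a constant sweep rate.
patch = np.zeros((100, 100))
for t in range(100):
    f = int(0.6 * t + 10)          # hypothetical sweep of 0.6 bins per bin
    if f < 100:
        patch[t, f] = 1.0

# The radon transform integrates the image along lines at each angle.
angles = np.arange(0.0, 180.0, 1.0)
sinogram = radon(patch, theta=angles, circle=False)

# Energy concentrated at one projection angle -> element with a dominant
# sweep rate; energy spread over many angles -> hiss-like emission.
profile = sinogram.max(axis=0)
peak_angle = angles[np.argmax(profile)]
concentration = profile.max() / profile.mean()
print(f"dominant angle: {peak_angle:.1f} deg, concentration: {concentration:.1f}")
```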

  18. Development of Automated Image Analysis Software for Suspended Marine Particle Classification

    DTIC Science & Technology

    2003-09-30

    Development of Automated Image Analysis Software for Suspended Marine Particle Classification. Scott Samson, Center for Ocean Technology. The objective is to develop automated image analysis software to reduce the effort and time required for manual identification of plankton images.

  19. 21 CFR 864.9285 - Automated cell-washing centrifuge for immuno-hematology.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automated cell-washing centrifuge for immuno... Establishments That Manufacture Blood and Blood Products § 864.9285 Automated cell-washing centrifuge for immuno-hematology. (a) Identification. An automated cell-washing centrifuge for immuno-hematology is a device used...

  20. 21 CFR 864.9285 - Automated cell-washing centrifuge for immuno-hematology.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automated cell-washing centrifuge for immuno... Establishments That Manufacture Blood and Blood Products § 864.9285 Automated cell-washing centrifuge for immuno-hematology. (a) Identification. An automated cell-washing centrifuge for immuno-hematology is a device used...

  1. 21 CFR 864.9285 - Automated cell-washing centrifuge for immuno-hematology.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automated cell-washing centrifuge for immuno... Establishments That Manufacture Blood and Blood Products § 864.9285 Automated cell-washing centrifuge for immuno-hematology. (a) Identification. An automated cell-washing centrifuge for immuno-hematology is a device used...

  2. 21 CFR 864.9285 - Automated cell-washing centrifuge for immuno-hematology.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automated cell-washing centrifuge for immuno... Establishments That Manufacture Blood and Blood Products § 864.9285 Automated cell-washing centrifuge for immuno-hematology. (a) Identification. An automated cell-washing centrifuge for immuno-hematology is a device used...

  3. 21 CFR 864.9285 - Automated cell-washing centrifuge for immuno-hematology.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated cell-washing centrifuge for immuno... Establishments That Manufacture Blood and Blood Products § 864.9285 Automated cell-washing centrifuge for immuno-hematology. (a) Identification. An automated cell-washing centrifuge for immuno-hematology is a device used...

  4. 21 CFR 864.9300 - Automated Coombs test systems.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...

  5. 21 CFR 864.9300 - Automated Coombs test systems.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...

  6. 21 CFR 864.9300 - Automated Coombs test systems.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...

  7. 21 CFR 864.9300 - Automated Coombs test systems.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...

  8. 21 CFR 864.9300 - Automated Coombs test systems.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Blood and Blood Products § 864.9300 Automated Coombs test systems. (a) Identification. An automated Coombs test system is a device used to detect and identify antibodies in patient sera or antibodies bound to red cells. The Coombs test is used for the diagnosis of hemolytic disease of the newborn, and...

  9. Feasibility Study for an Automated Library System. Final Report.

    ERIC Educational Resources Information Center

    Beaumont and Associates, Inc.

    This study was initiated by the Newfoundland Public Library Services (NPLS) to assess the feasibility of automation for the library services and to determine the viability of an integrated automated library system for the NPLS. The study addresses the needs of NPLS in terms of library automation; benefits to be achieved through the introduction of…

  10. 21 CFR 864.9175 - Automated blood grouping and antibody test system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automated blood grouping and antibody test system...

  11. 21 CFR 864.9175 - Automated blood grouping and antibody test system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated blood grouping and antibody test system...

  12. 21 CFR 864.9175 - Automated blood grouping and antibody test system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automated blood grouping and antibody test system...

  13. 21 CFR 864.9175 - Automated blood grouping and antibody test system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automated blood grouping and antibody test system...

  14. 21 CFR 864.9175 - Automated blood grouping and antibody test system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Manufacture Blood and Blood Products § 864.9175 Automated blood grouping and antibody test system. (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automated blood grouping and antibody test system...

  15. [Recent trends in the standardization of laboratory automation].

    PubMed

    Tao, R; Yamashita, K

    2000-10-01

    Laboratory automation systems have been introduced to many clinical laboratories since the early 1990s. Meanwhile, it was found that differences in specimen tube dimensions, specimen identification formats, specimen carrier transportation equipment architecture, and electromechanical interfaces between the analyzers and the automation systems were preventing the systems from being introduced to a wider extent. To standardize the different interfaces and reduce the cost of laboratory automation, NCCLS and JCCLS started establishing standards for laboratory automation in 1996 and 1997, respectively. NCCLS has published five proposed standards, which are expected to be approved by the end of 2000.

  16. NREL Study Predicts Fuel and Emissions Impact of Automated Mobility

    Science.gov Websites

    NREL Study Predicts Fuel and Emissions Impact of Automated Mobility District (January 21, 2016). The NREL study shows that a campus-sized automated mobility district, ranging from four to 10 square miles, ...

  17. 21 CFR 866.1645 - Fully automated short-term incubation cycle antimicrobial susceptibility system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Fully automated short-term incubation cycle... Diagnostic Devices § 866.1645 Fully automated short-term incubation cycle antimicrobial susceptibility system. (a) Identification. A fully automated short-term incubation cycle antimicrobial susceptibility system...

  18. Automated Identification of River Hydromorphological Features Using UAV High Resolution Aerial Imagery.

    PubMed

    Casado, Monica Rivas; Gonzalez, Rocio Ballesteros; Kriechbaumer, Thomas; Veal, Amanda

    2015-11-04

    European legislation is driving the development of methods for river ecosystem protection in light of concerns over water quality and ecology. Key to their success is the accurate and rapid characterisation of physical features (i.e., hydromorphology) along the river. Image pattern recognition techniques have been successfully used for this purpose. The reliability of the methodology depends on both the quality of the aerial imagery and the pattern recognition technique used. Recent studies have proved the potential of Unmanned Aerial Vehicles (UAVs) to increase the quality of the imagery by capturing high resolution photography. Similarly, Artificial Neural Networks (ANN) have been shown to be a high precision tool for automated recognition of environmental patterns. This paper presents a UAV based framework for the identification of hydromorphological features from high resolution RGB aerial imagery using a novel classification technique based on ANNs. The framework is developed for a 1.4 km river reach along the river Dee in Wales, United Kingdom. For this purpose, a Falcon 8 octocopter was used to gather 2.5 cm resolution imagery. The results show that the accuracy of the framework is above 81%, performing particularly well at recognising vegetation. These results leverage the use of UAVs for environmental policy implementation and demonstrate the potential of ANNs and RGB imagery for high precision river monitoring and river management.
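
    As a minimal sketch of the ANN classification step, the example below trains a small multilayer perceptron on simple colour features of image patches. The classes, feature choice (mean RGB per patch), and simulated data are assumptions for illustration; the published framework uses a different feature set and class scheme on real UAV imagery.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical training data: mean RGB values of small image patches with
# labels 0 = water, 1 = bare bank, 2 = vegetation (illustrative classes only).
rng = np.random.default_rng(0)
water = rng.normal([60, 90, 120], 15, (300, 3))
bank  = rng.normal([150, 130, 110], 15, (300, 3))
veg   = rng.normal([70, 140, 60], 15, (300, 3))
X = np.vstack([water, bank, veg]) / 255.0
y = np.repeat([0, 1, 2], 300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)
print("patch accuracy:", accuracy_score(y_test, ann.predict(X_test)))
```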

  19. FBI fingerprint identification automation study: AIDS 3 evaluation report. Volume 9: Functional requirements

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The current system and subsystem used by the Identification Division are described. System constraints that dictate the system environment are discussed and boundaries within which solutions must be found are described. The functional requirements were related to the performance requirements. These performance requirements were then related to their applicable subsystems. The flow of data, documents, or other pieces of information from one subsystem to another or from the external world into the identification system is described. Requirements and design standards for a computer based system are presented.

  20. Automated Proton Track Identification in MicroBooNE Using Gradient Boosted Decision Trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodruff, Katherine

    MicroBooNE is a liquid argon time projection chamber (LArTPC) neutrino experiment that is currently running in the Booster Neutrino Beam at Fermilab. LArTPC technology allows for high-resolution, three-dimensional representations of neutrino interactions. A wide variety of software tools for automated reconstruction and selection of particle tracks in LArTPCs are actively being developed. Short, isolated proton tracks, the signal for low-momentum-transfer neutral current (NC) elastic events, are easily hidden in a large cosmic background. Detecting these low-energy tracks will allow us to probe interesting regions of the proton's spin structure. An effective method for selecting NC elastic events is to combine a highly efficient track reconstruction algorithm to find all candidate tracks with highly accurate particle identification using a machine learning algorithm. We present our work on particle track classification using gradient tree boosting software (XGBoost) and the performance on simulated neutrino data.
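
    A gradient-boosted-tree classifier of this kind can be sketched with the XGBoost library as below. The per-track features (length, mean dE/dx, straightness) and their simulated distributions are hypothetical stand-ins, not MicroBooNE's actual reconstruction outputs.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical per-track features: length (cm), mean dE/dx, and straightness.
rng = np.random.default_rng(0)
protons = np.column_stack([rng.normal(4, 2, 500),      # short tracks
                           rng.normal(8, 2, 500),      # high dE/dx
                           rng.normal(0.95, 0.03, 500)])
muons = np.column_stack([rng.normal(80, 30, 500),
                         rng.normal(2, 0.5, 500),
                         rng.normal(0.98, 0.02, 500)])
X = np.vstack([protons, muons])
y = np.concatenate([np.ones(500), np.zeros(500)]).astype(int)  # 1 = proton

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
bdt.fit(X_tr, y_tr)
print("proton-ID accuracy:", accuracy_score(y_te, bdt.predict(X_te)))
```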

  1. Feasibility of using a large Clinical Data Warehouse to automate the selection of diagnostic cohorts.

    PubMed

    Stephen, Reejis; Boxwala, Aziz; Gertman, Paul

    2003-01-01

    Data from Clinical Data Warehouses (CDWs) can be used for retrospective studies and for benchmarking. However, automated identification of cases from large datasets containing data items in free text fields is challenging. We developed an algorithm for categorizing pediatric patients presenting with respiratory distress into Bronchiolitis, Bacterial pneumonia and Asthma using clinical variables from a CDW. A feasibility study of this approach indicates that case selection may be automated.

  2. Automated Coronal Loop Identification Using Digital Image Processing Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.

    2003-01-01

    The results of a master's thesis project on a study of computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of associated magnetic field lines. The project addresses pattern recognition problems in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parametric space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized images of the coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information in the identification process. Results from both synthesized and solar images will be presented.
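
    The Hough-transform candidate in item (2) can be illustrated with a small sketch: bright pixels vote in (angle, distance) parameter space, and peaks in the accumulator correspond to straight features. Real coronal loops are curved, so this only illustrates the voting idea on a hypothetical straight segment; it is not the thesis algorithm, and the image and threshold are invented.

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

# Synthetic EUV-like image: a faint, roughly linear loop segment plus noise.
rng = np.random.default_rng(0)
img = rng.normal(0, 0.2, (128, 128))
rows = np.arange(30, 100)
cols = (0.8 * rows + 5).astype(int)        # hypothetical straight segment
img[rows, cols] += 2.0

# Threshold the image and accumulate votes in (angle, distance) space.
edges = img > 1.0
tested_angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
h, theta, d = hough_line(edges, theta=tested_angles)
accum, angles, dists = hough_line_peaks(h, theta, d, num_peaks=1)
print(f"detected line: angle={np.rad2deg(angles[0]):.1f} deg, dist={dists[0]:.1f}")
```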

  3. Improving patient safety via automated laboratory-based adverse event grading.

    PubMed

    Niland, Joyce C; Stiller, Tracey; Neat, Jennifer; Londrc, Adina; Johnson, Dina; Pannoni, Susan

    2012-01-01

    The identification and grading of adverse events (AEs) during the conduct of clinical trials is a labor-intensive and error-prone process. This paper describes and evaluates a software tool developed by City of Hope to automate complex algorithms to assess laboratory results and identify and grade AEs. We compared AEs identified by the automated system with those previously assessed manually, to evaluate missed/misgraded AEs. We also conducted a prospective paired time assessment of automated versus manual AE assessment. We found a substantial improvement in accuracy/completeness with the automated grading tool, which identified an additional 17% of severe grade 3-4 AEs that had been missed/misgraded manually. The automated system also provided an average time saving of 5.5 min per treatment course. With 400 ongoing treatment trials at City of Hope and an average of 1800 laboratory results requiring assessment per study, the implications of these findings for patient safety are enormous.

  4. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Sewy, D.; Pickering, C.; Sauers, R.

    1984-01-01

    The purpose of phase 2 of the power subsystem automation study was to demonstrate the feasibility of using computer software to manage an aspect of the electrical power subsystem on a space station. The state of the art in expert systems software was investigated in this study. This effort resulted in the demonstration of prototype expert system software for managing one aspect of a simulated space station power subsystem.

  5. FBI fingerprint identification automation study: AIDS 3 evaluation report. Volume 5: Current system evaluation

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.

    1980-01-01

    The performance, costs, organization and other characteristics of both the manual system and AIDS 2 were used to establish a baseline case. The results of the evaluation are to be used to determine the feasibility of the AIDS 3 System, as well as provide a basis for ranking alternative systems during the second phase of the JPL study. The results of the study were tabulated by subject, scope and methods, providing a descriptive, quantitative and qualitative analysis of the current operating systems employed by the FBI Identification Division.

  6. NMR-based automated protein structure determination.

    PubMed

    Würz, Julia M; Kazemi, Sina; Schmidt, Elena; Bagaria, Anurag; Güntert, Peter

    2017-08-15

    NMR spectra analysis for protein structure determination can now in many cases be performed by automated computational methods. This overview of the computational methods for NMR protein structure analysis presents recent automated methods for signal identification in multidimensional NMR spectra, sequence-specific resonance assignment, collection of conformational restraints, and structure calculation, as implemented in the CYANA software package. These algorithms are sufficiently reliable and integrated into one software package to enable the fully automated structure determination of proteins starting from NMR spectra without manual interventions or corrections at intermediate steps, with an accuracy of 1-2 Å backbone RMSD in comparison with manually solved reference structures. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Automating lexical cross-mapping of ICNP to SNOMED CT.

    PubMed

    Kim, Tae Youn

    2016-01-01

    The purpose of this study was to examine the feasibility of automating lexical cross-mapping of a logic-based nursing terminology (ICNP) to SNOMED CT using the Unified Medical Language System (UMLS) maintained by the U.S. National Library of Medicine. A two-stage approach included pattern identification, and application and evaluation of an automated term matching procedure. The performance of the automated procedure was evaluated using a test set against a gold standard (i.e. a concept equivalency table) created independently by terminology experts. There were lexical similarities between ICNP diagnostic concepts and SNOMED CT. The automated term matching procedure was reliable, with a recall of 65%, a precision of 79%, an accuracy of 82%, an F-measure of 0.71, and an area under the receiver operating characteristic (ROC) curve of 0.78 (95% CI 0.73-0.83). When the automated procedure was not able to retrieve lexically matched concepts, it was also unlikely for terminology experts to identify a matched SNOMED CT concept. Although further research is warranted to enhance the automated matching procedure, the combination of cross-maps from UMLS and the automated procedure is useful for generating candidate mappings and thus assisting the ongoing maintenance of mappings, which is a significant burden to terminology developers.
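
    The evaluation metrics quoted above (recall, precision, accuracy, F-measure, ROC AUC) can be reproduced on any labelled matching result with scikit-learn, as in the sketch below. The gold-standard labels and matcher scores here are simulated, not the study's data.

```python
import numpy as np
from sklearn.metrics import (precision_score, recall_score, accuracy_score,
                             f1_score, roc_auc_score)

# Hypothetical evaluation of an automated term-matching procedure against a
# gold-standard concept equivalency table: y_true = expert judgement,
# y_score = matcher's lexical-similarity score, y_pred = thresholded decision.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, 200), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

print("recall   :", recall_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("accuracy :", accuracy_score(y_true, y_pred))
print("F-measure:", f1_score(y_true, y_pred))
print("ROC AUC  :", roc_auc_score(y_true, y_score))
```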

  8. The BAARA (Biological AutomAted RAdiotracking) System: A New Approach in Ecological Field Studies

    PubMed Central

    Řeřucha, Šimon; Bartonička, Tomáš; Jedlička, Petr; Čížek, Martin; Hlouša, Ondřej; Lučan, Radek; Horáček, Ivan

    2015-01-01

    Radiotracking is an important and often the only possible method to explore specific habits and the behaviour of animals, but it has proven to be very demanding and time-consuming, especially when frequent positioning of a large group is required. Our aim was to address this issue by making the process partially automated, to mitigate the demands and related costs. This paper presents a novel automated tracking system that consists of a network of automated tracking stations deployed within the target area. Each station reads the signals from telemetry transmitters, estimates the bearing and distance of the tagged animals and records their position. The station is capable of tracking a theoretically unlimited number of transmitters on different frequency channels with the period of 5–15 seconds per single channel. An ordinary transmitter that fits within the supported frequency band might be used with BAARA (Biological AutomAted RAdiotracking); an extra option is the use of a custom-programmable transmitter with configurable operational parameters, such as the precise frequency channel or the transmission parameters. This new approach to a tracking system was tested for its applicability in a series of field and laboratory tests. BAARA has been tested within fieldwork explorations of Rousettus aegyptiacus during field trips to Dakhla oasis in Egypt. The results illustrate the novel perspective which automated radiotracking opens for the study of spatial behaviour, particularly in addressing topics in the domain of population ecology. PMID:25714910

  9. Automated spectral classification and the GAIA project

    NASA Technical Reports Server (NTRS)

    Lasala, Jerry; Kurtz, Michael J.

    1995-01-01

    Two-dimensional spectral types for each of the stars observed in the Global Astrometric Interferometer for Astrophysics (GAIA) mission would provide additional information for galactic structure and stellar evolution studies, as well as help in the identification of unusual objects and populations. The classification of the large quantity of spectra generated requires that automated techniques be implemented. Approaches for automatic classification are reviewed, and a metric-distance method is discussed. In tests, the metric-distance method produced spectral types with mean errors comparable to those of human classifiers working at similar resolution. Data and equipment requirements for an automated classification survey are discussed. A program of auxiliary observations is proposed to yield spectral types and radial velocities for the GAIA-observed stars.
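
    A metric-distance classifier of the kind mentioned above can be sketched in a few lines: an observed spectrum is assigned the type of the template at the smallest distance. The templates, wavelength grid, and RMS distance metric below are illustrative assumptions, not the survey's actual classification setup.

```python
import numpy as np

def classify_spectrum(spectrum, templates):
    """Assign the spectral type of the nearest template (minimum metric distance)."""
    distances = {stype: np.sqrt(np.mean((spectrum - tmpl) ** 2))
                 for stype, tmpl in templates.items()}
    return min(distances, key=distances.get), distances

# Hypothetical low-resolution templates for three spectral types.
wavelength = np.linspace(400, 700, 50)
templates = {
    "A": np.exp(-((wavelength - 450) / 80) ** 2),
    "G": np.exp(-((wavelength - 550) / 100) ** 2),
    "M": np.exp(-((wavelength - 650) / 120) ** 2),
}
observed = templates["G"] + np.random.default_rng(0).normal(0, 0.02, 50)
best, dists = classify_spectrum(observed, templates)
print("classified as", best)  # -> "G" for this synthetic spectrum
```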

  10. LipidMiner: A Software for Automated Identification and Quantification of Lipids from Multiple Liquid Chromatography-Mass Spectrometry Data Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Da; Zhang, Qibin; Gao, Xiaoli

    2014-04-30

    We have developed a tool for automated, high-throughput analysis of LC-MS/MS data files, which greatly simplifies LC-MS based lipidomics analysis. Our results showed that LipidMiner is accurate and comprehensive in identification and quantification of lipid molecular species. In addition, the workflow implemented in LipidMiner is not limited to identification and quantification of lipids. If a suitable metabolite library is implemented in the library matching module, LipidMiner could be reconfigured as a tool for general metabolomics data analysis. It is of note that LipidMiner currently is limited to singly charged ions, although it is adequate for the purpose of lipidomics since lipids are rarely multiply charged [14], even for the polyphosphoinositides. LipidMiner also only processes file formats generated from mass spectrometers from Thermo, i.e. the .RAW format. In the future, we are planning to accommodate file formats generated by mass spectrometers from other predominant instrument vendors to make this tool more universal.

  11. Machine Learning Approach to Automated Quality Identification of Human Induced Pluripotent Stem Cell Colony Images.

    PubMed

    Joutsijoki, Henry; Haponen, Markus; Rasku, Jyrki; Aalto-Setälä, Katriina; Juhola, Martti

    2016-01-01

    The focus of this research is on automated identification of the quality of human induced pluripotent stem cell (iPSC) colony images. iPS cell technology is a contemporary method by which the patient's cells are reprogrammed back to stem cells and are differentiated into any cell type wanted. iPS cell technology will be used in the future for patient-specific drug screening, disease modeling, and tissue repair, for instance. However, there are technical challenges before iPS cell technology can be used in practice, and one of them is quality control of growing iPSC colonies, which is currently done manually but is an unfeasible solution in large-scale cultures. The monitoring problem reduces to an image analysis and classification problem. In this paper, we tackle this problem using machine learning methods such as multiclass Support Vector Machines and several baseline methods together with Scale-Invariant Feature Transform (SIFT) based features. We perform over 80 test arrangements and do a thorough parameter value search. The best accuracy (62.4%) for classification was obtained by using a k-NN classifier, showing improved accuracy compared to earlier studies.
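
    A minimal sketch of the classification comparison is shown below: colony-level feature vectors (here simulated histograms standing in for bag-of-SIFT-descriptor features) are classified with a multiclass RBF SVM and a k-NN baseline under cross-validation. The feature construction, class labels, and data are hypothetical; the paper's actual SIFT pipeline and parameter search are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Simulated 32-bin feature histograms for colonies labelled
# 0 = bad, 1 = semigood, 2 = good quality (illustrative classes).
rng = np.random.default_rng(0)
X = np.vstack([rng.dirichlet(np.ones(32) * a, 100) for a in (1.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 100)

for name, clf in [("multiclass SVM", SVC(kernel="rbf", C=10, gamma="scale")),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```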

  12. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Imamura, M. S.; Moser, R. L.; Veatch, M.

    1983-01-01

    Generic power-system elements and their potential faults are identified. Automation functions and their resulting benefits are defined, and automation functions are partitioned between the power subsystem, the central spacecraft computer, and ground flight-support personnel. All automation activities were categorized as data handling, monitoring, routine control, fault handling, planning and operations, or anomaly handling. Incorporation of all these classes of tasks, except for anomaly handling, in power subsystem hardware and software was concluded to be mandatory to meet the design and operational requirements of the space station. The key drivers are long mission lifetime, modular growth, high-performance flexibility, the need to accommodate different electrical user-load equipment, on-orbit assembly/maintenance/servicing, and a potentially large number of power subsystem components. A significant effort in algorithm development and validation is essential to meeting the 1987 technology readiness date for the space station.

  13. WebPrInSeS: automated full-length clone sequence identification and verification using high-throughput sequencing data.

    PubMed

    Massouras, Andreas; Decouttere, Frederik; Hens, Korneel; Deplancke, Bart

    2010-07-01

    High-throughput sequencing (HTS) is revolutionizing our ability to obtain cheap, fast and reliable sequence information. Many experimental approaches are expected to benefit from the incorporation of such sequencing features in their pipeline. Consequently, software tools that facilitate such an incorporation should be of great interest. In this context, we developed WebPrInSeS, a web server tool allowing automated full-length clone sequence identification and verification using HTS data. WebPrInSeS encompasses two separate software applications. The first is WebPrInSeS-C which performs automated sequence verification of user-defined open-reading frame (ORF) clone libraries. The second is WebPrInSeS-E, which identifies positive hits in cDNA or ORF-based library screening experiments such as yeast one- or two-hybrid assays. Both tools perform de novo assembly using HTS data from any of the three major sequencing platforms. Thus, WebPrInSeS provides a highly integrated, cost-effective and efficient way to sequence-verify or identify clones of interest. WebPrInSeS is available at http://webprinses.epfl.ch/ and is open to all users.

  14. WebPrInSeS: automated full-length clone sequence identification and verification using high-throughput sequencing data

    PubMed Central

    Massouras, Andreas; Decouttere, Frederik; Hens, Korneel; Deplancke, Bart

    2010-01-01

    High-throughput sequencing (HTS) is revolutionizing our ability to obtain cheap, fast and reliable sequence information. Many experimental approaches are expected to benefit from the incorporation of such sequencing features in their pipeline. Consequently, software tools that facilitate such an incorporation should be of great interest. In this context, we developed WebPrInSeS, a web server tool allowing automated full-length clone sequence identification and verification using HTS data. WebPrInSeS encompasses two separate software applications. The first is WebPrInSeS-C which performs automated sequence verification of user-defined open-reading frame (ORF) clone libraries. The second is WebPrInSeS-E, which identifies positive hits in cDNA or ORF-based library screening experiments such as yeast one- or two-hybrid assays. Both tools perform de novo assembly using HTS data from any of the three major sequencing platforms. Thus, WebPrInSeS provides a highly integrated, cost-effective and efficient way to sequence-verify or identify clones of interest. WebPrInSeS is available at http://webprinses.epfl.ch/ and is open to all users. PMID:20501601

  15. Using multiclass classification to automate the identification of patient safety incident reports by type and severity.

    PubMed

    Wang, Ying; Coiera, Enrico; Runciman, William; Magrabi, Farah

    2017-06-12

    Approximately 10% of admissions to acute-care hospitals are associated with an adverse event. Analysis of incident reports helps to understand how and why incidents occur and can inform policy and practice for safer care. Unfortunately, our capacity to monitor and respond to incident reports in a timely manner is limited by the sheer volume of data collected. In this study, we aim to evaluate the feasibility of using multiclass classification to automate the identification of patient safety incidents in hospitals. Text-based classifiers were applied to identify 10 incident types and 4 severity levels. Using the one-versus-one (OvsO) and one-versus-all (OvsA) ensemble strategies, we evaluated regularized logistic regression, linear support vector machine (SVM) and SVM with a radial-basis function (RBF) kernel. Classifiers were trained and tested with "balanced" datasets (n_Type = 2860, n_SeverityLevel = 1160) from a state-wide incident reporting system. Testing was also undertaken with imbalanced "stratified" datasets (n_Type = 6000, n_SeverityLevel = 5950) from the state-wide system and an independent hospital reporting system. Classifier performance was evaluated using a confusion matrix, as well as F-score, precision and recall. The most effective combination was an OvsO ensemble of binary SVM RBF classifiers with binary count feature extraction. For incident type, classifiers performed well on balanced and stratified datasets (F-score: 78.3% and 73.9%), but were worse on independent datasets (68.5%). Reports about falls, medications, pressure injury, aggression and blood products were identified with high recall and precision. "Documentation" was the hardest type to identify. For severity level, the F-score for severity assessment code (SAC) 1 (extreme risk) was 87.3% and 64% for SAC4 (low risk) on balanced data. With stratified data, high recall was achieved for SAC1 (82.8-84%) but precision was poor (6.8-11.2%). High risk incidents (SAC2) were confused
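
    A minimal sketch of the kind of pipeline described above (binary term-count features feeding a one-versus-one ensemble of RBF-kernel SVMs), assuming scikit-learn; it is not the authors' code, and the report texts and labels below are invented placeholders.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import SVC

    # Hypothetical toy reports; the study used thousands of labelled incident reports.
    reports = [
        "patient fell out of bed during transfer",
        "wrong medication dose administered to patient",
        "stage two pressure injury noted on admission",
        "patient became aggressive towards staff",
    ]
    labels = ["falls", "medications", "pressure injury", "aggression"]

    # binary=True yields binary count (term presence/absence) features.
    vectorizer = CountVectorizer(binary=True)
    X = vectorizer.fit_transform(reports)

    # scikit-learn's SVC handles multiclass problems with a one-versus-one
    # ensemble of binary SVMs; here each binary SVM uses an RBF kernel.
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X, labels)

    print(clf.predict(vectorizer.transform(["patient fell on a wet floor"])))
    ```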

  16. Automated identification of complementarity determining regions (CDRs) reveals peculiar characteristics of CDRs and B cell epitopes.

    PubMed

    Ofran, Yanay; Schlessinger, Avner; Rost, Burkhard

    2008-11-01

    Exact identification of complementarity determining regions (CDRs) is crucial for understanding and manipulating antigenic interactions. One way to do this is by marking residues on the antibody that interact with B cell epitopes on the antigen. This, of course, requires identification of B cell epitopes, which could be done by marking residues on the antigen that bind to CDRs, thus requiring identification of CDRs. To circumvent this vicious circle, existing tools for identifying CDRs are based on sequence analysis or general biophysical principles. Often, these tools, which are based on partial data, fail to agree on the boundaries of the CDRs. Herein we present an automated procedure for identifying CDRs and B cell epitopes using consensus structural regions that interact with the antigens in all known antibody-protein complexes. Consequently, we provide the first comprehensive analysis of all CDR-epitope complexes of known three-dimensional structure. The CDRs we identify only partially overlap with the regions suggested by existing methods. We found that the general physicochemical properties of both CDRs and B cell epitopes are rather peculiar. In particular, only four amino acids account for most of the sequence of CDRs, and several types of amino acids almost never appear in them. The secondary structure content and the conservation of B cell epitopes are found to be different than previously thought. These characteristics of CDRs and epitopes may be instrumental in choosing which residues to mutate in experimental search for epitopes. They may also assist in computational design of antibodies and in predicting B cell epitopes.

  17. Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study

    NASA Astrophysics Data System (ADS)

    Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.

    2017-12-01

    Diatom identification and enumeration by high resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise- and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings to correctly segment 67% of the total possible diatom valves and fragments from broad fields of view. (183 light microscope images containing 255 diatom particles were examined. Of the 255 diatom particles present, 216 diatom valves and fragments of valves were processed, with 170 properly analyzed and focused upon by the software.) Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, thus highlighting that the software has an approximate five-fold efficiency advantage in particle analysis time. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of analyses. Benefits and limitations of the approach are presented to allow for development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.
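
    For illustration only, a rough sketch of threshold-based particle segmentation and counting of the sort tuned in this study, using scikit-image on a synthetic stand-in image; VisualSpreadsheet itself is commercial software and its settings are not reproduced here.

    ```python
    import numpy as np
    from skimage import filters, measure, morphology

    # Synthetic stand-in for a grayscale light-microscope field (dark particles
    # on a bright background); in practice this would be a loaded image.
    rng = np.random.default_rng(0)
    image = 0.9 + 0.02 * rng.standard_normal((200, 200))
    image[40:60, 30:80] = 0.30     # a fake elongated diatom valve
    image[120:140, 120:150] = 0.35 # a fake valve fragment

    thresh = filters.threshold_otsu(image)
    binary = image < thresh                        # particles darker than background
    binary = morphology.remove_small_objects(binary, min_size=64)
    labeled = measure.label(binary)
    print("candidate diatom particles detected:", labeled.max())
    ```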

  18. Chattanooga Electric Power Board Case Study Distribution Automation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glass, Jim; Melin, Alexander M.; Starke, Michael R.

    In 2009, the U.S. Department of Energy under the American Recovery and Reinvestment Act (ARRA) awarded a grant to the Chattanooga, Tennessee, Electric Power Board (EPB) as part of the Smart Grid Investment Grant Program. The grant had the objective “to accelerate the transformation of the nation’s electric grid by deploying smart grid technologies.” This funding award enabled EPB to expedite the original smart grid implementation schedule from an estimated 10-12 years to 2.5 years. With this funding, EPB invested heavily in distribution automation technologies, including installing over 1,200 automated circuit switches and sensors on 171 circuits. For utilities considering a commitment to distribution automation, there are underlying questions such as the following: “What is the value?” and “What are the costs?” This case study attempts to answer these questions. The primary benefit of distribution automation is increased reliability, that is, reduced power outage duration and frequency. Power outages directly impact customer economics by interfering with business functions. In the past, this economic driver has been difficult to evaluate effectively. However, as this case study demonstrates, tools and analysis techniques are now available. In this case study, the customer costs associated with power outages before and after the implementation of distribution automation are compared. Two example evaluations are performed to demonstrate the benefits: 1) a savings baseline for customers under normal operations and 2) customer savings for a single severe weather event. Cost calculations for customer power outages are performed using the US Department of Energy (DOE) Interruption Cost Estimate (ICE) calculator. This tool uses standard metrics associated with outages and the customers to calculate cost impact. The analysis shows that EPB customers have seen significant reliability improvements from the implementation of distribution automation.

  19. Computerised electronic foetal heart rate monitoring in labour: automated contraction identification.

    PubMed

    Georgieva, A; Payne, S J; Redman, C W G

    2009-12-01

    The foetal heart rate (FHR) response to uterine contractions is crucial to detect foetal distress by electronic FHR monitoring during labour. We are developing a new automated system (OxSys) for decision support in labour, using the Oxford database of intrapartum FHR records. We describe here a novel technique for automated detection of uterine contractions. In addition, we present a comparison of the new method with four other computerised approaches. During training, OxSys achieved sensitivity above 95% and positive predictive value (PPV) of up to 90% for traces of good quality. During testing, OxSys achieved sensitivity = 87% and PPV = 75%. For comparison, a second clinical expert obtained sensitivity = 93% and PPV = 80%, and all other computerised approaches achieved lower values. It was concluded that the proposed method can be employed with confidence in our study on foetal health assessment in labour and in future OxSys development.
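
    As a hedged illustration (not the OxSys algorithm), contraction-like events can be detected as smoothed peaks in a tocogram-style signal and scored against reference annotations for sensitivity and PPV; the synthetic signal, window lengths and matching tolerance below are assumptions.

    ```python
    import numpy as np
    from scipy.signal import find_peaks
    from scipy.ndimage import uniform_filter1d

    fs = 4.0                                      # Hz, hypothetical sampling rate
    t = np.arange(0, 1800, 1 / fs)                # 30 minutes of tocogram-like signal
    rng = np.random.default_rng(0)
    toco = 10 + 5 * rng.standard_normal(t.size)   # noisy baseline tone
    for c in range(120, 1800, 180):               # synthetic contractions every 3 min
        toco += 40 * np.exp(-((t - c) ** 2) / (2 * 20 ** 2))

    smoothed = uniform_filter1d(toco, size=int(30 * fs))        # 30 s moving average
    peaks, _ = find_peaks(smoothed, prominence=15, distance=int(120 * fs))
    detected = t[peaks]

    # Score detections against the (here known) reference contraction times.
    reference = np.arange(120, 1800, 180)
    matched_ref = sum(any(abs(d - r) < 30 for d in detected) for r in reference)
    matched_det = sum(any(abs(d - r) < 30 for r in reference) for d in detected)
    print(f"sensitivity={matched_ref / len(reference):.2f}, "
          f"PPV={matched_det / max(len(detected), 1):.2f}")
    ```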

  20. Nuclear Magnetic Resonance Spectroscopy-Based Identification of Yeast.

    PubMed

    Himmelreich, Uwe; Sorrell, Tania C; Daniel, Heide-Marie

    2017-01-01

    Rapid and robust high-throughput identification of environmental, industrial, or clinical yeast isolates is important whenever relatively large numbers of samples need to be processed in a cost-efficient way. Nuclear magnetic resonance (NMR) spectroscopy generates complex data based on metabolite profiles, chemical composition and possibly medium consumption, which can be used not only for the assessment of metabolic pathways but also for accurate identification of yeast down to the subspecies level. Initial results on NMR-based yeast identification were comparable with conventional and DNA-based identification. Potential advantages of NMR spectroscopy in mycological laboratories include not only accurate identification but also the potential for automated sample delivery, automated analysis using computer-based methods, rapid turnaround time, high throughput, and low running costs. We describe here the sample preparation, data acquisition and analysis for NMR-based yeast identification. In addition, a roadmap for the development of classification strategies is given that will result in the acquisition of a database and analysis algorithms for yeast identification in different environments.

  1. Sink detection on tilted terrain for automated identification of glacial cirques

    NASA Astrophysics Data System (ADS)

    Prasicek, Günther; Robl, Jörg; Lang, Andreas

    2016-04-01

    Glacial cirques are morphologically distinct but complex landforms and represent a vital part of high mountain topography. Their distribution, elevation and relief are expected to hold information on (1) the extent of glacial occupation, (2) the mechanism of glacial cirque erosion, and (3) how glacial processes, in concert with periglacial processes, can limit peak altitude and mountain range height. While easily detectable to the expert's eye both in nature and on various representations of topography, their complicated nature makes them a nemesis for computer algorithms. Consequently, manual mapping of glacial cirques is commonplace in many mountain landscapes worldwide, but consistent datasets of cirque distribution and objectively mapped cirques and their morphometric attributes are lacking. Among the biggest problems for algorithm development are the complexity in shape and the great variability of cirque size. For example, glacial cirques can be rather circular or longitudinal in extent, exist as individual or composite landforms, show prominent topographic depressions or be entirely filled with water or sediment. For these reasons, attributes like circularity, size, drainage area and topology of landform elements (e.g. a flat floor surrounded by steep walls) have only limited potential for automated cirque detection. Here we present a novel geomorphometric method for automated identification of glacial cirques on digital elevation models that exploits their genetic bowl-like shape. First, we differentiate between glacial and fluvial terrain employing an algorithm based on a moving window approach and multi-scale curvature, which is also capable of fitting the analysis window to valley width. We then fit a plane to the valley stretch clipped by the analysis window and rotate the terrain around the center cell until the plane is level. Doing so, we produce sinks of considerable size if the clipped terrain represents a cirque, while no or only very small sinks
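
    A simplified sketch of the leveling idea, assuming NumPy: fit a plane to a clipped DEM window by least squares, remove it, and measure whether the residual surface contains a closed depression. The study rotates the terrain around the center cell rather than detrending it, so this is only an approximation, and the bowl-shaped test surface is synthetic.

    ```python
    import numpy as np

    def detrend_window(dem):
        """Fit z = a*x + b*y + c to the window and return the residual surface."""
        ny, nx = dem.shape
        x, y = np.meshgrid(np.arange(nx), np.arange(ny))
        A = np.column_stack([x.ravel(), y.ravel(), np.ones(dem.size)])
        coeffs, *_ = np.linalg.lstsq(A, dem.ravel(), rcond=None)
        plane = (A @ coeffs).reshape(dem.shape)
        return dem - plane

    def sink_volume(residual):
        """Crude sink measure: how far interior cells lie below the window rim."""
        rim = min(residual[0, :].min(), residual[-1, :].min(),
                  residual[:, 0].min(), residual[:, -1].min())
        depth = np.clip(rim - residual[1:-1, 1:-1], 0, None)
        return depth.sum()

    # Hypothetical bowl-shaped window on a tilted slope (cirque-like terrain).
    yy, xx = np.mgrid[0:50, 0:50]
    dem = 0.5 * xx + 0.2 * yy - 15 * np.exp(-((xx - 25) ** 2 + (yy - 25) ** 2) / 200)
    print("sink volume proxy:", sink_volume(detrend_window(dem)))
    ```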

  2. False discovery rates in spectral identification.

    PubMed

    Jeong, Kyowon; Kim, Sangtae; Bandeira, Nuno

    2012-01-01

    Automated database search engines are fundamental tools of high-throughput proteomics, enabling daily identification of hundreds of thousands of peptides and proteins from tandem mass spectrometry (MS/MS) data. Nevertheless, this automation also makes it humanly impossible to manually validate the vast lists of resulting identifications from such high-throughput searches. This challenge is usually addressed by using a Target-Decoy Approach (TDA) to impose an empirical False Discovery Rate (FDR) at a pre-determined threshold x%, with the expectation that at most x% of the returned identifications would be false positives. But despite the fundamental importance of FDR estimates in ensuring the utility of large lists of identifications, there is surprisingly little consensus on exactly how TDA should be applied to minimize the chances of biased FDR estimates. In fact, since less rigorous TDA/FDR estimates tend to result in more identifications (at higher 'true' FDR), there is often little incentive to enforce strict TDA/FDR procedures in studies where the major metric of success is the size of the list of identifications and there are no follow-up studies imposing hard cost constraints on the number of reported false positives. Here we address the problem of the accuracy of TDA estimates of empirical FDR. Using MS/MS spectra from samples where we were able to define a factual estimator of 'true' FDR, we evaluate several popular variants of the TDA procedure in a variety of database search contexts. We show that the fraction of false identifications can sometimes be over 10× higher than reported and may be unavoidably high for certain types of searches. We also report that the two-pass search strategy appears to be the most promising database search strategy. While unavoidably constrained by the particulars of any specific evaluation dataset, our observations support a series of recommendations towards maximizing the number of resulting
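
    A minimal sketch of the simplest target-decoy FDR estimate discussed above (the paper compares several more careful TDA variants): sort peptide-spectrum matches by score and report the decoy-to-target ratio above each threshold. The score distributions below are synthetic assumptions.

    ```python
    import numpy as np

    def tda_fdr_curve(scores, is_decoy):
        """Return score thresholds (best first) and the estimated FDR at each one."""
        order = np.argsort(scores)[::-1]           # best score first
        decoys = np.cumsum(np.asarray(is_decoy)[order])
        targets = np.arange(1, len(scores) + 1) - decoys
        fdr = decoys / np.maximum(targets, 1)      # #decoys approximates #false targets
        return np.asarray(scores)[order], fdr

    # Hypothetical PSM scores: decoys drawn from a lower-scoring distribution.
    rng = np.random.default_rng(0)
    scores = np.concatenate([rng.normal(10, 3, 1000), rng.normal(6, 3, 1000)])
    labels = np.concatenate([np.zeros(1000, bool), np.ones(1000, bool)])

    thresholds, fdr = tda_fdr_curve(scores, labels)
    idx = np.nonzero(fdr <= 0.01)[0]               # most permissive ~1% FDR cutoff
    if idx.size:
        print("score cutoff at ~1% estimated FDR:", round(float(thresholds[idx[-1]]), 2))
    ```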

  3. EOID Evaluation and Automated Target Recognition

    DTIC Science & Technology

    2002-09-30

    Electro-Optic IDentification (EOID) sensors into shallow water littoral zone minehunting systems on towed, remotely operated, and autonomous platforms. These downlooking laser-based sensors operate at unparalleled standoff ranges in visible wavelengths to image and identify mine-like objects (MLOs) that have been detected through other sensing means such as magnetic induction and various modes of acoustic imaging. Our long term goal is to provide a robust automated target cueing and identification capability for use with these imaging sensors. It is also our goal to assist

  4. EOID Evaluation and Automated Target Recognition

    DTIC Science & Technology

    2001-09-30

    Electro-Optic IDentification (EOID) sensors into shallow water littoral zone minehunting systems on towed, remotely operated, and autonomous platforms. These downlooking laser-based sensors operate at unparalleled standoff ranges in visible wavelengths to image and identify mine-like objects that have been detected through other sensing means such as magnetic induction and various modes of acoustic imaging. Our long term goal is to provide a robust automated target cueing and identification capability for use with these imaging sensors. It is also our goal to assist the

  5. Automation or De-automation

    NASA Astrophysics Data System (ADS)

    Gorlach, Igor; Wessel, Oliver

    2008-09-01

    In the global automotive industry, for decades, vehicle manufacturers have continually increased the level of automation of production systems in order to be competitive. However, there is a new trend to decrease the level of automation, especially in final car assembly, for reasons of economy and flexibility. In this research, the final car assembly lines at three production sites of Volkswagen are analysed in order to determine the best level of automation for each, in terms of manufacturing costs, productivity, quality and flexibility. The case study is based on the methodology proposed by the Fraunhofer Institute. The results of the analysis indicate that fully automated assembly systems are not necessarily the best option in terms of cost, productivity and quality combined, which is attributed to high complexity of final car assembly systems; some de-automation is therefore recommended. On the other hand, the analysis shows that low automation can result in poor product quality due to reasons related to plant location, such as inadequate workers' skills, motivation, etc. Hence, the automation strategy should be formulated on the basis of analysis of all relevant aspects of the manufacturing process, such as costs, quality, productivity and flexibility in relation to the local context. A more balanced combination of automated and manual assembly operations provides better utilisation of equipment, reduces production costs and improves throughput.

  6. Improving the driver-automation interaction: an approach using automation uncertainty.

    PubMed

    Beller, Johannes; Heesen, Matthias; Vollrath, Mark

    2013-12-01

    The aim of this study was to evaluate whether communicating automation uncertainty improves the driver-automation interaction. A false system understanding of infallibility may provoke automation misuse and can lead to severe consequences in case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as was shown by previous studies, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap. We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely. Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance. The presentation of automation uncertainty through a symbol improves overall driver-automation cooperation. Most automated systems in driving could benefit from displaying reliability information. This display might improve the acceptance of fallible systems and further enhance driver-automation cooperation.

  7. Supervised learning technique for the automated identification of white matter hyperintensities in traumatic brain injury.

    PubMed

    Stone, James R; Wilde, Elisabeth A; Taylor, Brian A; Tate, David F; Levin, Harvey; Bigler, Erin D; Scheibel, Randall S; Newsome, Mary R; Mayer, Andrew R; Abildskov, Tracy; Black, Garrett M; Lennon, Michael J; York, Gerald E; Agarwal, Rajan; DeVillasante, Jorge; Ritter, John L; Walker, Peter B; Ahlers, Stephen T; Tustison, Nicholas J

    2016-01-01

    White matter hyperintensities (WMHs) are foci of abnormal signal intensity in white matter regions seen with magnetic resonance imaging (MRI). WMHs are associated with normal ageing and have shown prognostic value in neurological conditions such as traumatic brain injury (TBI). The impracticality of manually quantifying these lesions limits their clinical utility and motivates the utilization of machine learning techniques for automated segmentation workflows. This study develops a concatenated random forest framework with image features for segmenting WMHs in a TBI cohort. The framework is built upon the Advanced Normalization Tools (ANTs) and ANTsR toolkits. MR (3D FLAIR, T2- and T1-weighted) images from 24 service members and veterans scanned in the Chronic Effects of Neurotrauma Consortium's (CENC) observational study were acquired. Manual annotations were employed for both training and evaluation using a leave-one-out strategy. Performance measures include sensitivity, positive predictive value, F1 score and relative volume difference. Final average results were: sensitivity = 0.68 ± 0.38, positive predictive value = 0.51 ± 0.40, F1 = 0.52 ± 0.36, relative volume difference = 43 ± 26%. In addition, three lesion size ranges are selected to illustrate the variation in performance with lesion size. Paired with correlative outcome data, supervised learning methods may allow for identification of imaging features predictive of diagnosis and prognosis in individual TBI patients.
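
    For illustration, the overlap metrics reported above (sensitivity, positive predictive value, the F1/Dice score and relative volume difference) can be computed voxel-wise from a predicted lesion mask and a manual annotation; the masks below are random placeholders, not CENC data.

    ```python
    import numpy as np

    def wmh_metrics(pred, truth):
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.logical_and(pred, truth).sum()
        sensitivity = tp / max(truth.sum(), 1)
        ppv = tp / max(pred.sum(), 1)
        dice = 2 * tp / max(pred.sum() + truth.sum(), 1)        # F1/Dice overlap
        rvd = abs(int(pred.sum()) - int(truth.sum())) / max(truth.sum(), 1)
        return sensitivity, ppv, dice, rvd

    # Hypothetical 3D masks for demonstration only.
    rng = np.random.default_rng(1)
    truth = rng.random((64, 64, 32)) > 0.98
    pred = np.logical_and(truth, rng.random(truth.shape) > 0.3)  # misses ~30% of voxels
    print([round(float(m), 3) for m in wmh_metrics(pred, truth)])
    ```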

  8. Final Progress Report: Isotope Identification Algorithm for Rapid and Accurate Determination of Radioisotopes Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rawool-Sullivan, Mohini; Bounds, John Alan; Brumby, Steven P.

    2012-04-30

    This is the final report of the project titled 'Isotope Identification Algorithm for Rapid and Accurate Determination of Radioisotopes,' PMIS project number LA10-HUMANID-PD03. It summarizes work performed over the FY10 time period. The goal of the work was to demonstrate principles of emulating a human analysis approach towards the data collected using radiation isotope identification devices (RIIDs). Human analysts begin analyzing a spectrum based on features in the spectrum - lines and shapes that are present in a given spectrum. The proposed work was to carry out a feasibility study that will pick out all gamma ray peaks and other features such as Compton edges, bremsstrahlung, presence/absence of shielding, and presence of neutrons and escape peaks. Ultimately, success of this feasibility study will allow us to collectively explain identified features and reconstruct a realistic scenario that produced a given spectrum. We wanted to develop and demonstrate machine learning algorithms that will qualitatively enhance the automated identification capabilities of portable radiological sensors currently being used in the field.

  9. Automated software-guided identification of new buspirone metabolites using capillary LC coupled to ion trap and TOF mass spectrometry.

    PubMed

    Fandiño, Anabel S; Nägele, Edgar; Perkins, Patrick D

    2006-02-01

    The identification and structure elucidation of drug metabolites is one of the main objectives in in vitro ADME studies. Typical modern methodologies involve incubation of the drug with subcellular fractions to simulate metabolism followed by LC-MS/MS or LC-MS(n) analysis and chemometric approaches for the extraction of the metabolites. The objective of this work was the software-guided identification and structure elucidation of major and minor buspirone metabolites using capillary LC as a separation technique and ion trap MS(n) as well as electrospray ionization orthogonal acceleration time-of-flight (ESI oaTOF) mass spectrometry as detection techniques. Buspirone mainly underwent hydroxylation, dihydroxylation and N-oxidation in S9 fractions in the presence of phase I co-factors and the corresponding glucuronides were detected in the presence of phase II co-factors. The use of automated ion trap MS/MS data-dependent acquisition combined with a chemometric tool allowed the detection of five small chromatographic peaks of unexpected metabolites that co-eluted with the larger chromatographic peaks of expected metabolites. Using automatic assignment of ion trap MS/MS fragments as well as accurate mass measurements from an ESI oaTOF mass spectrometer, possible structures were postulated for these metabolites that were previously not reported in the literature. Copyright 2006 John Wiley & Sons, Ltd.

  10. [Isolation and identification methods of enterobacteria group and its technological advancement].

    PubMed

    Furuta, Itaru

    2007-08-01

    In the last half-century, isolation and identification methods for enterobacteria groups have improved markedly through technological advancement. Clinical microbiology testing has changed over time from tube methods to commercial identification kits and automated identification. Tube methods are the original approach to identifying enterobacteria groups and remain essential for understanding bacterial fermentation and biochemical principles. In this paper, traditional tube tests are discussed, such as carbohydrate utilization, indole, methyl red, citrate, and urease tests. Commercial identification kits and automated, computer-based instruments are also discussed as current methods that provide rapidity and accuracy. Nonculture techniques, such as nucleic acid typing by PCR and immunochemical methods using monoclonal antibodies, can be developed further.

  11. Automated method for identification and artery-venous classification of vessel trees in retinal vessel networks.

    PubMed

    Joshi, Vinayak S; Reinhardt, Joseph M; Garvin, Mona K; Abramoff, Michael D

    2014-01-01

    The separation of the retinal vessel network into distinct arterial and venous vessel trees is of high interest. We propose an automated method for identification and separation of retinal vessel trees in a retinal color image by converting a vessel segmentation image into a vessel segment map and identifying the individual vessel trees by graph search. Orientation, width, and intensity of each vessel segment are utilized to find the optimal graph of vessel segments. The separated vessel trees are labeled as primary vessel or branches. We utilize the separated vessel trees for arterial-venous (AV) classification, based on the color properties of the vessels in each tree graph. We applied our approach to a dataset of 50 fundus images from 50 subjects. The proposed method resulted in an accuracy of 91.44% correctly classified vessel pixels as either artery or vein. The accuracy of correctly classified major vessel segments was 96.42%.
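
    A hedged sketch of the segment-to-tree idea, assuming networkx: link vessel segments whose endpoints nearly coincide and whose orientations agree, take connected components as trees, and label each tree by its mean colour. The segment data, distance and angle thresholds, and the artery/vein colour rule are all invented simplifications, not the published method.

    ```python
    import math
    import networkx as nx

    # (id, endpoint_a, endpoint_b, orientation_deg, mean_red_intensity)
    segments = [
        (0, (10, 10), (40, 40), 45, 0.62),
        (1, (40, 40), (70, 75), 48, 0.60),
        (2, (12, 80), (45, 55), -40, 0.35),
        (3, (45, 55), (80, 30), -38, 0.33),
    ]

    G = nx.Graph()
    G.add_nodes_from(s[0] for s in segments)
    for a in segments:
        for b in segments:
            if a[0] >= b[0]:
                continue
            close = any(math.dist(p, q) < 5 for p in a[1:3] for q in b[1:3])
            aligned = abs(a[3] - b[3]) < 20
            if close and aligned:
                G.add_edge(a[0], b[0])

    for tree in nx.connected_components(G):
        mean_red = sum(segments[i][4] for i in tree) / len(tree)
        label = "artery" if mean_red > 0.5 else "vein"   # toy colour rule
        print(sorted(tree), label)
    ```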

  12. Evaluation of the Biolog automated microbial identification system

    NASA Technical Reports Server (NTRS)

    Klingler, J. M.; Stowe, R. P.; Obenhuber, D. C.; Groves, T. O.; Mishra, S. K.; Pierson, D. L.

    1992-01-01

    Biolog's identification system was used to identify 39 American Type Culture Collection reference taxa and 45 gram-negative isolates from water samples. Of the reference strains, 98% were identified to genus level and 76% to species level within 4 to 24 h. Identification of some authentic strains of Enterobacter, Klebsiella, and Serratia was unreliable. A total of 93% of the water isolates were identified.

  13. Automated Forensic Animal Family Identification by Nested PCR and Melt Curve Analysis on an Off-the-Shelf Thermocycler Augmented with a Centrifugal Microfluidic Disk Segment.

    PubMed

    Keller, Mark; Naue, Jana; Zengerle, Roland; von Stetten, Felix; Schmidt, Ulrike

    2015-01-01

    Nested PCR remains a labor-intensive and error-prone biomolecular analysis. Laboratory workflow automation by precise control of minute liquid volumes in centrifugal microfluidic Lab-on-a-Chip systems holds great potential for such applications. However, the majority of these systems require costly custom-made processing devices. Our idea is to augment a standard laboratory device, here a centrifugal real-time PCR thermocycler, with inbuilt liquid handling capabilities for automation. We have developed a microfluidic disk segment enabling an automated nested real-time PCR assay for identification of common European animal groups, adapted to forensic standards. For the first time, we utilize a novel combination of fluidic elements, including pre-storage of reagents, to automate the assay at constant rotational frequency of an off-the-shelf thermocycler. It provides a universal duplex pre-amplification of short fragments of the mitochondrial 12S rRNA and cytochrome b genes, animal-group-specific main amplifications, and melting curve analysis for differentiation. The system was characterized with respect to assay sensitivity, specificity, risk of cross-contamination, and detection of minor components in mixtures. 92.2% of the performed tests showed fluidically failure-free sample handling and were used for evaluation. Altogether, augmentation of the standard real-time thermocycler with a self-contained centrifugal microfluidic disk segment resulted in an accelerated and automated analysis, reducing hands-on time and circumventing the risk of contamination associated with regular nested PCR protocols.

  14. Automated Forensic Animal Family Identification by Nested PCR and Melt Curve Analysis on an Off-the-Shelf Thermocycler Augmented with a Centrifugal Microfluidic Disk Segment

    PubMed Central

    Zengerle, Roland; von Stetten, Felix; Schmidt, Ulrike

    2015-01-01

    Nested PCR remains a labor-intensive and error-prone biomolecular analysis. Laboratory workflow automation by precise control of minute liquid volumes in centrifugal microfluidic Lab-on-a-Chip systems holds great potential for such applications. However, the majority of these systems require costly custom-made processing devices. Our idea is to augment a standard laboratory device, here a centrifugal real-time PCR thermocycler, with inbuilt liquid handling capabilities for automation. We have developed a microfluidic disk segment enabling an automated nested real-time PCR assay for identification of common European animal groups, adapted to forensic standards. For the first time, we utilize a novel combination of fluidic elements, including pre-storage of reagents, to automate the assay at constant rotational frequency of an off-the-shelf thermocycler. It provides a universal duplex pre-amplification of short fragments of the mitochondrial 12S rRNA and cytochrome b genes, animal-group-specific main amplifications, and melting curve analysis for differentiation. The system was characterized with respect to assay sensitivity, specificity, risk of cross-contamination, and detection of minor components in mixtures. 92.2% of the performed tests showed fluidically failure-free sample handling and were used for evaluation. Altogether, augmentation of the standard real-time thermocycler with a self-contained centrifugal microfluidic disk segment resulted in an accelerated and automated analysis, reducing hands-on time and circumventing the risk of contamination associated with regular nested PCR protocols. PMID:26147196

  15. Automatic Identification of Subtechniques in Skating-Style Roller Skiing Using Inertial Sensors

    PubMed Central

    Sakurai, Yoshihisa; Fujita, Zenya; Ishige, Yusuke

    2016-01-01

    This study aims to develop and validate an automated system for identifying skating-style cross-country subtechniques using inertial sensors. In the first experiment, the performance of a male cross-country skier was used to develop an automated identification system. In the second, eight male and seven female college cross-country skiers participated to validate the developed identification system. Each subject wore inertial sensors on both wrists and both roller skis, and a small video camera on a backpack. All subjects skied through a 3450 m roller ski course using a skating style at their maximum speed. The adopted subtechniques were identified by the automated method based on the data obtained from the sensors, as well as by visual observations from a video recording of the same ski run. The system correctly identified 6418 subtechniques from a total of 6768 cycles, which indicates an accuracy of 94.8%. The precisions of the automatic system for identifying the V1R, V1L, V2R, V2L, V2AR, and V2AL subtechniques were 87.6%, 87.0%, 97.5%, 97.8%, 92.1%, and 92.0%, respectively. Most incorrect identification cases occurred during a subtechnique identification that included a transition and turn event. Identification accuracy can be improved by separately identifying transition and turn events. This system could be used to evaluate each skier’s subtechniques in course conditions. PMID:27049388

  16. Semi-automated identification of leopard frogs

    USGS Publications Warehouse

    Petrovska-Delacrétaz, Dijana; Edwards, Aaron; Chiasson, John; Chollet, Gérard; Pilliod, David S.

    2014-01-01

    Principal component analysis is used to implement a semi-automatic recognition system to identify recaptured northern leopard frogs (Lithobates pipiens). Results of both open set and closed set experiments are given. The presented algorithm is shown to provide accurate identification of 209 individual leopard frogs from a total set of 1386 images.
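
    A minimal sketch of PCA-based individual identification, assuming scikit-learn: project flattened spot-pattern images into a low-dimensional eigenspace and match a query image to the nearest enrolled individual. The image data and dimensions below are random placeholders rather than the frog dataset.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    n_individuals, imgs_per_frog, img_size = 20, 5, 32 * 32
    X = rng.random((n_individuals * imgs_per_frog, img_size))   # flattened images
    y = np.repeat(np.arange(n_individuals), imgs_per_frog)      # frog identities

    pca = PCA(n_components=25).fit(X)                           # eigenspace projection
    matcher = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), y)

    query = rng.random((1, img_size))                           # new capture photo
    print("best-matching individual:", matcher.predict(pca.transform(query))[0])
    ```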

  17. Space station automation study: Automation requirements derived from space manufacturing concepts. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The electroepitaxial process and the Very Large Scale Integration (VLSI) circuits (chips) facilities were chosen because each requires a very high degree of automation, and therefore involves extensive use of teleoperators, robotics, process mechanization, and artificial intelligence. Both cover a raw materials process and a sophisticated multi-step process and are therefore highly representative of the kinds of difficult operation, maintenance, and repair challenges which can be expected for any type of space manufacturing facility. Generic areas were identified which will require significant further study. The initial design will be based on terrestrial state-of-the-art hard automation. One hundred candidate missions were evaluated on the basis of automation potential and availability of meaningful knowledge. The design requirements and unconstrained design concepts developed for the two missions are presented.

  18. FBI fingerprint identification automation study: AIDS 3 evaluation report. Volume 6: Environmental analysis

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.

    1980-01-01

    The results of the analysis of the external environment of the FBI Fingerprint Identification Division are presented. Possible trends in the future environment of the Division that may have an effect on the work load were projected to determine if future work load will lie within the capability range of the proposed new system, AIDS 3. Two working models of the environment were developed, the internal and the external model, and from these scenarios the projection of possible future work load volume and mixture was developed. Possible drivers of work load change were identified and assessed for upper and lower bounds of effects. Data used for the study were derived from historical information, analysis of the current situation, and interviews with various agencies that are users of or stakeholders in the present system.

  19. Intelligent systems approach for automated identification of individual control behavior of a human operator

    NASA Astrophysics Data System (ADS)

    Zaychik, Kirill B.

    Acceptable results have been obtained using conventional techniques to model the generic human operator's control behavior. However, little research has been done in an attempt to identify an individual based on his/her control behavior. The main hypothesis investigated in this dissertation is that different operators exhibit different control behavior when performing a given control task. Furthermore, inter-person differences are manifested in the amplitude and frequency content of the non-linear component of the control behavior. Two enhancements to the existing models of the human operator, which allow personalization of the modeled control behavior, are presented in this dissertation. One of the proposed enhancements accounts for the "testing" control signals, which are introduced by an operator for more accurate control of the system and/or to adjust his/her control strategy. This enhancement uses an Artificial Neural Network (ANN), which can be fine-tuned to model the "testing" control behavior of a given individual. The other model enhancement took the form of an equiripple filter (EF), which conditions the power spectrum of the control signal before it is passed through the plant dynamics block. The filter design technique uses the Parks-McClellan algorithm, which allows parameterization of the desired levels of power at certain frequencies. A novel automated parameter identification technique (APID) was developed to facilitate the identification process of the parameters of the selected models of the human operator. APID utilizes a Genetic Algorithm (GA) based optimization engine called the Bit-climbing Algorithm (BCA). The proposed model enhancements were validated using the experimental data obtained at three different sources: the Manual Control Laboratory software experiments, Unmanned Aerial Vehicle simulation, and NASA Langley Research Center Visual Motion Simulator studies. Validation analysis involves comparison of the actual and simulated control

  20. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation

    PubMed Central

    Beijbom, Oscar; Edmunds, Peter J.; Roelfsema, Chris; Smith, Jennifer; Kline, David I.; Neal, Benjamin P.; Dunlap, Matthew J.; Moriarty, Vincent; Fan, Tung-Yung; Tan, Chih-Jui; Chan, Stephen; Treibitz, Tali; Gamst, Anthony; Mitchell, B. Greg; Kriegman, David

    2015-01-01

    Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time-consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey images captured at four Pacific coral reefs. Inter- and intra-annotator variability among six human experts was quantified and compared to semi- and fully-automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys. PMID:26154157

  1. Recent trends in laboratory automation in the pharmaceutical industry.

    PubMed

    Rutherford, M L; Stinger, T

    2001-05-01

    The impact of robotics and automation on the pharmaceutical industry over the last two decades has been significant. In the last ten years, the emphasis of laboratory automation has shifted from the support of manufactured products and quality control of laboratory applications, to research and development. This shift has been the direct result of an increased emphasis on the identification, development and eventual marketing of innovative new products. In this article, we will briefly identify and discuss some of the current trends in laboratory automation in the pharmaceutical industry as they apply to research and development, including screening, sample management, combinatorial chemistry, ADME/Tox and pharmacokinetics.

  2. Automation in the clinical microbiology laboratory.

    PubMed

    Novak, Susan M; Marlowe, Elizabeth M

    2013-09-01

    Imagine a clinical microbiology laboratory where a patient's specimens are placed on a conveyor belt and sent on an automation line for processing and plating. Technologists need only log onto a computer to visualize the images of a culture and send it to a mass spectrometer for identification. Once a pathogen is identified, the system knows to send the colony for susceptibility testing. This is the future of the clinical microbiology laboratory. This article outlines the operational and staffing challenges facing clinical microbiology laboratories and the evolution of automation that is shaping the way laboratory medicine will be practiced in the future. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. [Standardization of operation monitoring and control of the clinical laboratory automation system].

    PubMed

    Tao, R

    2000-10-01

    Laboratory automation systems appeared in the 1980s and have been introduced to many clinical laboratories since the early 1990s. Meanwhile, it was found that differences in specimen tube dimensions, specimen identification formats, specimen carrier transportation equipment architecture, and electromechanical interfaces between the analyzers and the automation systems were preventing the systems from being introduced more widely. To standardize the different interfaces and reduce the cost of laboratory automation, NCCLS and JCCLS started establishing standards for laboratory automation in 1996 and 1997, respectively. Operation monitoring and control of the laboratory automation system have been included in their activities, resulting in the publication of an NCCLS proposed standard in 1999.

  4. Automated Analysis of Fluorescence Microscopy Images to Identify Protein-Protein Interactions

    DOE PAGES

    Venkatraman, S.; Doktycz, M. J.; Qi, H.; ...

    2006-01-01

    The identification of protein interactions is important for elucidating biological networks. One obstacle in comprehensive interaction studies is the analysis of large datasets, particularly those containing images. Development of an automated system to analyze an image-based protein interaction dataset is needed. Such an analysis system is described here; it automatically extracts features from fluorescence microscopy images obtained from a bacterial protein interaction assay. These features are used to relay quantitative values that aid in the automated scoring of positive interactions. Experimental observations indicate that identifying at least 50% positive cells in an image is sufficient to detect a protein interaction. Based on this criterion, the automated system presents 100% accuracy in detecting positive interactions for a dataset of 16 images. Algorithms were implemented using MATLAB and the software developed is available on request from the authors.
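
    For illustration, a rough sketch of the scoring criterion mentioned above, applied to a synthetic fluorescence field: segment cells, call a cell "positive" above an intensity cut-off, and flag the image as a positive interaction when at least 50% of cells qualify. The published analysis was implemented in MATLAB; the Python/scikit-image version below and its thresholds are placeholders, not the authors' values.

    ```python
    import numpy as np
    from skimage import filters, measure

    # Synthetic stand-in for a fluorescence field: bright "cells" on a dark background.
    rng = np.random.default_rng(1)
    image = 0.05 + 0.01 * rng.standard_normal((200, 200))
    image[20:40, 20:40] = 0.80     # bright (positive) cell
    image[60:80, 120:140] = 0.75   # bright (positive) cell
    image[150:170, 50:70] = 0.30   # dim (negative) cell

    cells = measure.label(image > filters.threshold_otsu(image))
    props = measure.regionprops(cells, intensity_image=image)

    positive = sum(p.mean_intensity > 0.6 for p in props)   # toy intensity cut-off
    fraction = positive / max(len(props), 1)
    print("positive interaction" if fraction >= 0.5 else "negative interaction",
          round(fraction, 2))
    ```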

  5. Identification of benzothiazoles as potential polyglutamine aggregation inhibitors of Huntington's disease by using an automated filter retardation assay

    PubMed Central

    Heiser, Volker; Engemann, Sabine; Bröcker, Wolfgang; Dunkel, Ilona; Boeddrich, Annett; Waelter, Stephanie; Nordhoff, Eddi; Lurz, Rudi; Schugardt, Nancy; Rautenberg, Susanne; Herhaus, Christian; Barnickel, Gerhard; Böttcher, Henning; Lehrach, Hans; Wanker, Erich E.

    2002-01-01

    Preventing the formation of insoluble polyglutamine containing protein aggregates in neurons may represent an attractive therapeutic strategy to ameliorate Huntington's disease (HD). Therefore, the ability to screen for small molecules that suppress the self-assembly of huntingtin would have potential clinical and significant research applications. We have developed an automated filter retardation assay for the rapid identification of chemical compounds that prevent HD exon 1 protein aggregation in vitro. Using this method, a total of 25 benzothiazole derivatives that inhibit huntingtin fibrillogenesis in a dose-dependent manner were discovered from a library of ≈184,000 small molecules. The results obtained by the filter assay were confirmed by immunoblotting, electron microscopy, and mass spectrometry. Furthermore, cell culture studies revealed that 2-amino-4,7-dimethyl-benzothiazol-6-ol, a chemical compound similar to riluzole, significantly inhibits HD exon 1 aggregation in vivo. These findings may provide the basis for a new therapeutic approach to prevent the accumulation of insoluble protein aggregates in Huntington's disease and related glutamine repeat disorders. PMID:12200548

  6. Framework for Human-Automation Collaboration: Conclusions from Four Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Le Blanc, Katya L.; O'Hara, John

    The Human Automation Collaboration (HAC) research project is investigating how advanced technologies that are planned for Advanced Small Modular Reactors (AdvSMR) will affect the performance and the reliability of the plant from a human factors and human performance perspective. The HAC research effort investigates the consequences of allocating functions between the operators and automated systems. More specifically, the research team is addressing how to best design the collaboration between the operators and the automated systems in a manner that has the greatest positive impact on overall plant performance and reliability. Oxstrand et al. (2013 - March) describes the efforts conducted by the researchers to identify the research needs for HAC. The research team reviewed the literature on HAC, developed a model of HAC, and identified gaps in the existing knowledge of human-automation collaboration. As described in Oxstrand et al. (2013 - June), the team then prioritized the research topics identified based on the specific needs in the context of AdvSMR. The prioritization was based on two sources of input: 1) the preliminary functions and tasks, and 2) the model of HAC. As a result, three analytical studies were planned and conducted: 1) Models of Teamwork, 2) Standardized HAC Performance Measurement Battery, and 3) Initiators and Triggering Conditions for Adaptive Automation. Additionally, one field study was conducted at Idaho Falls Power.

  7. Automated Purgatoid Identification: Final Report

    NASA Technical Reports Server (NTRS)

    Wood, Steven

    2011-01-01

    Driving on Mars is hazardous: technical problems and unforeseen natural hazards can end a mission quickly at the worst, or result in long delays at best. This project is focused on helping to mitigate hazards posed to rovers by purgatoids: small (less than 1 m high, less than 10 m wide), ripple-like eolian bedforms commonly found scattered across the Meridiani Planum region of Mars. Due to the poorly consolidated nature of purgatoids and multiple past episodes of rovers getting stuck in them, identification and avoidance of these eolian bedforms is an important feature of rover path planning (NASA, 2011).

  8. Workflow Automation: A Collective Case Study

    ERIC Educational Resources Information Center

    Harlan, Jennifer

    2013-01-01

    Knowledge management has proven to be a sustainable competitive advantage for many organizations. Knowledge management systems are abundant, with multiple functionalities. The literature reinforces the use of workflow automation with knowledge management systems to benefit organizations; however, it was not known if process automation yielded…

  9. Mugshot Identification Database (MID)

    National Institute of Standards and Technology Data Gateway

    NIST Mugshot Identification Database (MID) (Web, free access)   NIST Special Database 18 is being distributed for use in development and testing of automated mugshot identification systems. The database consists of three CD-ROMs, containing a total of 3248 images of variable size using lossless compression. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  10. Developing and Evaluating an Automated All-Cause Harm Trigger System.

    PubMed

    Sammer, Christine; Miller, Susanne; Jones, Cason; Nelson, Antoinette; Garrett, Paul; Classen, David; Stockwell, David

    2017-04-01

    From 2009 through 2012, the Adventist Health System Patient Safety Organization (AHS PSO) used the Global Trigger Tool method for harm identification and demonstrated harm reduction. Although the awareness of harm demonstrated opportunities for improvement across the system, leaders determined that the human and fiscal resources required to continue with a retrospective manual harm identification process were unsustainable. In addition, there was growing concern that the identification of harm after the patient's discharge did not allow for intervention during the hospital stay. Therefore, the AHS PSO decided to seek an alternative method for patient harm identification. The AHS PSO and another PSO jointly developed a novel automated all-cause harm trigger identification system that allowed for real-time bedside intervention, real-time trend analysis affecting patient safety, and continued learning about harm measurement. A sociotechnical approach of people, process, and technology was used at two pilot hospitals sharing the same electronic health record platform. Automated positive harm triggers and work-flow models were developed and evaluated. Combined data from the two hospitals in a period of 11 consecutive months indicated (1) a total of 2,696 harms (combined hospital-acquired and outside-acquired); (2) that hypoglycemia (blood glucose ≤ 40 mg/dL) was the most frequently identified harm; (3) 256 harms related to the Patient Safety Indicator 90 (PSI 90) Composite descriptions versus 77 harms reported to regulatory harm reduction programs; and (4) that almost one third (32%) of total harms were classified as outside-acquired. The automated harm trigger system revealed not only more harm but a broader scope of harm and led to a deeper understanding of patient safety vulnerabilities. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
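
    As a hedged illustration of how an automated trigger of this kind might be expressed in code, the sketch below scans incoming laboratory results and raises a hypoglycemia trigger at a blood glucose of 40 mg/dL or less; the record layout, field names and patient data are assumptions, not the AHS PSO system.

    ```python
    from dataclasses import dataclass

    @dataclass
    class LabResult:
        patient_id: str
        test: str
        value: float        # mg/dL for blood glucose

    def hypoglycemia_trigger(result: LabResult) -> bool:
        """Return True when the result meets the harm-trigger definition (<= 40 mg/dL)."""
        return result.test == "blood_glucose" and result.value <= 40

    incoming = [
        LabResult("A123", "blood_glucose", 38),
        LabResult("B456", "blood_glucose", 95),
        LabResult("C789", "sodium", 132),
    ]
    for r in incoming:
        if hypoglycemia_trigger(r):
            print(f"TRIGGER: possible hypoglycemia harm for patient {r.patient_id}")
    ```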

  11. Development of an automated asbestos counting software based on fluorescence microscopy.

    PubMed

    Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio

    2015-01-01

    An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While full automation of asbestos analysis would require further improvements in the accuracy of fiber identification, the developed software could already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.

  12. Advantages and challenges in automated apatite fission track counting

    NASA Astrophysics Data System (ADS)

    Enkelmann, E.; Ehlers, T. A.

    2012-04-01

    Fission track thermochronometer data are often a core element of modern tectonic and denudation studies. Soon after the development of the fission track method, interest emerged in developing an automated counting procedure to replace the time-consuming labor of counting fission tracks under the microscope. Automated track counting became feasible in recent years with increasing improvements in computer software and hardware. One such example, used in this study, is the commercial automated fission track counting procedure from Autoscan Systems Pty, which has been highlighted in several venues. We conducted experiments designed to reliably and consistently test the ability of this fully automated counting system to recognize fission tracks in apatite and in a muscovite external detector. Fission tracks were analyzed in samples with a step-wise increase in sample complexity. The first set of experiments used a large (mm-size) slice of Durango apatite cut parallel to the prism plane. Second, samples with 80-200 μm apatite grains of the Fish Canyon Tuff were analyzed. This second sample set is characterized by complexities often found in apatites of different rock types. In addition to the automated counting procedure, the same samples were also analyzed using conventional counting procedures. We found for all samples that the fully automated fission track counting procedure using the Autoscan System yields larger scatter in the measured fission track densities than conventional (manual) track counting. This scatter typically resulted from the false identification of tracks due to surface and mineralogical defects, regardless of the image filtering procedure used. Large differences between track densities determined by automated counting persisted between different grains analyzed in one sample as well as between different samples. As a result of these differences, a manual correction of the fully automated fission track counts is necessary for

  13. An Automated Detection System for Microaneurysms That Is Effective across Different Racial Groups.

    PubMed

    Saleh, George Michael; Wawrzynski, James; Caputo, Silvestro; Peto, Tunde; Al Turk, Lutfiah Ismail; Wang, Su; Hu, Yin; Da Cruz, Lyndon; Smith, Phil; Tang, Hongying Lilian

    2016-01-01

    Patients without diabetic retinopathy (DR) represent a large proportion of the caseload seen by the DR screening service so reliable recognition of the absence of DR in digital fundus images (DFIs) is a prime focus of automated DR screening research. We investigate the use of a novel automated DR detection algorithm to assess retinal DFIs for absence of DR. A retrospective, masked, and controlled image-based study was undertaken. 17,850 DFIs of patients from six different countries were assessed for DR by the automated system and by human graders. The system's performance was compared across DFIs from the different countries/racial groups. The sensitivities for detection of DR by the automated system were Kenya 92.8%, Botswana 90.1%, Norway 93.5%, Mongolia 91.3%, China 91.9%, and UK 90.1%. The specificities were Kenya 82.7%, Botswana 83.2%, Norway 81.3%, Mongolia 82.5%, China 83.0%, and UK 79%. There was little variability in the calculated sensitivities and specificities across the six different countries involved in the study. These data suggest the possible scalability of an automated DR detection platform that enables rapid identification of patients without DR across a wide range of races.

  14. Transformation From a Conventional Clinical Microbiology Laboratory to Full Automation.

    PubMed

    Moreno-Camacho, José L; Calva-Espinosa, Diana Y; Leal-Leyva, Yoseli Y; Elizalde-Olivas, Dolores C; Campos-Romero, Abraham; Alcántar-Fernández, Jonathan

    2017-12-22

    To validate the performance, reproducibility, and reliability of BD automated instruments in order to establish a fully automated clinical microbiology laboratory. We used control strains and clinical samples to assess the accuracy, reproducibility, and reliability of the BD Kiestra WCA, BD Phoenix, and Bruker MALDI-Biotyper instruments and compared them to previously established conventional methods. The following processes were evaluated: sample inoculation and spreading, colony counts, sorting of cultures, antibiotic susceptibility testing, and microbial identification. The BD Kiestra recovered single colonies in less time than conventional methods (e.g. E. coli, 7h vs 10h, respectively) and agreement between both methodologies was excellent for colony counts (κ=0.824) and sorting cultures (κ=0.821). Antibiotic susceptibility tests performed with BD Phoenix and disk diffusion demonstrated 96.3% agreement between the two methods. Finally, we compared microbial identification in BD Phoenix and Bruker MALDI-Biotyper and observed perfect agreement (κ=1) and identification at the species level for control strains. Together these instruments allow us to process clinical urine samples in 36h (effective time). The BD automated technologies have improved performance compared with conventional methods, and are suitable for implementation in very busy microbiology laboratories. © American Society for Clinical Pathology 2017. All rights reserved.

  15. Using Pareto points for model identification in predictive toxicology

    PubMed Central

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649

  16. Performance of an automated solid-phase red cell adherence system compared with that of a manual gel microcolumn assay for the identification of antibodies eluted from red blood cells.

    PubMed

    Finck, R H; Davis, R J; Teng, S; Goldfinger, D; Ziman, A F; Lu, Q; Yuan, S

    2011-01-01

    IgG antibodies coating red blood cells (RBCs) can be removed by elution procedures and their specificity determined by antibody identification studies. Although such testing is traditionally performed using the tube agglutination assay, prior studies have shown that the gel microcolumn (GMC) assay may also be used with comparable results. The purpose of this study was to compare an automated solid-phase red cell adherence (SPRCA) system with a GMC assay for the detection of antibodies eluted from RBCs. Acid eluates from 51 peripheral blood (PB) and 7 cord blood (CB) samples were evaluated by both an automated SPRCA instrument and a manual GMC assay. The concordance rate between the two systems for peripheral RBC samples was 88.2 percent (45 of 51), including cases with alloantibodies (n = 8), warm autoantibodies (n = 12), antibodies with no identifiable specificity (n = 2), and negative results (n = 23). There were six discordant cases, of which four had alloantibodies (including anti-Jka, -E, and -e) demonstrable by the SPRCA system only. In the remaining 2 cases, anti-Fya and antibodies with no identifiable specificity were demonstrable by the GMC assay only. All seven CB specimens produced concordant results, showing anti-A (n = 3), -B (n = 1), maternal anti-Jka (n = 2), or a negative result (n = 1). Automated SPRCA technology has a performance that is comparable with that of a manual GMC assay for identifying antibodies eluted from PB and CB RBCs.

  17. Automated patient identification and localization error detection using 2-dimensional to 3-dimensional registration of kilovoltage x-ray setup images.

    PubMed

    Lamb, James M; Agazaryan, Nzhde; Low, Daniel A

    2013-10-01

    To determine whether kilovoltage x-ray projection radiation therapy setup images could be used to perform patient identification and detect gross errors in patient setup using a computer algorithm. Three patient cohorts treated using a commercially available image guided radiation therapy (IGRT) system that uses 2-dimensional to 3-dimensional (2D-3D) image registration were retrospectively analyzed: a group of 100 cranial radiation therapy patients, a group of 100 prostate cancer patients, and a group of 83 patients treated for spinal lesions. The setup images were acquired using fixed in-room kilovoltage imaging systems. In the prostate and cranial patient groups, localizations using image registration were performed between computed tomography (CT) simulation images from radiation therapy planning and setup x-ray images corresponding both to the same patient and to different patients. For the spinal patients, localizations were performed to the correct vertebral body, and to an adjacent vertebral body, using planning CTs and setup x-ray images from the same patient. An image similarity measure used by the IGRT system image registration algorithm was extracted from the IGRT system log files and evaluated as a discriminant for error detection. A threshold value of the similarity measure could be chosen to separate correct and incorrect patient matches and correct and incorrect vertebral body localizations with excellent accuracy for these patient cohorts. A 10-fold cross-validation using linear discriminant analysis yielded misclassification probabilities of 0.000, 0.0045, and 0.014 for the cranial, prostate, and spinal cases, respectively. An automated measure of the image similarity between x-ray setup images and corresponding planning CT images could be used to perform automated patient identification and detection of localization errors in radiation therapy treatments. Copyright © 2013 Elsevier Inc. All rights reserved.
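
    As a rough illustration of the analysis step described above, the sketch below treats the registration similarity measure as a one-dimensional feature, fits a linear discriminant, and estimates the misclassification probability with 10-fold cross-validation. The similarity scores are synthetic placeholders, not values from the IGRT log files.

        # Sketch: the image-similarity measure as a 1-D discriminant, evaluated
        # with linear discriminant analysis and 10-fold cross-validation.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        sim_correct = rng.normal(loc=0.80, scale=0.05, size=100)  # correct matches
        sim_wrong = rng.normal(loc=0.55, scale=0.07, size=100)    # wrong patient/level
        X = np.concatenate([sim_correct, sim_wrong]).reshape(-1, 1)
        y = np.repeat([1, 0], 100)

        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=10)
        print("estimated misclassification probability:", 1 - acc.mean())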

  18. Automated measurements for individualized heart rate correction of the QT interval.

    PubMed

    Mason, Jay W; Moon, Thomas E

    2015-04-01

    Subject-specific electrocardiographic QT interval correction for heart rate is often used in clinical trials with frequent electrocardiographic recordings. However, in these studies relatively few 10-s, 12-lead electrocardiograms may be available for calculating the individual correction. Highly automated QT and RR measurement tools have made it practical to measure electrocardiographic intervals on large volumes of continuous electrocardiogram data. The purpose of this study was to determine whether an automated method can be used in lieu of a manual method. In 49 subjects who completed all treatments in a four-armed crossover study, we compared two methods for derivation of individualized rate-correction coefficients: manual measurement on 10-s electrocardiograms and automated measurement of QT and RR during continuous 24-h electrocardiogram recordings. The four treatments, received by each subject in a Latin-square randomization sequence, were placebo, moxifloxacin, and two doses of an investigational drug. Analysis of continuous electrocardiogram data yielded a lower standard deviation of QT:RR regression values than the manual method, though the differences were not statistically significant. The within-subject and within-treatment coefficients of variation between the manual and automated methods were not significantly different. Corrected QT values from the two methods had similar rates of true and false positive identification of moxifloxacin's QT prolonging effect. An automated method for individualized rate correction applied to continuous electrocardiogram data could be advantageous in clinical trials, as the automated method is simpler, is based upon a much larger volume of data, yields similar results, and requires no human over-reading of the measurements. © The Author(s) 2015.
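
    One common form of subject-specific correction, shown in the Python sketch below, fits a log-log QT:RR regression per subject and corrects QT by the fitted exponent; the exact model used in the study is not specified here, and the interval data are synthetic.

        # A sketch of individualized rate correction (not necessarily the study's
        # model): fit log(QT) against log(RR) per subject, then apply
        # QTc = QT / RR**beta with the fitted exponent. Data are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        rr = rng.uniform(0.7, 1.2, size=500)                       # RR interval, s
        qt = 400.0 * rr ** 0.35 + rng.normal(0.0, 5.0, size=500)   # QT interval, ms

        beta, _ = np.polyfit(np.log(rr), np.log(qt), deg=1)        # subject-specific exponent
        qtc = qt / rr ** beta                                      # individualized QTc
        print(f"fitted exponent {beta:.3f}, mean QTc {qtc.mean():.1f} ms")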

  19. Automation in clinical microbiology: a new approach to identifying micro-organisms by automated pattern matching of proteins labelled with 35S-methionine.

    PubMed Central

    Tabaqchali, S; Silman, R; Holland, D

    1987-01-01

    A new rapid automated method for the identification and classification of microorganisms is described. It is based on the incorporation of 35S-methionine into cellular proteins and subsequent separation of the radiolabelled proteins by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE). The protein patterns produced were species specific and reproducible, permitting discrimination between the species. A large number of Gram negative and Gram positive aerobic and anaerobic organisms were successfully tested. Furthermore, there were sufficient differences within species between the protein profiles to permit subdivision of the species. New typing schemes for Clostridium difficile, coagulase negative staphylococci, and Staphylococcus aureus, including the methicillin resistant strains, could thus be introduced; this has provided the basis for useful epidemiological studies. To standardise and automate the procedure an automated electrophoresis system and a two dimensional scanner were developed to scan the dried gels directly. The scanner is operated by a computer which also stores and analyses the scan data. Specific histograms are produced for each bacterial species. Pattern recognition software is used to construct databases and to compare data obtained from different gels: in this way duplicate "unknowns" can be identified. Specific small areas showing differences between various histograms can also be isolated and expanded to maximise the differences, thus providing differentiation between closely related bacterial species and the identification of differences within the species to provide new typing schemes. This system should be widely applied in clinical microbiology laboratories in the near future. PMID:3312300

  20. Automated detection of diabetic retinopathy: barriers to translation into clinical practice.

    PubMed

    Abramoff, Michael D; Niemeijer, Meindert; Russell, Stephen R

    2010-03-01

    Automated identification of diabetic retinopathy (DR), the primary cause of blindness and visual loss for those aged 18-65 years, from color images of the retina has enormous potential to increase the quality, cost-effectiveness and accessibility of preventative care for people with diabetes. Through advanced image analysis techniques, retinal images are analyzed for abnormalities that define and correlate with the severity of DR. Translating automated DR detection into clinical practice will require surmounting scientific and nonscientific barriers. Scientific concerns, such as DR detection limits compared with human experts, can be studied and measured. Ethical, legal and political issues can be addressed, but are difficult or impossible to measure. The primary objective of this review is to survey the methods, potential benefits and limitations of automated detection in order to better manage translation into clinical practice, based on extensive experience with the systems we have developed.

  1. Standards for space automation and robotics

    NASA Technical Reports Server (NTRS)

    Kader, Jac B.; Loftin, R. B.

    1992-01-01

    The AIAA's Committee on Standards for Space Automation and Robotics (COS/SAR) is charged with the identification of key functions and critical technologies applicable to multiple missions that reflect fundamental consideration of environmental factors. COS/SAR's standards/practices/guidelines implementation methods will be based on reliability, performance, and operations, as well as economic viability and life-cycle costs, simplicity, and modularity.

  2. Automated structure solution, density modification and model building.

    PubMed

    Terwilliger, Thomas C

    2002-11-01

    The approaches that form the basis of automated structure solution in SOLVE and RESOLVE are described. The use of a scoring scheme to convert decision making in macromolecular structure solution to an optimization problem has proven very useful and in many cases a single clear heavy-atom solution can be obtained and used for phasing. Statistical density modification is well suited to an automated approach to structure solution because the method is relatively insensitive to choices of numbers of cycles and solvent content. The detection of non-crystallographic symmetry (NCS) in heavy-atom sites and checking of potential NCS operations against the electron-density map has proven to be a reliable method for identification of NCS in most cases. Automated model building beginning with an FFT-based search for helices and sheets has been successful for maps with resolutions as low as 3 Å. The entire process can be carried out in a fully automatic fashion in many cases.

  3. FBI fingerprint identification automation study. AIDS 3 evaluation report. Volume 4: Economic feasibility

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.

    1980-01-01

    The results of the economic analysis of the AIDS 3 system design are presented. AIDS 3 evaluated a set of economic feasibility measures including life cycle cost, implementation cost, annual operating expenditures and annual capital expenditures. The economic feasibility of AIDS 3 was determined by comparing the evaluated measures with the same measures, where applicable, evaluated for the current system. A set of future work load scenarios was constructed using JPL's environmental evaluation study of the fingerprint identification system. AIDS 3 and the current system were evaluated for each of the economic feasibility measures for each of the work load scenarios. They were compared for a set of performance measures, including response time and accuracy, and for a set of cost/benefit ratios, including cost per transaction and cost per technical search. Benefit measures related to the economic feasibility of the system are also presented, including the required number of employees and the required employee skill mix.

  4. Development of full-field optical spatial coherence tomography system for automated identification of malaria using the multilevel ensemble classifier.

    PubMed

    Singla, Neeru; Srivastava, Vishal; Mehta, Dalip Singh

    2018-05-01

    Malaria is a life-threatening infectious blood disease of humans and other animals caused by parasitic protozoans of the genus Plasmodium, and is especially prevalent in developing countries. The gold standard method for the detection of malaria is microscopic examination of chemically treated blood smears. We developed an automated optical spatial coherence tomographic system using a machine learning approach for fast identification of malaria-infected cells. In this study, 28 samples (15 healthy, 13 malaria-infected red blood cell samples at various stages) were imaged by the developed system and 13 features were extracted. We designed a multilevel ensemble-based classifier for the quantitative prediction of different stages of the malaria-infected cells. The proposed classifier was evaluated with repeated k-fold cross-validation and achieved a high average accuracy of 97.9% for identifying cells at the malaria-infected late trophozoite stage. Overall, our proposed system and multilevel ensemble model have substantial potential to detect the different stages of malaria infection without staining or an expert. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
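
    For readers unfamiliar with the evaluation scheme, the Python sketch below shows an ensemble classifier assessed with repeated k-fold cross-validation; it is an analogy rather than a reimplementation of the authors' multilevel ensemble, and the 13 features are synthetic stand-ins for the image-derived ones.

        # Sketch: an ensemble classifier evaluated with repeated k-fold
        # cross-validation on 13 synthetic features.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import RepeatedKFold, cross_val_score
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=280, n_features=13, random_state=0)

        ensemble = VotingClassifier(
            estimators=[
                ("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC(probability=True, random_state=0)),
            ],
            voting="soft",
        )
        cv = RepeatedKFold(n_splits=5, n_repeats=3, random_state=0)
        scores = cross_val_score(ensemble, X, y, cv=cv)
        print(f"mean cross-validated accuracy: {scores.mean():.3f}")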

  5. Automated systems to identify relevant documents in product risk management

    PubMed Central

    2012-01-01

    Background Product risk management involves critical assessment of the risks and benefits of health products circulating in the market. One of the important sources of safety information is the primary literature, especially for newer products with which regulatory authorities have relatively little experience. Although the primary literature provides vast and diverse information, only a small proportion of it is useful for product risk assessment work. Hence, the aim of this study is to explore the possibility of using text mining to automate the identification of useful articles, which would reduce the time taken for literature searches and thereby improve work efficiency. In this study, term-frequency inverse document-frequency values were computed for predictors extracted from the titles and abstracts of articles related to three tumour necrosis factor-alpha blockers. A general automated system was developed using only general predictors and was tested for its generalizability using articles related to four other drug classes. Several specific automated systems were developed using both general and specific predictors and training sets of different sizes in order to determine the minimum number of articles required for developing such systems. Results The general automated system had an area under the curve value of 0.731 and was able to rank 34.6% and 46.2% of the total number of 'useful' articles among the first 10% and 20% of the articles presented to the evaluators when tested on the generalizability set. However, its use may be limited by the subjective definition of useful articles. For the specific automated system, it was found that only 20 articles were required to develop a specific automated system with a prediction performance (AUC 0.748) that was better than that of the general automated system. Conclusions Specific automated systems can be developed rapidly and avoid problems caused by the subjective definition of useful articles. Thus the efficiency of
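
    The core of such a system, term-frequency inverse document-frequency features feeding a classifier that ranks new articles by predicted usefulness, can be sketched in a few lines of Python; the article texts, labels and model choice below are illustrative assumptions rather than the study's implementation.

        # Sketch: TF-IDF features plus a simple classifier to score how useful a
        # new article is likely to be. Texts and labels are placeholders.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        train_texts = [
            "serious infection risk during tnf-alpha blocker therapy",
            "pharmacokinetics of a monoclonal antibody in healthy volunteers",
            "post-marketing reports of hepatotoxicity with biologic therapy",
            "synthesis route optimisation for a small-molecule intermediate",
        ]
        train_labels = [1, 0, 1, 0]   # 1 = useful for risk assessment

        vectoriser = TfidfVectorizer()
        classifier = LogisticRegression().fit(
            vectoriser.fit_transform(train_texts), train_labels
        )

        new_texts = ["case report of tuberculosis reactivation on tnf-alpha blockade"]
        usefulness = classifier.predict_proba(vectoriser.transform(new_texts))[:, 1]
        print("predicted usefulness score:", usefulness[0])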

  6. An Extended Case Study Methodology for Investigating Influence of Cultural, Organizational, and Automation Factors on Human-Automation Trust

    NASA Technical Reports Server (NTRS)

    Koltai, Kolina Sun; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Johnson, Walter; Cacanindin, Artemio

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational, and automation-capability factors upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. These include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among organizations involved in the system development.

  7. Comparability of automated human induced pluripotent stem cell culture: a pilot study.

    PubMed

    Archibald, Peter R T; Chandra, Amit; Thomas, Dave; Chose, Olivier; Massouridès, Emmanuelle; Laâbi, Yacine; Williams, David J

    2016-12-01

    Consistent and robust manufacturing is essential for the translation of cell therapies, and the utilisation of automation throughout the manufacturing process may allow for improvements in quality control, scalability, reproducibility and economics of the process. The aim of this study was to measure and establish the comparability between alternative process steps for the culture of hiPSCs. Consequently, the effects of manual centrifugation and automated non-centrifugation process steps, performed using TAP Biosystems' CompacT SelecT automated cell culture platform, upon the culture of a human induced pluripotent stem cell (hiPSC) line (VAX001024c07) were compared. This study has demonstrated that comparable morphologies and cell diameters were observed in hiPSCs cultured using either manual or automated process steps. However, non-centrifugation hiPSC populations exhibited greater cell yields, greater aggregate rates, increased pluripotency marker expression, and decreased differentiation marker expression compared to centrifugation hiPSCs. A trend for decreased variability in cell yield was also observed after the utilisation of the automated process step. This study also highlights the detrimental effect of the cryopreservation and thawing processes upon the growth and characteristics of hiPSC cultures, and demonstrates that automated hiPSC manufacturing protocols can be successfully transferred between independent laboratories.

  8. Cell-Detection Technique for Automated Patch Clamping

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2008-01-01

    A unique and customizable machine-vision and image-data-processing technique has been developed for use in automated identification of cells that are optimal for patch clamping. [Patch clamping (in which patch electrodes are pressed against cell membranes) is an electrophysiological technique widely applied for the study of ion channels, and of membrane proteins that regulate the flow of ions across the membranes. Patch clamping is used in many biological research fields such as neurobiology, pharmacology, and molecular biology.] While there exist several hardware techniques for automated patch clamping of cells, very few of those techniques incorporate machine vision for locating cells that are ideal subjects for patch clamping. In contrast, the present technique is embodied in a machine-vision algorithm that, in practical application, enables the user to identify good and bad cells for patch clamping in an image captured by a charge-coupled-device (CCD) camera attached to a microscope, within a processing time of one second. Hence, the present technique can save time, thereby increasing efficiency and reducing cost. The present technique involves the utilization of cell-feature metrics to accurately make decisions on the degree to which individual cells are "good" or "bad" candidates for patch clamping. These metrics include position coordinates (x,y) in the image plane, major-axis length, minor-axis length, area, elongation, roundness, smoothness, angle of orientation, and degree of inclusion in the field of view. The present technique does not require any special hardware beyond commercially available, off-the-shelf patch-clamping hardware: A standard patch-clamping microscope system with an attached CCD camera, a personal computer with an image-data-processing board, and some experience in utilizing image-data-processing software are all that are needed. A cell image is first captured by the microscope CCD camera and image-data-processing board, then the image
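
    A rough modern equivalent of the feature-extraction step (not the NASA technique itself) can be sketched with scikit-image: segment a microscope frame and compute per-cell metrics of the kind listed above. The input image below is a synthetic placeholder and the area filter is an arbitrary choice.

        # Sketch of per-cell feature metrics (centroid, axis lengths, area,
        # eccentricity) using scikit-image on a thresholded placeholder image.
        import numpy as np
        from skimage import filters, measure

        frame = filters.gaussian(np.random.default_rng(0).random((256, 256)), sigma=5)
        mask = frame > filters.threshold_otsu(frame)      # crude segmentation
        labels = measure.label(mask)

        for cell in measure.regionprops(labels):
            if cell.area < 50:                            # ignore tiny specks
                continue
            y, x = cell.centroid
            print(f"cell at ({x:.0f}, {y:.0f}): area={cell.area}, "
                  f"major={cell.major_axis_length:.1f}, "
                  f"minor={cell.minor_axis_length:.1f}, "
                  f"eccentricity={cell.eccentricity:.2f}")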

  9. Using Modeling and Simulation to Predict Operator Performance and Automation-Induced Complacency With Robotic Automation: A Case Study and Empirical Validation.

    PubMed

    Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M

    2015-09-01

    The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to these failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.

  10. Space station automation study: Autonomous systems and assembly, volume 2

    NASA Technical Reports Server (NTRS)

    Bradford, K. Z.

    1984-01-01

    This final report, prepared by Martin Marietta Denver Aerospace, provides the technical results of their input to the Space Station Automation Study, the purpose of which is to develop informed technical guidance in the use of autonomous systems to implement space station functions, many of which can be programmed in advance and are well suited for automated systems.

  11. Automated reuseable components system study results

    NASA Technical Reports Server (NTRS)

    Gilroy, Kathy

    1989-01-01

    The Automated Reusable Components System (ARCS) was developed under a Phase 1 Small Business Innovative Research (SBIR) contract for the U.S. Army CECOM. The objectives of the ARCS program were: (1) to investigate issues associated with automated reuse of software components, identify alternative approaches, and select promising technologies, and (2) to develop tools that support component classification and retrieval. The approach followed was to research emerging techniques and experimental applications associated with reusable software libraries, to investigate the more mature information retrieval technologies for applicability, and to investigate the applicability of specialized technologies to improve the effectiveness of a reusable component library. Various classification schemes and retrieval techniques were identified and evaluated for potential application in an automated library system for reusable components. Strategies for library organization and management, component submittal and storage, and component search and retrieval were developed. A prototype ARCS was built to demonstrate the feasibility of automating the reuse process. The prototype was created using a subset of the classification and retrieval techniques that were investigated. The demonstration system was exercised and evaluated using reusable Ada components selected from the public domain. A requirements specification for a production-quality ARCS was also developed.

  12. Space station automation study. Volume 1: Executive summary. Autonomous systems and assembly

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The space station automation study (SSAS) was to develop informed technical guidance for NASA personnel in the use of autonomy and autonomous systems to implement space station functions. The initial step taken by NASA in organizing the SSAS was to form and convene a panel of recognized expert technologists in automation, space sciences and aerospace engineering to produce a space station automation plan.

  13. Improving Automated Endmember Identification for Linear Unmixing of HyspIRI Spectral Data.

    NASA Astrophysics Data System (ADS)

    Gader, P.

    2016-12-01

    The size of data sets produced by imaging spectrometers is increasing rapidly. There is already a processing bottleneck. Part of the reason for this bottleneck is the need for expert input using interactive software tools. This process can be very time-consuming and laborious but is currently crucial to ensuring the quality of the analysis. Automated algorithms can mitigate this problem. Although it is unlikely that processing systems can become completely automated, there is an urgent need to increase the level of automation. Spectral unmixing is a key component of processing HyspIRI data. Algorithms such as MESMA have been demonstrated to achieve results but require careful, expert construction of endmember libraries. Unfortunately, many endmembers found by automated endmember-finding algorithms are deemed unsuitable by experts because they are not physically reasonable. Moreover, endmembers that are not physically reasonable can achieve very low errors between the linear mixing model with those endmembers and the original data. Therefore, this error is not a reasonable way to resolve the problem of "non-physical" endmembers. There are many potential approaches for resolving these issues, including using Bayesian priors, but very little attention has been given to this problem. The study reported on here considers a modification of the Sparsity Promoting Iterated Constrained Endmember (SPICE) algorithm. SPICE finds endmembers and abundances and estimates the number of endmembers. The SPICE algorithm seeks to minimize a quadratic objective function with respect to endmembers E and fractions P. The modified SPICE algorithm, which we refer to as SPICED, is obtained by adding the term D to the objective function. The term D pressures the algorithm to minimize the sum of the squared differences between each endmember and a weighted sum of the data. By appropriately modifying the weighting, the endmembers are pushed towards a subset of the data with the potential for

  14. Full-text automated detection of surgical site infections secondary to neurosurgery in Rennes, France.

    PubMed

    Campillo-Gimenez, Boris; Garcelon, Nicolas; Jarno, Pascal; Chapplain, Jean Marc; Cuggia, Marc

    2013-01-01

    The surveillance of Surgical Site Infections (SSI) contributes to the management of risk in French hospitals. Manual identification of infections is costly, time-consuming and limits the promotion of preventive procedures by the dedicated teams. The introduction of alternative methods using automated detection strategies is a promising way to improve this surveillance. The present study describes an automated detection strategy for SSI in neurosurgery, based on textual analysis of medical reports stored in a clinical data warehouse. The method consists, firstly, of enrichment and concept extraction from full-text reports using NOMINDEX and, secondly, of text similarity measurement using a vector space model. The text detection was compared to the conventional strategy based on self-declaration and to the automated detection using the diagnosis-related group database. The text-mining approach showed the best detection accuracy, with recall and precision equal to 92% and 40%, respectively, and confirmed the interest of reusing full-text medical reports to perform automated detection of SSI.
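
    The vector-space step can be illustrated as follows: reports are turned into term-count vectors and a new report is scored by its cosine similarity to reports already known to describe an SSI. The concept-extraction step performed by NOMINDEX is not reproduced here, and the report texts are invented placeholders.

        # Sketch: cosine similarity between a new report and known SSI reports
        # in a simple term-count vector space.
        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer

        known_ssi_reports = [
            "purulent discharge at the craniotomy site with wound dehiscence",
            "revision surgery for deep infection of the operative site",
        ]
        new_report = ["post-operative wound with purulent discharge and fever"]

        vectoriser = CountVectorizer()
        matrix = vectoriser.fit_transform(known_ssi_reports + new_report).toarray().astype(float)

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        scores = [cosine(matrix[-1], matrix[i]) for i in range(len(known_ssi_reports))]
        print("similarity to known SSI reports:", [round(s, 2) for s in scores])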

  15. Automated identification of stream-channel geomorphic features from high‑resolution digital elevation models in West Tennessee watersheds

    USGS Publications Warehouse

    Cartwright, Jennifer M.; Diehl, Timothy H.

    2017-01-17

    High-resolution digital elevation models (DEMs) derived from light detection and ranging (lidar) enable investigations of stream-channel geomorphology with much greater precision than previously possible. The U.S. Geological Survey has developed the DEM Geomorphology Toolbox, containing seven tools to automate the identification of sites of geomorphic instability that may represent sediment sources and sinks in stream-channel networks. These tools can be used to modify input DEMs on the basis of known locations of stormwater infrastructure, derive flow networks at user-specified resolutions, and identify possible sites of geomorphic instability including steep banks, abrupt changes in channel slope, or areas of rough terrain. Field verification of tool outputs identified several tool limitations but also demonstrated their overall usefulness in highlighting likely sediment sources and sinks within channel networks. In particular, spatial clusters of outputs from multiple tools can be used to prioritize field efforts to assess and restore eroding stream reaches.
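
    As a toy illustration of the kind of check such tools perform (not the toolbox code itself), the sketch below flags steep-bank cells where the slope computed from a DEM exceeds a threshold; the synthetic DEM, cell size and 30-degree threshold are all assumptions.

        # Sketch: flag "steep bank" cells where DEM-derived slope exceeds a threshold.
        import numpy as np

        cell_size = 1.0                                    # metres per DEM cell
        x, y = np.meshgrid(np.linspace(0, 50, 200), np.linspace(0, 50, 200))
        dem = 5.0 * np.exp(-((x - 25) ** 2) / 20.0)        # synthetic channel bank

        dz_dy, dz_dx = np.gradient(dem, cell_size)         # elevation derivatives
        slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

        steep_bank = slope_deg > 30.0                      # illustrative threshold
        print(f"{steep_bank.sum()} of {steep_bank.size} cells flagged as steep")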

  16. Development and validation of an automated operational modal analysis algorithm for vibration-based monitoring and tensile load estimation

    NASA Astrophysics Data System (ADS)

    Rainieri, Carlo; Fabbrocino, Giovanni

    2015-08-01

    In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure lifespan. However, the lack of automated modal identification and tracking procedures has been for long a relevant drawback to the extensive application of the above-mentioned techniques in the engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational efforts and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous

  17. Performance of VITEK® MS V3.0 for the Identification of Mycobacterium species from Patient Samples using Automated Liquid Media Systems.

    PubMed

    Miller, Eric; Cantrell, Christopher; Beard, Melodie; Derylak, Andrew; Babady, N Esther; McMillen, Tracy; Miranda, Edwin; Body, Barbara; Tang, Yi-Wei; Vasireddy, Ravikiran; Vasireddy, Sruthi; Smith, Terry; Iakhiaeva, Elena; Wallace, Richard J; Brown-Elliott, Barbara A; Moreno, Erik; Totty, Heather; Deol, Parampal

    2018-06-06

    The accuracy and robustness of the VITEK® MS V3.0 matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry (MS) system was evaluated by identifying mycobacteria from automated liquid media systems using patient samples. This is the first report demonstrating that proteins within the liquid media, its supplements, and decontamination reagents for non-sterile patient samples do not generate misidentification or false positive results when using the VITEK MS V3.0 system. Prior to testing with patient samples, a seeded study was conducted to challenge the accuracy of the VITEK MS to identify mycobacteria from liquid media through mimicking a clinical workflow. A total of 77 Mycobacterium strains representing 21 species, seeded in simulated sputum, were decontaminated and inoculated into BACT/ALERT®MP liquid culture medium, incubated until positivity, and identified using VITEK MS. A total of 383 liquid cultures were tested of which 379 (99%) identified correctly to the species/complex/group; four (1%) obtained a No Identification, and no misidentifications were observed. Following the simulated sputum study, a total of 73 smear-positive liquid medium cultures detected using BD BBL™ MGIT™ and VersaTREK® MYCO liquid media were identified by VITEK MS. Sixty-four (87.7%) correctly identified to the species/complex/group level; seven (9.6%) resulted as No Identification, and two (2.7%) misidentified at the species level. These results indicate the VITEK MS V3.0 is an accurate tool for routine diagnostics of Mycobacterium species isolated from liquid cultures. Copyright © 2018 American Society for Microbiology.

  18. Reduction in Hospital-Wide Clinical Laboratory Specimen Identification Errors following Process Interventions: A 10-Year Retrospective Observational Study

    PubMed Central

    Ning, Hsiao-Chen; Lin, Chia-Ni; Chiu, Daniel Tsun-Yee; Chang, Yung-Ta; Wen, Chiao-Ni; Peng, Shu-Yu; Chu, Tsung-Lan; Yu, Hsin-Ming; Wu, Tsu-Lan

    2016-01-01

    Background Accurate patient identification and specimen labeling at the time of collection are crucial steps in the prevention of medical errors, thereby improving patient safety. Methods All patient specimen identification errors that occurred in the outpatient department (OPD), emergency department (ED), and inpatient department (IPD) of a 3,800-bed academic medical center in Taiwan were documented and analyzed retrospectively from 2005 to 2014. To reduce such errors, the following series of strategies were implemented: a restrictive specimen acceptance policy for the ED and IPD in 2006; a computer-assisted barcode positive patient identification system for the ED and IPD in 2007 and 2010, and automated sample labeling combined with electronic identification systems introduced to the OPD in 2009. Results Of the 2,000,345 specimens collected in 2005, 1023 (0.0511%) were identified as having patient identification errors, compared with 58 errors (0.0015%) among 3,761,238 specimens collected in 2014, after serial interventions; this represents a 97% relative reduction. The total numbers (rates) of institutional identification errors contributed by the ED, IPD, and OPD over the 10-year period were 423 (0.1058%), 556 (0.0587%), and 44 (0.0067%) errors before the interventions, and 3 (0.0007%), 52 (0.0045%) and 3 (0.0001%) after interventions, representing relative 99%, 92% and 98% reductions, respectively. Conclusions Accurate patient identification is a challenge for patient safety in different health settings. The data collected in our study indicate that a restrictive specimen acceptance policy, computer-generated positive identification systems, and interdisciplinary cooperation can significantly reduce patient identification errors. PMID:27494020
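
    The headline figures can be reproduced directly from the counts reported above; a short worked check in Python:

        # Worked check of the reported error rates and relative reduction.
        errors_2005, specimens_2005 = 1023, 2_000_345
        errors_2014, specimens_2014 = 58, 3_761_238

        rate_2005 = errors_2005 / specimens_2005        # ~0.0511 %
        rate_2014 = errors_2014 / specimens_2014        # ~0.0015 %
        relative_reduction = 1 - rate_2014 / rate_2005

        print(f"error rate 2005: {rate_2005:.4%}")
        print(f"error rate 2014: {rate_2014:.4%}")
        print(f"relative reduction: {relative_reduction:.0%}")   # ~97 %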

  19. Contemporary microbiology and identification of Corynebacteria spp. causing infections in human.

    PubMed

    Zasada, A A; Mosiej, E

    2018-06-01

    Corynebacterium is a genus of bacteria of growing clinical importance. Progress in medicine has resulted in a growing population of immunocompromised patients and a growing number of infections caused by opportunistic pathogens. New infections caused by new Corynebacterium species and by species previously regarded as commensal micro-organisms have been described. In parallel with changes in Corynebacterium infections, microbiological laboratory diagnostic capabilities are also changing, but identification of this group of bacteria to the species level remains difficult. In the paper, we present various manual, semi-automated and automated assays used in clinical laboratories for Corynebacterium identification, such as API Coryne, RapID CB Plus, BBL Crystal Gram Positive ID System, MICRONAUT-RPO, VITEK 2, BD Phoenix System, Sherlock Microbial ID System, MicroSeq Microbial Identification System, Biolog Microbial Identification Systems, MALDI-TOF MS systems, polymerase chain reaction (PCR)-based and sequencing-based assays. The presented assays are based on various properties, such as biochemical reactions, specific DNA sequences, the composition of cellular fatty acids and protein profiles, and each has specific limitations. The number of opportunistic infections caused by Corynebacteria is increasing due to the increasing number of immunocompromised patients. New Corynebacterium species and new human infections, caused by this group of bacteria, have been described recently. However, identification of Corynebacteria is still a challenge despite application of sophisticated laboratory methods. In this study, we present the possibilities and limitations of various commercial systems for identification of Corynebacteria. © 2018 The Society for Applied Microbiology.

  20. Automation in high-content flow cytometry screening.

    PubMed

    Naumann, U; Wand, M P

    2009-09-01

    High-content flow cytometric screening (FC-HCS) is a 21st Century technology that combines robotic fluid handling, flow cytometric instrumentation, and bioinformatics software, so that relatively large numbers of flow cytometric samples can be processed and analysed in a short period of time. We revisit a recent application of FC-HCS to the problem of cellular signature definition for acute graft-versus-host-disease. Our focus is on automation of the data processing steps using recent advances in statistical methodology. We demonstrate that effective results, on par with those obtained via manual processing, can be achieved using our automatic techniques. Such automation of FC-HCS has the potential to drastically improve diagnosis and biomarker identification.

  1. Automated surface quality inspection with ARGOS: a case study

    NASA Astrophysics Data System (ADS)

    Kiefhaber, Daniel; Etzold, Fabian; Warken, Arno F.; Asfour, Jean-Michel

    2017-06-01

    The commercial availability of automated inspection systems for optical surfaces specified according to ISO 10110-7 promises unsupervised and automated quality control with reproducible results. In this study, the classification results of the ARGOS inspection system are compared to the decisions by well-trained inspectors based on manual-visual inspection. Both are found to agree in 93.6% of the studied cases. Exemplary cases with differing results are studied, and shown to be partly caused by shortcomings of the ISO 10110-7 standard, which was written for industry-standard manual-visual inspection. Applying it to high-resolution whole-surface images from objective machine vision systems brings with it a few challenges, which are discussed.

  2. Space station automation study-satellite servicing. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A plan for advancing the state of automation and robotics technology as an integral part of the U.S. space station development effort was studied. This study was undertaken: (1) to determine the benefits that will accrue from using automated systems onboard the space station in support of satellite servicing; (2) to define methods for increasing the capacity for, and effectiveness of satellite servicing while reducing demands on crew time and effort and on ground support; (3) to find optimum combinations of men/machine activities in the performance of servicing functions; and (4) project the evolution of automation technology needed to enhance or enable satellite servicing capabilities to match the evolutionary growth of the space station. A secondary intent is to accelerate growth and utilization of robotics in terrestrial applications as a spin-off from the space station program.

  3. Pilot study analyzing automated ECG screening of hypertrophic cardiomyopathy.

    PubMed

    Campbell, Matthew J; Zhou, Xuefu; Han, Chia; Abrishami, Hedayat; Webster, Gregory; Miyake, Christina Y; Sower, Christopher T; Anderson, Jeffrey B; Knilans, Timothy K; Czosek, Richard J

    2017-06-01

    Hypertrophic cardiomyopathy (HCM) is one of the leading causes of sudden cardiac death in athletes. However, preparticipation ECG screening has often been criticized for failing to meet cost-effectiveness thresholds, in part because of high false-positive rates and the cost of ECG screening itself. The purpose of this study was to assess the testing characteristics of an automated ECG algorithm designed to screen for HCM in a multi-institutional pediatric cohort. ECGs from patients with HCM aged 12 to 20 years from 3 pediatric institutions were screened for ECG criteria for HCM using a previously described automated computer algorithm developed specifically for HCM ECG screening. The results were compared to a known healthy pediatric cohort. The studies were then read by trained electrophysiologists using standard ECG criteria and compared to the results of automated screening. One hundred twenty-eight ECGs from unique patients with phenotypic HCM were obtained and compared with 256 studies from healthy control patients matched in 2:1 fashion. When presented with the ECGs, the non-voltage-based algorithm resulted in 81.2% sensitivity and 90.7% specificity. A trained electrophysiologist read the same data according to the Seattle Criteria, with 71% sensitivity and 95.7% specificity. The sensitivity of screening as well as the components of the ECG screening itself varied by institution. This pilot study demonstrates a potential for automated ECG screening algorithms to detect HCM with testing characteristics similar to those of a trained electrophysiologist. In addition, there appear to be differences in ECG characteristics between patient populations, which may account for the difficulties in universal screening. Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  4. Library Automation in Sub Saharan Africa: Case Study of the University of Botswana

    ERIC Educational Resources Information Center

    Mutula, Stephen Mudogo

    2012-01-01

    Purpose: This article aims to present experiences and the lessons learned from the University of Botswana (UB) library automation project. The implications of the project for similar libraries planning automation in sub Saharan Africa and beyond are adduced. Design/methodology/approach: The article is a case study of library automation at the…

  5. Multi-scale curvature for automated identification of glaciated mountain landscapes

    NASA Astrophysics Data System (ADS)

    Prasicek, Günther; Otto, Jan-Christoph; Montgomery, David R.; Schrott, Lothar

    2014-03-01

    Erosion by glacial and fluvial processes shapes mountain landscapes in a long-recognized and characteristic way. Upland valleys incised by fluvial processes typically have a V-shaped cross-section with uniform and moderately steep slopes, whereas glacial valleys tend to have a U-shaped profile with a changing slope gradient. We present a novel regional approach to automatically differentiate between fluvial and glacial mountain landscapes based on the relation of multi-scale curvature and drainage area. Sample catchments are delineated and multiple moving window sizes are used to calculate per-cell curvature over a variety of scales ranging from the vicinity of the flow path at the valley bottom to catchment sections fully including valley sides. Single-scale curvature can take similar values for glaciated and non-glaciated catchments but a comparison of multi-scale curvature leads to different results according to the typical cross-sectional shapes. To adapt these differences for automated classification of mountain landscapes into areas with V- and U-shaped valleys, curvature values are correlated with drainage area and a new and simple morphometric parameter, the Difference of Minimum Curvature (DMC), is developed. At three study sites in the western United States the DMC thresholds determined from catchment analysis are used to automatically identify 5 × 5 km quadrats of glaciated and non-glaciated landscapes and the distinctions are validated by field-based geological and geomorphological maps. Our results demonstrate that DMC is a good predictor of glacial imprint, allowing automated delineation of glacially and fluvially incised mountain landscapes.

  6. Multi-scale curvature for automated identification of glaciated mountain landscapes

    PubMed Central

    Prasicek, Günther; Otto, Jan-Christoph; Montgomery, David R.; Schrott, Lothar

    2014-01-01

    Erosion by glacial and fluvial processes shapes mountain landscapes in a long-recognized and characteristic way. Upland valleys incised by fluvial processes typically have a V-shaped cross-section with uniform and moderately steep slopes, whereas glacial valleys tend to have a U-shaped profile with a changing slope gradient. We present a novel regional approach to automatically differentiate between fluvial and glacial mountain landscapes based on the relation of multi-scale curvature and drainage area. Sample catchments are delineated and multiple moving window sizes are used to calculate per-cell curvature over a variety of scales ranging from the vicinity of the flow path at the valley bottom to catchment sections fully including valley sides. Single-scale curvature can take similar values for glaciated and non-glaciated catchments but a comparison of multi-scale curvature leads to different results according to the typical cross-sectional shapes. To adapt these differences for automated classification of mountain landscapes into areas with V- and U-shaped valleys, curvature values are correlated with drainage area and a new and simple morphometric parameter, the Difference of Minimum Curvature (DMC), is developed. At three study sites in the western United States the DMC thresholds determined from catchment analysis are used to automatically identify 5 × 5 km quadrats of glaciated and non-glaciated landscapes and the distinctions are validated by field-based geological and geomorphological maps. Our results demonstrate that DMC is a good predictor of glacial imprint, allowing automated delineation of glacially and fluvially incised mountain landscapes. PMID:24748703
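
    The scale-comparison idea can be caricatured in a few lines of Python; the exact Difference of Minimum Curvature defined in the paper is not reproduced here, and a Laplacian of the smoothed DEM is used only as a crude curvature proxy on a synthetic valley cross-section.

        # Schematic comparison of a curvature proxy across two analysis scales
        # (not the paper's DMC definition). The DEM is a synthetic V-shaped valley.
        import numpy as np
        from scipy import ndimage

        x = np.linspace(-1, 1, 400)
        dem = np.tile(np.abs(x), (400, 1))          # synthetic valley cross-section

        def curvature_at_scale(dem, window):
            smoothed = ndimage.uniform_filter(dem, size=window)
            return ndimage.laplace(smoothed)        # crude curvature proxy

        small_scale = curvature_at_scale(dem, window=3)
        large_scale = curvature_at_scale(dem, window=51)
        scale_difference = small_scale.min() - large_scale.min()
        print("difference of minimum curvature proxy:", scale_difference)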

  7. A computerized method for automated identification of erect posteroanterior and supine anteroposterior chest radiographs

    NASA Astrophysics Data System (ADS)

    Kao, E.-Fong; Lin, Wei-Chen; Hsu, Jui-Sheng; Chou, Ming-Chung; Jaw, Twei-Shiun; Liu, Gin-Chung

    2011-12-01

    A computerized scheme was developed for automated identification of erect posteroanterior (PA) and supine anteroposterior (AP) chest radiographs. The method was based on three features, the tilt angle of the scapula superior border, the tilt angle of the clavicle and the extent of radiolucence in lung fields, to identify the view of a chest radiograph. The three indices A_scapula, A_clavicle and C_lung were determined from a chest image for the three features. Linear discriminant analysis was used to classify PA and AP chest images based on the three indices. The performance of the method was evaluated by receiver operating characteristic analysis. The proposed method was evaluated using a database of 600 PA and 600 AP chest radiographs. The discriminant performances A_z of A_scapula, A_clavicle and C_lung were 0.878 ± 0.010, 0.683 ± 0.015 and 0.962 ± 0.006, respectively. The combination of the three indices obtained an A_z value of 0.979 ± 0.004. The results indicate that the combination of the three indices could yield high discriminant performance. The proposed method could provide radiologists with information about the view of chest radiographs for interpretation or could be used as a preprocessing step for analyzing chest images.
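
    The evaluation can be mimicked with a short Python sketch: compute the area under the ROC curve (A_z) for a single index and for a linear-discriminant combination of all three. The index distributions below are synthetic placeholders, not measurements from radiographs.

        # Sketch: ROC area (A_z) for one index versus an LDA combination of three.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 600
        y = np.repeat([1, 0], n)                                  # 1 = PA, 0 = AP
        a_scapula = np.concatenate([rng.normal(25, 8, n), rng.normal(40, 8, n)])
        a_clavicle = np.concatenate([rng.normal(10, 6, n), rng.normal(14, 6, n)])
        c_lung = np.concatenate([rng.normal(0.7, 0.1, n), rng.normal(0.45, 0.1, n)])
        X = np.column_stack([a_scapula, a_clavicle, c_lung])

        print("A_z (C_lung alone):", roc_auc_score(y, c_lung))
        combined = LinearDiscriminantAnalysis().fit(X, y).decision_function(X)
        print("A_z (combined index):", roc_auc_score(y, combined))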

  8. A semi-automated measurement technique for the assessment of radiolucency.

    PubMed

    Pegg, E C; Kendrick, B J L; Pandit, H G; Gill, H S; Murray, D W

    2014-07-06

    The assessment of radiolucency around an implant is qualitative, poorly defined and has low agreement between clinicians. Accurate and repeatable assessment of radiolucency is essential to prevent misdiagnosis, minimize cases of unnecessary revision, and to correctly monitor and treat patients at risk of loosening and implant failure. The purpose of this study was to examine whether a semi-automated imaging algorithm could improve repeatability and enable quantitative assessment of radiolucency. Six surgeons assessed 38 radiographs of knees after unicompartmental knee arthroplasty for radiolucency, and results were compared with assessments made by the semi-automated program. Large variation was found between the surgeon results, with total agreement in only 9.4% of zones and a kappa value of 0.602; whereas the automated program had total agreement in 81.6% of zones and a kappa value of 0.802. The software had a 'fair to excellent' prediction of the presence or the absence of radiolucency, where the area under the curve of the receiver operating characteristic curves was 0.82 on average. The software predicted radiolucency equally well for cemented and cementless implants (p = 0.996). The identification of radiolucency using an automated method is feasible and these results indicate that it could aid the definition and quantification of radiolucency.

  9. Automated protein NMR structure determination using wavelet de-noised NOESY spectra.

    PubMed

    Dancea, Felician; Günther, Ulrich

    2005-11-01

    A major time-consuming step of protein NMR structure determination is the generation of reliable NOESY cross peak lists which usually requires a significant amount of manual interaction. Here we present a new algorithm for automated peak picking involving wavelet de-noised NOESY spectra in a process where the identification of peaks is coupled to automated structure determination. The core of this method is the generation of incremental peak lists by applying different wavelet de-noising procedures which yield peak lists of a different noise content. In combination with additional filters which probe the consistency of the peak lists, good convergence of the NOESY-based automated structure determination could be achieved. These algorithms were implemented in the context of the ARIA software for automated NOE assignment and structure determination and were validated for a polysulfide-sulfur transferase protein of known structure. The procedures presented here should be commonly applicable for efficient protein NMR structure determination and automated NMR peak picking.
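
    The de-noising idea can be sketched with PyWavelets on a one-dimensional trace (the study applied it to 2D NOESY spectra within the ARIA workflow); the wavelet choice, decomposition level and threshold below are illustrative assumptions.

        # Sketch: wavelet decomposition, soft thresholding of detail coefficients,
        # and reconstruction of a synthetic 1-D spectrum trace.
        import numpy as np
        import pywt

        x = np.linspace(0, 1, 1024)
        clean = np.exp(-((x - 0.3) ** 2) / 2e-4) + 0.5 * np.exp(-((x - 0.7) ** 2) / 2e-4)
        noisy = clean + np.random.default_rng(0).normal(0, 0.1, x.size)

        coeffs = pywt.wavedec(noisy, "db4", level=5)
        threshold = 0.3                                  # illustrative choice
        denoised_coeffs = [coeffs[0]] + [
            pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]
        ]
        denoised = pywt.waverec(denoised_coeffs, "db4")
        print("residual noise (std):", float(np.std(denoised[: x.size] - clean)))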

  10. Automated identification of social interaction criteria in Drosophila melanogaster.

    PubMed

    Schneider, J; Levine, J D

    2014-10-01

    The study of social behaviour within groups has relied on fixed definitions of an 'interaction'. Criteria used in these definitions often involve a subjectively defined cut-off value for proximity, orientation and time (e.g. courtship, aggression and social interaction networks) and the same numerical values for these criteria are applied to all of the treatment groups within an experiment. One universal definition of an interaction could misidentify interactions within groups that differ in life histories, study treatments and/or genetic mutations. Here, we present an automated method for determining the values of interaction criteria using a pre-defined rule set rather than pre-defined values. We use this approach and show changing social behaviours in different manipulations of Drosophila melanogaster. We also show that chemosensory cues are an important modality of social spacing and interaction. This method will allow a more robust analysis of the properties of interacting groups, while helping us understand how specific groups regulate their social interaction space. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  11. Automated pancreatic cyst screening using natural language processing: a new tool in the early detection of pancreatic cancer

    PubMed Central

    Roch, Alexandra M; Mehrabi, Saeed; Krishnan, Anand; Schmidt, Heidi E; Kesterson, Joseph; Beesley, Chris; Dexter, Paul R; Palakal, Mathew; Schmidt, C Max

    2015-01-01

    Introduction As many as 3% of computed tomography (CT) scans detect pancreatic cysts. Because pancreatic cysts are incidental, ubiquitous and poorly understood, follow-up is often not performed. Pancreatic cysts may have a significant malignant potential and their identification represents a ‘window of opportunity’ for the early detection of pancreatic cancer. The purpose of this study was to implement an automated Natural Language Processing (NLP)-based pancreatic cyst identification system. Method A multidisciplinary team was assembled. NLP-based identification algorithms were developed based on key words commonly used by physicians to describe pancreatic cysts and programmed for automated search of electronic medical records. A pilot study was conducted prospectively in a single institution. Results From March to September 2013, 566 233 reports belonging to 50 669 patients were analysed. The mean number of patients reported with a pancreatic cyst was 88/month (range 78–98). The mean sensitivity and specificity were 99.9% and 98.8%, respectively. Conclusion NLP is an effective tool to automatically identify patients with pancreatic cysts based on electronic medical records (EMR). This highly accurate system can help capture patients ‘at-risk’ of pancreatic cancer in a registry. PMID:25537257
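
    A minimal sketch of keyword-based report screening in the spirit of the approach described above; the keyword patterns, reports and gold labels are hypothetical, not the study's actual algorithm.

```python
# Minimal sketch of keyword-based report screening; the keyword list, reports,
# and gold labels below are hypothetical.
import re

CYST_PATTERNS = [
    r"pancreatic\s+cyst", r"cystic\s+lesion\s+(of|in)\s+the\s+pancreas",
    r"IPMN", r"intraductal\s+papillary\s+mucinous",
]
regex = re.compile("|".join(CYST_PATTERNS), flags=re.IGNORECASE)

def flags_cyst(report_text: str) -> bool:
    return regex.search(report_text) is not None

reports = [
    ("CT abdomen: 8 mm pancreatic cyst in the uncinate process.", True),
    ("No focal pancreatic lesion identified.", False),
    ("Findings compatible with side-branch IPMN.", True),
]

tp = sum(flags_cyst(t) and y for t, y in reports)
fp = sum(flags_cyst(t) and not y for t, y in reports)
fn = sum((not flags_cyst(t)) and y for t, y in reports)
tn = sum((not flags_cyst(t)) and not y for t, y in reports)
print(f"sensitivity={tp/(tp+fn):.2f}  specificity={tn/(tn+fp):.2f}")
```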

  12. Automated Space Processing Payloads Study. Volume 1: Executive Summary

    NASA Technical Reports Server (NTRS)

    1975-01-01

    An investigation is described which examined the extent to which the experiment hardware and operational requirements can be met by automatic control and material handling devices; payload and system concepts are defined which make extensive use of automation technology. Topics covered include experiment requirements and hardware data, capabilities and characteristics of industrial automation equipment and controls, payload grouping, automated payload conceptual design, space processing payload preliminary design, automated space processing payloads for early shuttle missions, and cost and scheduling.

  13. Automated reporting of pharmacokinetic study results: gaining efficiency downstream from the laboratory.

    PubMed

    Schaefer, Peter

    2011-07-01

    The purpose of bioanalysis in the pharmaceutical industry is to provide 'raw' data about the concentration of a drug candidate and its metabolites as input for pharmacokinetic (PK), toxicokinetic, bioavailability/bioequivalence and other studies of drug properties. Building a seamless workflow from the laboratory to final reports is an ongoing challenge for IT groups and users alike. In such a workflow, PK automation can provide companies with the means to vastly increase the productivity of their scientific staff while improving the quality and consistency of their reports on PK analyses. This report presents the concept and benefits of PK automation and discusses which features of an automated reporting workflow should be translated into software requirements that pharmaceutical companies can use to select or build an efficient and effective PK automation solution that best meets their needs.

  14. Human factors in cockpit automation: A field study of flight crew transition

    NASA Technical Reports Server (NTRS)

    Wiener, E. L.

    1985-01-01

    The factors which affected two groups of airline pilots in the transition from traditional airline cockpits to a highly automated version were studied. All pilots were highly experienced in traditional models of the McDonnell-Douglas DC-9 prior to their transition to the more automated DC-9-80. Specific features of the new aircraft, particularly the digital flight guidance system (DFGS) and other automatic features such as the autothrottle system (ATS), autobrake, and digital display were studied. Particular attention was paid to the first 200 hours of line flying experience in the new aircraft, and the difficulties that some pilots found in adapting to the new systems during this initial operating period. Efforts to prevent skill loss from automation, training methods, traditional human factors issues, and general views of the pilots toward cockpit automation are discussed.

  15. Performance of an Additional Task During Level 2 Automated Driving: An On-Road Study Comparing Drivers With and Without Experience With Partial Automation.

    PubMed

    Solís-Marcos, Ignacio; Ahlström, Christer; Kircher, Katja

    2018-05-01

    To investigate the influence of prior experience with Level 2 automation on additional task performance during manual and Level 2 partially automated driving. Level 2 automation is now on the market, but its effects on driver behavior remain unclear. Based on previous studies, we could expect an increase in drivers' engagement in secondary tasks during Level 2 automated driving, but it is yet unknown how drivers will integrate all the ongoing demands in such situations. Twenty-one drivers (12 without, 9 with Level 2 automation experience) drove on a highway manually and with Level 2 automation (exemplified by Volvo Pilot Assist generation 2; PA2) while performing an additional task. In half of the conditions, the task could be interrupted (self-paced), and in the other half, it could not (system-paced). Drivers' visual attention, additional task performance, and other compensatory strategies were analyzed. Driving with PA2 led to decreased scores in the additional task and more visual attention to the dashboard. In the self-paced condition, all drivers looked more to the task and perceived a lower mental demand. The drivers experienced with PA2 used the system and the task more than the novice group and performed more overtakings. The additional task interfered more with Level 2 automation than with manual driving. The drivers, particularly the automation novice drivers, used some compensatory strategies. Automation designers need to consider these potential effects in the development of future automated systems.

  16. Reduced Attention Allocation during Short Periods of Partially Automated Driving: An Event-Related Potentials Study

    PubMed Central

    Solís-Marcos, Ignacio; Galvao-Carmona, Alejandro; Kircher, Katja

    2017-01-01

    Research on partially automated driving has revealed relevant problems with driving performance, particularly when drivers’ intervention is required (e.g., take-over when automation fails). Mental fatigue has commonly been proposed to explain these effects after prolonged automated drives. However, performance problems have also been reported after just a few minutes of automated driving, indicating that other factors may also be involved. We hypothesize that, besides mental fatigue, an underload effect of partial automation may also affect driver attention. In this study, this potential effect was investigated during short periods of partially automated and manual driving and at different speeds. Subjective measures of mental demand and vigilance and performance on a secondary task (an auditory oddball task) were used to assess driver attention. Additionally, modulations of some specific attention-related event-related potentials (ERPs, N1 and P3 components) were investigated. The mental fatigue effects associated with the time on task were also evaluated by using the same measurements. Twenty participants drove in a fixed-base simulator while performing an auditory oddball task that elicited the ERPs. Six conditions were presented (5–6 min each) combining three speed levels (low, comfortable and high) and two automation levels (manual and partially automated). The results showed that, when driving partially automated, scores in subjective mental demand and P3 amplitudes were lower than in the manual conditions. Similarly, P3 amplitude and self-reported vigilance levels decreased with the time on task. Based on previous studies, these findings might reflect a reduction in drivers’ attention resource allocation, presumably due to the underload effects of partial automation and to the mental fatigue associated with the time on task. Particularly, such underload effects on attention could explain the performance decrements after short periods of automated driving.

  17. Detect and Avoid (DAA) Automation Maneuver Study

    DTIC Science & Technology

    2017-02-01

    The study described herein was an operator-in-the-loop assessment supporting the development of a Sense and Avoid (SAA) ... display that enables effective teaming of an Unmanned Aerial Systems (UAS) operator with an advanced SAA maneuver algorithm to safely avoid proximal ... air traffic. This study examined performance differences between candidate SAA display configurations and automation thresholds while UAS operators

  18. Costs to Automate Demand Response - Taxonomy and Results from Field Studies and Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piette, Mary A.; Schetrit, Oren; Kiliccote, Sila

    During the past decade, the technology to automate demand response (DR) in buildings and industrial facilities has advanced significantly. Automation allows rapid, repeatable, reliable operation. This study focuses on costs for DR automation in commercial buildings with some discussion on residential buildings and industrial facilities. DR automation technology relies on numerous components, including communication systems, hardware and software gateways, standards-based messaging protocols, controls and integration platforms, and measurement and telemetry systems. This report compares cost data from several DR automation programs and pilot projects, evaluates trends in the cost per unit of DR and kilowatts (kW) available from automated systems, and applies a standard naming convention and classification or taxonomy for system elements. Median costs for the 56 installed automated DR systems studied here are about $200/kW. The deviation around this median is large, with costs in some cases an order of magnitude greater or smaller than the median. This wide range is a result of variations in system age, size of load reduction, sophistication, and type of equipment included in the cost analysis. The costs to automate fast DR systems for ancillary services are not fully analyzed in this report because additional research is needed to determine the total cost to install, operate, and maintain these systems. However, recent research suggests that they could be developed at costs similar to those of existing hot-summer DR automation systems. This report considers installation and configuration costs and does not include the costs of owning and operating DR automation systems. Future analysis of the latter costs should include the costs to the building or facility manager as well as to the utility or third-party program manager.

  19. A comparative study of quantitative immunohistochemistry and quantum dot immunohistochemistry for mutation carrier identification in Lynch syndrome.

    PubMed

    Barrow, Emma; Evans, D Gareth; McMahon, Ray; Hill, James; Byers, Richard

    2011-03-01

    Lynch Syndrome is caused by mutations in DNA mismatch repair (MMR) genes. Mutation carrier identification is facilitated by immunohistochemical detection of the MMR proteins MLH1 and MSH2 in tumour tissue and is desirable as colonoscopic screening reduces mortality. However, protein detection by conventional immunohistochemistry (IHC) is subjective, and quantitative techniques are required. Quantum dots (QDs) are novel fluorescent labels that enable quantitative multiplex staining. This study compared their use with quantitative 3,3'-diaminobenzidine (DAB) IHC for the diagnosis of Lynch Syndrome. Tumour sections from 36 mutation carriers and six controls were obtained. These were stained with DAB on an automated platform using antibodies against MLH1 and MSH2. Multiplex QD immunofluorescent staining of the sections was performed using antibodies against MLH1, MSH2 and smooth muscle actin (SMA). Multispectral analysis of the slides was performed. The staining intensity of DAB and QDs was measured in multiple colonic crypts, and the mean intensity scores calculated. Receiver operating characteristic (ROC) curves of staining performance for the identification of mutation carriers were evaluated. For quantitative DAB IHC, the area under the MLH1 ROC curve was 0.872 (95% CI 0.763 to 0.981), and the area under the MSH2 ROC curve was 0.832 (95% CI 0.704 to 0.960). For quantitative QD IHC, the area under the MLH1 ROC curve was 0.812 (95% CI 0.681 to 0.943), and the area under the MSH2 ROC curve was 0.598 (95% CI 0.418 to 0.777). Despite the advantage of QD staining to enable several markers to be measured simultaneously, it is of lower utility than DAB IHC for the identification of MMR mutation carriers. Automated DAB IHC staining and quantitative slide analysis may enable high-throughput IHC.
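
    A short sketch of the ROC analysis reported above: mean staining-intensity scores are scored against mutation-carrier status with the area under the ROC curve. The intensity values and labels are hypothetical, and scikit-learn is assumed to be available.

```python
# Sketch of the ROC analysis: mean staining-intensity scores versus mutation-carrier
# status. Scores and labels are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

carrier = np.array([1, 1, 1, 1, 0, 0, 0, 1, 0, 1])                   # 1 = MMR mutation carrier
mlh1_intensity = np.array([22, 30, 25, 28, 80, 75, 90, 35, 70, 27])  # arbitrary units

# Carriers are expected to show *lower* MLH1 staining, so score with the negated intensity.
auc = roc_auc_score(carrier, -mlh1_intensity)
print(f"area under the MLH1 ROC curve: {auc:.3f}")
```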

  20. Automated Hazard Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riddle, F. J.

    2003-06-26

    The Automated Hazard Analysis (AHA) application is a software tool used to conduct job hazard screening and analysis of tasks to be performed in Savannah River Site facilities. The AHA application provides a systematic approach to the assessment of safety and environmental hazards associated with specific tasks, and the identification of controls, regulations, and other requirements needed to perform those tasks safely. AHA is to be integrated into existing Savannah River Site work control and job hazard analysis processes. Utilization of AHA will improve the consistency and completeness of hazard screening and analysis, and increase the effectiveness of the work planning process.

  1. The role of automated feedback in training and retaining biological recorders for citizen science.

    PubMed

    van der Wal, René; Sharma, Nirwan; Mellish, Chris; Robinson, Annie; Siddharthan, Advaith

    2016-06-01

    The rapid rise of citizen science, with lay people forming often extensive biodiversity sensor networks, is seen as a solution to the mismatch between data demand and supply while simultaneously engaging citizens with environmental topics. However, citizen science recording schemes require careful consideration of how to motivate, train, and retain volunteers. We evaluated a novel computing science framework that allowed for the automated generation of feedback to citizen scientists using natural language generation (NLG) technology. We worked with a photo-based citizen science program in which users also volunteer species identification aided by an online key. Feedback is provided after photo (and identification) submission and is aimed to improve volunteer species identification skills and to enhance volunteer experience and retention. To assess the utility of NLG feedback, we conducted two experiments with novices to assess short-term (single session) and longer-term (5 sessions in 2 months) learning, respectively. Participants identified a specimen in a series of photos. One group received only the correct answer after each identification, and the other group received the correct answer and NLG feedback explaining reasons for misidentification and highlighting key features that facilitate correct identification. We then developed an identification training tool with NLG feedback as part of the citizen science program BeeWatch and analyzed learning by users. Finally, we implemented NLG feedback in the live program and evaluated this by randomly allocating all BeeWatch users to treatment groups that received different types of feedback upon identification submission. After 6 months separate surveys were sent out to assess whether views on the citizen science program and its feedback differed among the groups. Identification accuracy and retention of novices were higher for those who received automated feedback than for those who received only confirmation of the correct answer.

  2. Automated Identification of the Heart Wall Throughout the Entire Cardiac Cycle Using Optimal Cardiac Phase for Extracted Features

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi

    2011-07-01

    In most methods for evaluation of cardiac function based on echocardiography, the heart wall is currently identified manually by an operator. However, this task is very time-consuming and suffers from inter- and intraobserver variability. The present paper proposes a method that uses multiple features of ultrasonic echo signals for automated identification of the heart wall region throughout an entire cardiac cycle. In addition, the optimal cardiac phase to select a frame of interest, i.e., the frame for the initiation of tracking, was determined. The heart wall region at the frame of interest in this cardiac phase was identified by the expectation-maximization (EM) algorithm, and heart wall regions in the following frames were identified by tracking each point classified in the initial frame as the heart wall region using the phased tracking method. The results for two subjects indicate the feasibility of the proposed method in the longitudinal axis view of the heart.

  3. Designing automation for human use: empirical studies and quantitative models.

    PubMed

    Parasuraman, R

    2000-07-01

    An emerging knowledge base of human performance research can provide guidelines for designing automation that can be used effectively by human operators of complex systems. Which functions should be automated and to what extent in a given system? A model for types and levels of automation that provides a framework and an objective basis for making such choices is described. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design when using the model. Four human performance areas are considered--mental workload, situation awareness, complacency and skill degradation. Secondary evaluative criteria include such factors as automation reliability, the risks of decision/action consequences and the ease of systems integration. In addition to this qualitative approach, quantitative models can inform design. Several computational and formal models of human interaction with automation that have been proposed by various researchers are reviewed. An important future research need is the integration of qualitative and quantitative approaches. Application of these models provides an objective basis for designing automation for effective human use.

  4. Influence of Cultural, Organizational, and Automation Capability on Human Automation Trust: A Case Study of Auto-GCAS Experimental Test Pilots

    NASA Technical Reports Server (NTRS)

    Koltai, Kolina; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Cacanindin, Artemio; Johnson, Walter; Lyons, Joseph

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational and automation capability upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. Factors identified include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among organizations involved in the system development.

  5. Comparison of manual versus automated data collection method for an evidence-based nursing practice study.

    PubMed

    Byrne, M D; Jordan, T R; Welle, T

    2013-01-01

    The objective of this study was to investigate and improve the use of automated data collection procedures for nursing research and quality assurance. A descriptive, correlational study analyzed 44 orthopedic surgical patients who were part of an evidence-based practice (EBP) project examining post-operative oxygen therapy at a Midwestern hospital. The automation work attempted to replicate a manually-collected data set from the EBP project. Automation was successful in replicating data collection for study data elements that were available in the clinical data repository. The automation procedures identified 32 "false negative" patients who met the inclusion criteria described in the EBP project but were not selected during the manual data collection. Automating data collection for certain data elements, such as oxygen saturation, proved challenging because of workflow and practice variations and the reliance on disparate sources for data abstraction. Automation also revealed instances of human error including computational and transcription errors as well as incomplete selection of eligible patients. Automated data collection for analysis of nursing-specific phenomenon is potentially superior to manual data collection methods. Creation of automated reports and analysis may require initial up-front investment with collaboration between clinicians, researchers and information technology specialists who can manage the ambiguities and challenges of research and quality assurance work in healthcare.

  6. [Deconvolution of overlapped peaks in total ion chromatogram of essential oil from citri reticulatae pericarpium viride by automated mass spectral deconvolution & identification system].

    PubMed

    Wang, Jian; Chen, Hong-Ping; Liu, You-Ping; Wei, Zheng; Liu, Rong; Fan, Dan-Qing

    2013-05-01

    This experiment shows how to use the automated mass spectral deconvolution & identification system (AMDIS) to deconvolve overlapped peaks in the total ion chromatogram (TIC) of volatile oil from Chinese materia medica (CMM). The essential oil was obtained by steam distillation, its TIC was acquired by GC-MS, and the superimposed peaks in the TIC were deconvolved by AMDIS. First, AMDIS can detect the number of components in the TIC through its run function. Then, by comparing the extracted spectrum at the scan point of each detected component with the original spectrum at that scan point, and with their counterparts in the reference MS library, researchers can accurately confirm a component's structure or rule out compounds that do not occur in nature. Furthermore, by examining the variability of the characteristic fragment ion peaks of identified compounds, the previous outcome can be confirmed again. The results demonstrated that AMDIS could efficiently deconvolve the overlapped peaks in the TIC by extracting the spectrum at the matching scan point of each discerned component, leading to exact identification of the component's structure.

  7. Hot spot detection, segmentation, and identification in PET images

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Meetz, Kirsten

    2006-03-01

    Positron Emission Tomography (PET) images provide functional or metabolic information from areas of high concentration of [18F]fluorodeoxyglucose (FDG) tracer, the "hot spots". These hot spots can be easily detected by the eye, but delineation and size determination required e.g. for diagnosis and staging of cancer is a tedious task that calls for automation. The approach for such an automated hot spot segmentation described in this paper comprises three steps: A region of interest detection by the watershed transform, a heart identification by an evaluation of scan lines, and the final segmentation of hot spot areas by a local threshold. The region of interest detection is the essential step, since it localizes the hot spot identification and the final segmentation. The heart identification is an example of how to differentiate between hot spots. Finally, we demonstrate the combination of PET and CT data. Our method is applicable to other techniques like SPECT.
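
    A generic sketch of the first and third steps (region-of-interest detection with a watershed on the inverted intensity, then a local threshold within each region) on a synthetic 2-D image. This is not the authors' exact pipeline; the heart-identification step is omitted, and the seed and threshold values are assumptions. SciPy and scikit-image are assumed to be available.

```python
# Generic sketch: marker-based watershed to get regions of interest around intensity
# maxima, then a local threshold per region.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Synthetic "PET slice": two Gaussian hot spots on a warm background.
yy, xx = np.mgrid[0:128, 0:128]
img = (np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / 60.0)
       + 0.7 * np.exp(-((yy - 90) ** 2 + (xx - 85) ** 2) / 40.0)
       + 0.05)

seeds = img > 0.5                                     # crude seed detection; threshold is an assumption
markers, n_spots = ndi.label(seeds)
regions = watershed(-img, markers, mask=img > 0.1)    # ROI detection on inverted intensity

hot_spots = np.zeros_like(regions)
for label in range(1, n_spots + 1):
    region = regions == label
    local_thr = 0.5 * img[region].max()               # e.g. 50% of the regional maximum
    hot_spots[region & (img >= local_thr)] = label
print(f"detected {n_spots} hot spots, segmented {int((hot_spots > 0).sum())} pixels")
```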

  8. Trial Prospector: Matching Patients with Cancer Research Studies Using an Automated and Scalable Approach

    PubMed Central

    Sahoo, Satya S; Tao, Shiqiang; Parchman, Andrew; Luo, Zhihui; Cui, Licong; Mergler, Patrick; Lanese, Robert; Barnholtz-Sloan, Jill S; Meropol, Neal J; Zhang, Guo-Qiang

    2014-01-01

    Cancer is responsible for approximately 7.6 million deaths per year worldwide. A 2012 survey in the United Kingdom found dramatic improvement in survival rates for childhood cancer because of increased participation in clinical trials. Unfortunately, overall patient participation in cancer clinical studies is low. A key logistical barrier to patient and physician participation is the time required for identification of appropriate clinical trials for individual patients. We introduce the Trial Prospector tool that supports end-to-end management of cancer clinical trial recruitment workflow with (a) structured entry of trial eligibility criteria, (b) automated extraction of patient data from multiple sources, (c) a scalable matching algorithm, and (d) interactive user interface (UI) for physicians with both matching results and a detailed explanation of causes for ineligibility of available trials. We report the results from deployment of Trial Prospector at the National Cancer Institute (NCI)-designated Case Comprehensive Cancer Center (Case CCC) with 1,367 clinical trial eligibility evaluations performed with 100% accuracy. PMID:25506198
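
    A minimal sketch in the spirit of structured eligibility matching with explanations for ineligibility; the criteria fields, trial definitions and patient record below are hypothetical, not Trial Prospector's actual schema.

```python
# Minimal sketch of structured eligibility matching with reasons for ineligibility.
TRIALS = [
    {"id": "CCC-001", "min_age": 18, "diagnosis": {"colon adenocarcinoma"},
     "max_creatinine": 1.5},
    {"id": "CCC-017", "min_age": 18, "diagnosis": {"pancreatic adenocarcinoma"},
     "max_creatinine": 2.0},
]

def evaluate(patient, trial):
    """Return a list of reasons for ineligibility (an empty list means eligible)."""
    reasons = []
    if patient["age"] < trial["min_age"]:
        reasons.append("below minimum age")
    if patient["diagnosis"] not in trial["diagnosis"]:
        reasons.append("diagnosis does not match")
    if patient["creatinine"] > trial["max_creatinine"]:
        reasons.append("creatinine above threshold")
    return reasons

patient = {"age": 61, "diagnosis": "colon adenocarcinoma", "creatinine": 1.2}
for trial in TRIALS:
    reasons = evaluate(patient, trial)
    status = "ELIGIBLE" if not reasons else "ineligible: " + "; ".join(reasons)
    print(f"{trial['id']}: {status}")
```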

  9. Automated identification of protein-ligand interaction features using Inductive Logic Programming: a hexose binding case study.

    PubMed

    A Santos, Jose C; Nassif, Houssam; Page, David; Muggleton, Stephen H; E Sternberg, Michael J

    2012-07-11

    There is a need for automated methods to learn general features of the interactions of a ligand class with its diverse set of protein receptors. An appropriate machine learning approach is Inductive Logic Programming (ILP), which automatically generates comprehensible rules in addition to prediction. The development of ILP systems which can learn rules of the complexity required for studies on protein structure remains a challenge. In this work we use a new ILP system, ProGolem, and demonstrate its performance on learning features of hexose-protein interactions. The rules induced by ProGolem detect interactions mediated by aromatics and by planar-polar residues, in addition to less common features such as the aromatic sandwich. The rules also reveal a previously unreported dependency for residues cys and leu. They also specify interactions involving aromatic and hydrogen bonding residues. This paper shows that Inductive Logic Programming implemented in ProGolem can derive rules giving structural features of protein/ligand interactions. Several of these rules are consistent with descriptions in the literature. In addition to confirming literature results, ProGolem's model has a 10-fold cross-validated predictive accuracy that is superior, at the 95% confidence level, to another ILP system previously used to study protein/hexose interactions and is comparable with state-of-the-art statistical learners.

  10. Model Identification of Integrated ARMA Processes

    ERIC Educational Resources Information Center

    Stadnytska, Tetiana; Braun, Simone; Werner, Joachim

    2008-01-01

    This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…

  11. Systems Operations Studies for Automated Guideway Transit Systems : System Availability Model Programmer's Manual

    DOT National Transportation Integrated Search

    1981-07-01

    In order to examine specific automated guideway transit (AGT) developments and concepts, UMTA undertook a program of studies and technology investigations called Automated Guideway Transit Technology (AGTT) Program. The objectives of one segment of t...

  12. Distribution of different yeasts isolates among trauma patients and comparison of accuracy in identification of yeasts by automated method versus conventional methods for better use in low resource countries.

    PubMed

    Rajkumari, N; Mathur, P; Xess, I; Misra, M C

    2014-01-01

    As most trauma patients require long-term hospital stay and long-term antibiotic therapy, the risk of fungal infections in such patients is steadily increasing. Early diagnosis and rapid treatment are life saving in such critically ill trauma patients. The aims were to determine the distribution of various Candida species among trauma patients and to compare the accuracy, speed of identification and cost-effectiveness of VITEK 2, CHROMagar and conventional methods. This retrospective laboratory-based surveillance study was performed over a period of 52 months (January 2009 to April 2013) at a level I trauma centre in New Delhi, India. All microbiological samples positive for Candida were processed for microbial identification using standard methods. Identification of Candida was done using chromogenic medium and by the automated VITEK 2 Compact system, and later confirmed using the conventional method. Time to identification was noted for both and accuracy was compared with the conventional method. Analyses were performed using the SPSS software for Windows (SPSS Inc., Chicago, IL, version 15.0), with P values calculated using the χ2 test for categorical variables; P<0.05 was considered significant. Out of 445 yeast isolates, Candida tropicalis (217, 49%) was the most frequently isolated species. VITEK 2 correctly identified 354 (79.5%) isolates, could not identify 48 (10.7%) isolates, and wrongly identified or showed low discrimination for 43 (9.6%) isolates, whereas CHROMagar correctly identified 381 (85.6%) isolates, with 64 (14.4%) misidentifications. The highest rate of misidentification was seen in C. tropicalis and C. glabrata (13, 27.1% each) by VITEK 2 and in C. albicans (9, 14%) by CHROMagar. Though CHROMagar provides identification at a lower cost than VITEK 2 and is more accurate, which is useful in low-resource countries, its main drawback is the long time required for complete identification.
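
    The kind of categorical comparison reported above can be reproduced with a chi-square test on correct versus incorrect identifications; the sketch below uses the counts quoted in the abstract and assumes SciPy is available.

```python
# Chi-square comparison of correct vs. incorrect identifications by two methods,
# using the counts quoted in the abstract (out of 445 isolates).
from scipy.stats import chi2_contingency

#            correct, not correct
vitek2    = [354, 445 - 354]
chromagar = [381, 445 - 381]

chi2, p, dof, expected = chi2_contingency([vitek2, chromagar])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```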

  13. An automated approach for annual layer counting in ice cores

    NASA Astrophysics Data System (ADS)

    Winstrup, M.; Svensson, A.; Rasmussen, S. O.; Winther, O.; Steig, E.; Axelrod, A.

    2012-04-01

    The temporal resolution of some ice cores is sufficient to preserve seasonal information in the ice core record. In such cases, annual layer counting represents one of the most accurate methods to produce a chronology for the core. Yet, manual layer counting is a tedious and sometimes ambiguous job. As reliable layer recognition becomes more difficult, a manual approach increasingly relies on human interpretation of the available data. Thus, much may be gained by an automated and therefore objective approach for annual layer identification in ice cores. We have developed a novel method for automated annual layer counting in ice cores, which relies on Bayesian statistics. It uses algorithms from the statistical framework of Hidden Markov Models (HMM), originally developed for use in machine speech recognition. The strength of this layer detection algorithm lies in the way it is able to imitate the manual procedures for annual layer counting, while being based on purely objective criteria for annual layer identification. With this methodology, it is possible to determine the most likely position of multiple layer boundaries in an entire section of ice core data at once. It provides a probabilistic uncertainty estimate of the resulting layer count, hence ensuring a proper treatment of ambiguous layer boundaries in the data. Furthermore, multiple data series can be incorporated and used at once, allowing for a full multi-parameter annual layer counting method similar to a manual approach. In this study, the automated layer counting algorithm has been applied to data from the NGRIP ice core, Greenland. The NGRIP ice core has very high temporal resolution with depth, and hence the potential to be dated by annual layer counting far back in time. In previous studies [Andersen et al., 2006; Svensson et al., 2008], manual layer counting has been carried out back to 60 kyr BP. A comparison between the counted annual layers based on the two approaches will be presented.
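
    A toy illustration of the HMM idea (the published Bayesian algorithm is considerably richer): a seasonal signal is decoded with a two-state Viterbi pass and one layer is counted per winter-to-summer transition. The transition and emission probabilities are illustrative assumptions.

```python
# Toy illustration: decode a seasonal signal with two hidden states and count one
# annual layer per winter-to-summer transition.
import numpy as np

def viterbi(obs, log_start, log_trans, log_emit):
    """Most likely state path for discretised observations obs (integers)."""
    n_states = log_start.size
    delta = log_start + log_emit[:, obs[0]]
    back = np.zeros((len(obs), n_states), dtype=int)
    for t in range(1, len(obs)):
        scores = delta[:, None] + log_trans          # previous state x next state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(len(obs) - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# States: 0 = winter, 1 = summer; observations: 0 = low signal, 1 = high signal.
log_start = np.log([0.5, 0.5])
log_trans = np.log([[0.9, 0.1], [0.1, 0.9]])
log_emit  = np.log([[0.8, 0.2], [0.2, 0.8]])

signal = np.array([0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0])
states = viterbi(signal, log_start, log_trans, log_emit)
layers = sum(1 for a, b in zip(states, states[1:]) if a == 0 and b == 1)
print(f"decoded path: {states}\nannual layers counted: {layers}")
```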

  14. Comparison of Manual Versus Automated Data Collection Method for an Evidence-Based Nursing Practice Study

    PubMed Central

    Byrne, M.D.; Jordan, T.R.; Welle, T.

    2013-01-01

    Objective The objective of this study was to investigate and improve the use of automated data collection procedures for nursing research and quality assurance. Methods A descriptive, correlational study analyzed 44 orthopedic surgical patients who were part of an evidence-based practice (EBP) project examining post-operative oxygen therapy at a Midwestern hospital. The automation work attempted to replicate a manually-collected data set from the EBP project. Results Automation was successful in replicating data collection for study data elements that were available in the clinical data repository. The automation procedures identified 32 “false negative” patients who met the inclusion criteria described in the EBP project but were not selected during the manual data collection. Automating data collection for certain data elements, such as oxygen saturation, proved challenging because of workflow and practice variations and the reliance on disparate sources for data abstraction. Automation also revealed instances of human error including computational and transcription errors as well as incomplete selection of eligible patients. Conclusion Automated data collection for analysis of nursing-specific phenomenon is potentially superior to manual data collection methods. Creation of automated reports and analysis may require initial up-front investment with collaboration between clinicians, researchers and information technology specialists who can manage the ambiguities and challenges of research and quality assurance work in healthcare. PMID:23650488

  15. Automated retina identification based on multiscale elastic registration.

    PubMed

    Figueiredo, Isabel N; Moura, Susana; Neves, Júlio S; Pinto, Luís; Kumar, Sunil; Oliveira, Carlos M; Ramos, João D

    2016-12-01

    In this work we propose a novel method for identifying individuals based on retinal fundus image matching. The method is based on the image registration of retina blood vessels, since it is known that the retina vasculature of an individual is a signature, i.e., a distinctive pattern of the individual. The proposed image registration consists of a multiscale affine registration followed by a multiscale elastic registration. The major advantage of this particular two-step image registration procedure is that it is able to account for both rigid and non-rigid deformations either inherent to the retina tissues or as a result of the imaging process itself. Afterwards a decision identification measure, relying on a suitable normalized function, is defined to decide whether or not the pair of images belongs to the same individual. The method is tested on a data set of 21721 real pairs generated from a total of 946 retinal fundus images of 339 different individuals, consisting of patients followed in the context of different retinal diseases and also healthy patients. The evaluation of its performance reveals that it achieves a very low false rejection rate (FRR) at zero FAR (the false acceptance rate), equal to 0.084, as well as a low equal error rate (EER), equal to 0.053. Moreover, the tests performed by using only the multiscale affine registration, and discarding the multiscale elastic registration, clearly show the advantage of the proposed approach. The outcome of this study also indicates that the proposed method is reliable and competitive with other existing retinal identification methods, and supports its future applicability in real-life settings. Copyright © 2016 Elsevier Ltd. All rights reserved.
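
    A short sketch of how the verification metrics quoted above (FRR at zero FAR, equal error rate) can be computed from genuine and impostor match scores; the score distributions below are synthetic.

```python
# Verification metrics (FRR at zero FAR, approximate EER) from synthetic
# genuine/impostor match scores.
import numpy as np

rng = np.random.default_rng(1)
genuine = rng.normal(0.8, 0.1, 500)    # same-person match scores (synthetic)
impostor = rng.normal(0.4, 0.1, 5000)  # different-person match scores (synthetic)

thresholds = np.linspace(0.0, 1.0, 1001)
far = np.array([(impostor >= t).mean() for t in thresholds])  # false acceptance rate
frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejection rate

frr_at_zero_far = frr[far == 0].min()          # best FRR among thresholds with FAR == 0
eer_index = np.argmin(np.abs(far - frr))
print(f"FRR at zero FAR: {frr_at_zero_far:.3f}, approximate EER: {far[eer_index]:.3f}")
```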

  16. Space station automation and robotics study. Operator-systems interface

    NASA Technical Reports Server (NTRS)

    1984-01-01

    This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator system interface (OSI), establish the technologies needed to meet these requirements, and to forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.

  17. Automatic poisson peak harvesting for high throughput protein identification.

    PubMed

    Breen, E J; Hopwood, F G; Williams, K L; Wilkins, M R

    2000-06-01

    High throughput identification of proteins by peptide mass fingerprinting requires an efficient means of picking peaks from mass spectra. Here, we report the development of a peak harvester to automatically pick monoisotopic peaks from spectra generated on matrix-assisted laser desorption/ionisation time of flight (MALDI-TOF) mass spectrometers. The peak harvester uses advanced mathematical morphology and watershed algorithms to first process spectra to stick representations. Subsequently, Poisson modelling is applied to determine which peak in an isotopically resolved group represents the monoisotopic mass of a peptide. We illustrate the features of the peak harvester with mass spectra of standard peptides, digests of gel-separated bovine serum albumin, and with Escherichia coli proteins prepared by two-dimensional polyacrylamide gel electrophoresis. In all cases, the peak harvester proved effective in its ability to pick similar monoisotopic peaks as an experienced human operator, and also proved effective in the identification of monoisotopic masses in cases where isotopic distributions of peptides were overlapping. The peak harvester can be operated in an interactive mode, or can be completely automated and linked through to peptide mass fingerprinting protein identification tools to achieve high throughput automated protein identification.
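
    A rough sketch of the Poisson-modelling step: an isotope cluster is scored against the isotope abundances predicted by a Poisson distribution whose rate grows with peptide mass, and the best-fitting offset is taken as the monoisotopic peak. The rate constant and intensities below are illustrative assumptions, not the published parameters.

```python
# Score an isotopically resolved peak cluster against a Poisson isotope model and
# pick the monoisotopic peak.
import numpy as np
from scipy.stats import poisson

def predicted_isotope_pattern(mono_mass, n_isotopes=5, rate_per_da=6.0e-4):
    """Relative abundances of the first n isotope peaks under a Poisson model."""
    lam = rate_per_da * mono_mass            # rate constant is an illustrative assumption
    pattern = poisson.pmf(np.arange(n_isotopes), lam)
    return pattern / pattern.sum()

def monoisotopic_index(cluster_intensities, candidate_mass):
    """Pick the offset whose trailing intensities best match the predicted pattern."""
    best, best_err = 0, np.inf
    for start in range(len(cluster_intensities) - 2):
        observed = np.asarray(cluster_intensities[start:], dtype=float)
        observed = observed / observed.sum()
        predicted = predicted_isotope_pattern(candidate_mass, n_isotopes=len(observed))
        err = np.sum((observed - predicted) ** 2)
        if err < best_err:
            best, best_err = start, err
    return best

cluster = [3.0, 100.0, 62.0, 22.0, 6.0]   # hypothetical intensities; index 0 is a noise peak
print("monoisotopic peak index:", monoisotopic_index(cluster, candidate_mass=1500.0))
```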

  18. Detection and identification of drugs and toxicants in human body fluids by liquid chromatography-tandem mass spectrometry under data-dependent acquisition control and automated database search.

    PubMed

    Oberacher, Herbert; Schubert, Birthe; Libiseller, Kathrin; Schweissgut, Anna

    2013-04-03

    Systematic toxicological analysis (STA) is aimed at detecting and identifying all substances of toxicological relevance (i.e. drugs, drugs of abuse, poisons and/or their metabolites) in biological material. Particularly, gas chromatography-mass spectrometry (GC/MS) represents a competent and commonly applied screening and confirmation tool. Herein, we present an untargeted liquid chromatography-tandem mass spectrometry (LC/MS/MS) assay aimed to complement existing GC/MS screening for the detection and identification of drugs in blood, plasma and urine samples. Solid-phase extraction was accomplished on mixed-mode cartridges. LC was based on gradient elution in a miniaturized C18 column. High resolution electrospray ionization-MS/MS in positive ion mode with data-dependent acquisition control was used to generate tandem mass spectral information that enabled compound identification via automated library search in the "Wiley Registry of Tandem Mass Spectral Data, MSforID". Fitness of the developed LC/MS/MS method for application in STA in terms of selectivity, detection capability and reliability of identification (sensitivity/specificity) was demonstrated with blank samples, certified reference materials, proficiency test samples, and authentic casework samples. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Automated Docking Screens: A Feasibility Study

    PubMed Central

    2009-01-01

    Molecular docking is the most practical approach to leverage protein structure for ligand discovery, but the technique retains important liabilities that make it challenging to deploy on a large scale. We have therefore created an expert system, DOCK Blaster, to investigate the feasibility of full automation. The method requires a PDB code, sometimes with a ligand structure, and from that alone can launch a full screen of large libraries. A critical feature is self-assessment, which estimates the anticipated reliability of the automated screening results using pose fidelity and enrichment. Against common benchmarks, DOCK Blaster recapitulates the crystal ligand pose within 2 Å rmsd 50−60% of the time; inferior to an expert, but respectable. Half the time the ligand also ranked among the top 5% of 100 physically matched decoys chosen on the fly. Further tests were undertaken culminating in a study of 7755 eligible PDB structures. In 1398 cases, the redocked ligand ranked in the top 5% of 100 property-matched decoys while also posing within 2 Å rmsd, suggesting that unsupervised prospective docking is viable. DOCK Blaster is available at http://blaster.docking.org. PMID:19719084

  20. Automated docking screens: a feasibility study.

    PubMed

    Irwin, John J; Shoichet, Brian K; Mysinger, Michael M; Huang, Niu; Colizzi, Francesco; Wassam, Pascal; Cao, Yiqun

    2009-09-24

    Molecular docking is the most practical approach to leverage protein structure for ligand discovery, but the technique retains important liabilities that make it challenging to deploy on a large scale. We have therefore created an expert system, DOCK Blaster, to investigate the feasibility of full automation. The method requires a PDB code, sometimes with a ligand structure, and from that alone can launch a full screen of large libraries. A critical feature is self-assessment, which estimates the anticipated reliability of the automated screening results using pose fidelity and enrichment. Against common benchmarks, DOCK Blaster recapitulates the crystal ligand pose within 2 Å rmsd 50-60% of the time; inferior to an expert, but respectable. Half the time the ligand also ranked among the top 5% of 100 physically matched decoys chosen on the fly. Further tests were undertaken culminating in a study of 7755 eligible PDB structures. In 1398 cases, the redocked ligand ranked in the top 5% of 100 property-matched decoys while also posing within 2 Å rmsd, suggesting that unsupervised prospective docking is viable. DOCK Blaster is available at http://blaster.docking.org .

  1. An automated method of quantifying ferrite microstructures using electron backscatter diffraction (EBSD) data.

    PubMed

    Shrestha, Sachin L; Breen, Andrew J; Trimby, Patrick; Proust, Gwénaëlle; Ringer, Simon P; Cairney, Julie M

    2014-02-01

    The identification and quantification of the different ferrite microconstituents in steels has long been a major challenge for metallurgists. Manual point counting from images obtained by optical and scanning electron microscopy (SEM) is commonly used for this purpose. While classification systems exist, the complexity of steel microstructures means that identifying and quantifying these phases is still a great challenge. Moreover, point counting is extremely tedious, time consuming, and subject to operator bias. This paper presents a new automated identification and quantification technique for the characterisation of complex ferrite microstructures by electron backscatter diffraction (EBSD). This technique takes advantage of the fact that different classes of ferrite exhibit preferential grain boundary misorientations, aspect ratios and mean misorientation, all of which can be detected using current EBSD software. These characteristics are set as criteria for identification and linked to grain size to determine the area fractions. The results of this method were evaluated by comparing the new automated technique with point counting results. The technique could easily be applied to a range of other steel microstructures. © 2013 Published by Elsevier B.V.
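
    A toy illustration of the rule-based idea described above: grains characterised by aspect ratio, mean misorientation and boundary character are assigned to classes and linked to grain area to give area fractions. The thresholds, feature names and class labels are assumptions for demonstration only, not the published criteria.

```python
# Illustrative rule-based classification of ferrite grains from EBSD-derived features;
# real feature values would come from the EBSD software's grain export.
from dataclasses import dataclass

@dataclass
class Grain:
    area_um2: float
    aspect_ratio: float
    mean_misorientation_deg: float      # grain-average misorientation
    high_angle_boundary_frac: float     # fraction of boundary above 15 degrees

def classify(grain: Grain) -> str:
    # Thresholds below are hypothetical, for demonstration only.
    if grain.aspect_ratio > 3.0 and grain.mean_misorientation_deg > 1.0:
        return "acicular/bainitic ferrite"
    if grain.high_angle_boundary_frac > 0.8 and grain.mean_misorientation_deg < 0.5:
        return "polygonal ferrite"
    return "unclassified"

grains = [
    Grain(120.0, 1.4, 0.3, 0.9),
    Grain(35.0, 4.2, 1.6, 0.4),
]
areas = {}
for g in grains:
    label = classify(g)
    areas[label] = areas.get(label, 0.0) + g.area_um2
total = sum(areas.values())
print({k: f"{100 * v / total:.1f}%" for k, v in areas.items()})
```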

  2. Automated detection system of single nucleotide polymorphisms using two kinds of functional magnetic nanoparticles

    NASA Astrophysics Data System (ADS)

    Liu, Hongna; Li, Song; Wang, Zhifei; Li, Zhiyang; Deng, Yan; Wang, Hua; Shi, Zhiyang; He, Nongyue

    2008-11-01

    Single nucleotide polymorphisms (SNPs) comprise the most abundant source of genetic variation in the human genome wide codominant SNPs identification. Therefore, large-scale codominant SNPs identification, especially for those associated with complex diseases, has induced the need for completely high-throughput and automated SNP genotyping method. Herein, we present an automated detection system of SNPs based on two kinds of functional magnetic nanoparticles (MNPs) and dual-color hybridization. The amido-modified MNPs (NH 2-MNPs) modified with APTES were used for DNA extraction from whole blood directly by electrostatic reaction, and followed by PCR, was successfully performed. Furthermore, biotinylated PCR products were captured on the streptavidin-coated MNPs (SA-MNPs) and interrogated by hybridization with a pair of dual-color probes to determine SNP, then the genotype of each sample can be simultaneously identified by scanning the microarray printed with the denatured fluorescent probes. This system provided a rapid, sensitive and highly versatile automated procedure that will greatly facilitate the analysis of different known SNPs in human genome.

  3. Natural language processing of clinical notes for identification of critical limb ischemia.

    PubMed

    Afzal, Naveed; Mallipeddi, Vishnu Priya; Sohn, Sunghwan; Liu, Hongfang; Chaudhry, Rajeev; Scott, Christopher G; Kullo, Iftikhar J; Arruda-Olson, Adelaide M

    2018-03-01

    Critical limb ischemia (CLI) is a complication of advanced peripheral artery disease (PAD) with diagnosis based on the presence of clinical signs and symptoms. However, automated identification of cases from electronic health records (EHRs) is challenging due to absence of a single definitive International Classification of Diseases (ICD-9 or ICD-10) code for CLI. In this study, we extend a previously validated natural language processing (NLP) algorithm for PAD identification to develop and validate a subphenotyping NLP algorithm (CLI-NLP) for identification of CLI cases from clinical notes. We compared performance of the CLI-NLP algorithm with CLI-related ICD-9 billing codes. The gold standard for validation was human abstraction of clinical notes from EHRs. Compared to billing codes the CLI-NLP algorithm had higher positive predictive value (PPV) (CLI-NLP 96%, billing codes 67%, p < 0.001), specificity (CLI-NLP 98%, billing codes 74%, p < 0.001) and F1-score (CLI-NLP 90%, billing codes 76%, p < 0.001). The sensitivity of these two methods was similar (CLI-NLP 84%; billing codes 88%; p < 0.12). The CLI-NLP algorithm for identification of CLI from narrative clinical notes in an EHR had excellent PPV and has potential for translation to patient care as it will enable automated identification of CLI cases for quality projects, clinical decision support tools and support a learning healthcare system. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
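
    The comparison above rests on standard confusion-matrix metrics; a minimal sketch follows, with hypothetical counts from comparing an algorithm's output against human abstraction of the notes.

```python
# Standard screening metrics from a confusion matrix; the counts are hypothetical.
def screening_metrics(tp, fp, fn, tn):
    ppv = tp / (tp + fp)                    # positive predictive value (precision)
    sensitivity = tp / (tp + fn)            # recall
    specificity = tn / (tn + fp)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"PPV": ppv, "sensitivity": sensitivity,
            "specificity": specificity, "F1": f1}

print(screening_metrics(tp=84, fp=4, fn=16, tn=196))
```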

  4. The development of small-scale mechanization means positioning algorithm using radio frequency identification technology in industrial plants

    NASA Astrophysics Data System (ADS)

    Astafiev, A.; Orlov, A.; Privezencev, D.

    2018-01-01

    The article is devoted to the development of technology and software for building positioning and control systems for small-scale mechanization means in industrial plants based on radio-frequency identification methods, which will serve as the basis for highly efficient intelligent systems for controlling product movement in industrial enterprises. The main standards applied in the field of product movement control automation and radio-frequency identification are considered. The article reviews modern publications and automation systems for the control of product movement developed by domestic and foreign manufacturers, and describes the developed algorithm for positioning small-scale mechanization means in an industrial enterprise. Experimental studies under laboratory and production conditions have been conducted and are described in the article.

  5. Space station automation study: Automation requirements derived from space manufacturing concepts, volume 2

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Automation requirements were developed for two manufacturing concepts: (1) Gallium Arsenide Electroepitaxial Crystal Production and Wafer Manufacturing Facility, and (2) Gallium Arsenide VLSI Microelectronics Chip Processing Facility. A functional overview of the ultimate design concept incorporating the two manufacturing facilities on the space station is provided. The concepts were selected to facilitate an in-depth analysis of manufacturing automation requirements in the form of process mechanization, teleoperation and robotics, sensors, and artificial intelligence. While the cost-effectiveness of these facilities was not analyzed, both appear entirely feasible for the year 2000 timeframe.

  6. An Automated, Experimenter-Free Method for the Standardised, Operant Cognitive Testing of Rats

    PubMed Central

    Rivalan, Marion; Munawar, Humaira; Fuchs, Anna; Winter, York

    2017-01-01

    Animal models of human pathology are essential for biomedical research. However, a recurring issue in the use of animal models is the poor reproducibility of behavioural and physiological findings within and between laboratories. The most critical factor influencing this issue remains the experimenter themselves. One solution is the use of procedures devoid of human intervention. We present a novel approach to experimenter-free testing of cognitive abilities in rats, by combining undisturbed group housing with automated, standardized and individual operant testing. This experimenter-free system consisted of an automated-operant system (Bussey-Saksida rat touch screen) connected to a home cage containing group living rats via an automated animal sorter (PhenoSys). The automated animal sorter, which is based on radio-frequency identification (RFID) technology, functioned as a mechanical replacement of the experimenter. Rats learnt to regularly and individually enter the operant chamber and remained there for the duration of the experimental session only. Self-motivated rats acquired the complex touch screen task of trial-unique non-matching to location (TUNL) in half the time reported for animals that were manually placed into the operant chamber. Rat performance was similar between the two groups within our laboratory, and comparable to previously published results obtained elsewhere. This reproducibility, both within and between laboratories, confirms the validity of this approach. In addition, automation reduced daily experimental time by 80%, eliminated animal handling, and reduced equipment cost. This automated, experimenter-free setup is a promising tool of great potential for testing a large variety of functions with full automation in future studies. PMID:28060883

  7. Application of automation and information systems to forensic genetic specimen processing.

    PubMed

    Leclair, Benoît; Scholl, Tom

    2005-03-01

    During the last 10 years, the introduction of PCR-based DNA typing technologies in forensic applications has been highly successful. This technology has become pervasive throughout forensic laboratories and it continues to grow in prevalence. For many criminal cases, it provides the most probative evidence. Criminal genotype data banking and victim identification initiatives that follow mass-fatality incidents have benefited the most from the introduction of automation for sample processing and data analysis. Attributes of offender specimens including large numbers, high quality and identical collection and processing are ideal for the application of laboratory automation. The magnitude of kinship analysis required by mass-fatality incidents necessitates the application of computing solutions to automate the task. More recently, the development activities of many forensic laboratories are focused on leveraging experience from these two applications to casework sample processing. The trend toward increased prevalence of forensic genetic analysis will continue to drive additional innovations in high-throughput laboratory automation and information systems.

  8. Automated designation of tie-points for image-to-image coregistration.

    Treesearch

    R.E. Kennedy; W.B. Cohen

    2003-01-01

    Image-to-image registration requires identification of common points in both images (image tie-points: ITPs). Here we describe software implementing an automated, area-based technique for identifying ITPs. The ITP software was designed to follow two strategies: (1) capitalize on human knowledge and pattern recognition strengths, and (2) favour robustness in many...
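
    A generic sketch of area-based tie-point matching (not the described software): a template chip from one image is slid over a search window in the other image, and the offset with the highest normalized cross-correlation is taken as the tie-point.

```python
# Normalized cross-correlation (NCC) matching of a template chip inside a search window.
import numpy as np

def best_match(template, window):
    """Return (row, col, score) of the best NCC match of template inside window."""
    th, tw = template.shape
    t = template - template.mean()
    best = (0, 0, -np.inf)
    for r in range(window.shape[0] - th + 1):
        for c in range(window.shape[1] - tw + 1):
            patch = window[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
            score = (t * p).sum() / denom if denom > 0 else -np.inf
            if score > best[2]:
                best = (r, c, float(score))
    return best

rng = np.random.default_rng(2)
image_a = rng.random((60, 60))
image_b = np.roll(image_a, shift=(3, 5), axis=(0, 1))     # image_b is image_a shifted
chip = image_a[20:30, 20:30]
print("best (row, col, score) within the search window:",
      best_match(chip, image_b[15:40, 15:40]))
```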

  9. Automated inhaled nitric oxide alerts for adult extracorporeal membrane oxygenation patient identification.

    PubMed

    Belenkiy, Slava M; Batchinsky, Andriy I; Park, Timothy S; Luellen, David E; Serio-Melvin, Maria L; Cancio, Leopoldo C; Pamplin, Jeremy C; Chung, Kevin K; Salinas, Josè; Cannon, Jeremy W

    2014-09-01

    Recently, automated alerts have been used to identify patients with respiratory failure based on set criteria, which can be gleaned from the electronic medical record (EMR). Such an approach may also be useful for identifying patients with severe adult respiratory distress syndrome (ARDS) who may benefit from extracorporeal membrane oxygenation (ECMO). Inhaled nitric oxide (iNO) is a common rescue therapy for severe ARDS which can be easily tracked in the EMR, and some patients started on iNO may have indications for initiating ECMO. This case series summarizes our experience with using automated electronic alerts for ECMO team activation focused particularly on an alert triggered by the initiation of iNO. After a brief trial evaluation, our Smart Alert system generated an automated page and e-mail alert to ECMO team members whenever a nonzero value for iNO appeared in the respiratory care section of our EMR. If iNO was initiated for severe respiratory failure, a detailed evaluation by the ECMO team determined if ECMO was indicated. For those patients managed with ECMO, we tabulated baseline characteristics, indication for ECMO, and outcomes. From September 2012 to July 2013, 45 iNO alerts were generated on 42 unique patients. Six patients (14%) met criteria for ECMO. Of these, four were identified exclusively by the iNO alert. At the time of the alert, the median PaO₂-to-FIO₂ ratio was 64 mm Hg (range, 55-107 mm Hg), the median age-adjusted oxygenation index was 73 (range, 51-96), and the median Murray score was 3.4 (range, 3-3.75), indicating severe respiratory failure. Median time from iNO alert to ECMO initiation was 81 hours (range, -2-292 hours). Survival to hospital discharge was 83% in those managed with ECMO. Automated alerts may be useful for identifying patients with severe ARDS who may be ECMO candidates. Diagnostic test, level V.
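
    A toy version of the trigger logic described above: respiratory-care flowsheet rows are scanned and the team is notified the first time a nonzero iNO dose appears for a patient. The field names and notification function are hypothetical, not the Smart Alert system's interface.

```python
# Toy alert trigger: notify the ECMO team when a nonzero iNO dose first appears.
def notify_ecmo_team(patient_id, row):
    print(f"ALERT: patient {patient_id} started iNO at {row['charted_time']} "
          f"({row['ino_ppm']} ppm), evaluate for ECMO candidacy")

def scan_flowsheet(rows):
    alerted = set()
    for row in rows:
        pid = row["patient_id"]
        if row.get("ino_ppm", 0) > 0 and pid not in alerted:
            notify_ecmo_team(pid, row)
            alerted.add(pid)

scan_flowsheet([
    {"patient_id": "A12", "charted_time": "2013-03-04 08:00", "ino_ppm": 0},
    {"patient_id": "A12", "charted_time": "2013-03-04 12:00", "ino_ppm": 20},
    {"patient_id": "B77", "charted_time": "2013-03-04 12:30", "ino_ppm": 0},
])
```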

  10. Automated identification of protein-ligand interaction features using Inductive Logic Programming: a hexose binding case study

    PubMed Central

    2012-01-01

    Background There is a need for automated methods to learn general features of the interactions of a ligand class with its diverse set of protein receptors. An appropriate machine learning approach is Inductive Logic Programming (ILP), which automatically generates comprehensible rules in addition to prediction. The development of ILP systems which can learn rules of the complexity required for studies on protein structure remains a challenge. In this work we use a new ILP system, ProGolem, and demonstrate its performance on learning features of hexose-protein interactions. Results The rules induced by ProGolem detect interactions mediated by aromatics and by planar-polar residues, in addition to less common features such as the aromatic sandwich. The rules also reveal a previously unreported dependency for residues cys and leu. They also specify interactions involving aromatic and hydrogen bonding residues. This paper shows that Inductive Logic Programming implemented in ProGolem can derive rules giving structural features of protein/ligand interactions. Several of these rules are consistent with descriptions in the literature. Conclusions In addition to confirming literature results, ProGolem’s model has a 10-fold cross-validated predictive accuracy that is superior, at the 95% confidence level, to another ILP system previously used to study protein/hexose interactions and is comparable with state-of-the-art statistical learners. PMID:22783946

  11. Robotics/Automated Systems Task Analysis and Description of Required Job Competencies Report. Task Analysis and Description of Required Job Competencies of Robotics/Automated Systems Technicians.

    ERIC Educational Resources Information Center

    Hull, Daniel M.; Lovett, James E.

    This task analysis report for the Robotics/Automated Systems Technician (RAST) curriculum project first provides a RAST job description. It then discusses the task analysis, including the identification of tasks, the grouping of tasks according to major areas of specialty, and the comparison of the competencies to existing or new courses to…

  12. Neurodegenerative changes in Alzheimer's disease: a comparative study of manual, semi-automated, and fully automated assessment using MRI

    NASA Astrophysics Data System (ADS)

    Fritzsche, Klaus H.; Giesel, Frederik L.; Heimann, Tobias; Thomann, Philipp A.; Hahn, Horst K.; Pantel, Johannes; Schröder, Johannes; Essig, Marco; Meinzer, Hans-Peter

    2008-03-01

    Objective quantification of disease-specific neurodegenerative changes can facilitate diagnosis and therapeutic monitoring in several neuropsychiatric disorders. Reproducibility and easy-to-perform assessment are essential to ensure applicability in clinical environments. The aim of this comparative study is to evaluate a fully automated approach that assesses atrophic changes in Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI). 21 healthy volunteers (mean age 66.2), 21 patients with MCI (66.6), and 10 patients with AD (65.1) were enrolled. Subjects underwent extensive neuropsychological testing and MRI was conducted on a 1.5 Tesla clinical scanner. Atrophic changes were measured automatically by a series of image processing steps including state of the art brain mapping techniques. Results were compared with two reference approaches: a manual segmentation of the hippocampal formation and a semi-automated estimation of temporal horn volume, which is based upon interactive selection of two to six landmarks in the ventricular system. All approaches separated controls and AD patients significantly (10^-5 < p < 10^-4) and showed a slight but not significant increase of neurodegeneration for subjects with MCI compared to volunteers. The automated approach correlated significantly with the manual (r = -0.65, p < 10^-6) and semi-automated (r = -0.83, p < 10^-13) measurements. It achieved high accuracy while maximizing observer independence and reducing analysis time, and is thus well suited to clinical routine.

  13. Dangerous intersections? A review of studies of fatigue and distraction in the automated vehicle.

    PubMed

    Matthews, Gerald; Neubauer, Catherine; Saxby, Dyani J; Wohleber, Ryan W; Lin, Jinchao

    2018-04-10

    The impacts of fatigue on the vehicle driver may change with technological advancements including automation and the increasing prevalence of potentially distracting in-car systems. This article reviews the authors' simulation studies of how fatigue, automation, and distraction may intersect as threats to safety. Distinguishing between states of active and passive fatigue supports understanding of fatigue and the development of countermeasures. Active fatigue is a stress-like state driven by overload of cognitive capabilities. Passive fatigue is produced by underload and monotony, and is associated with loss of task engagement and alertness. Our studies show that automated driving reliably elicits subjective symptoms of passive fatigue and also loss of alertness that persists following manual takeover. Passive fatigue also impairs attention and automation use in operators of Remotely Piloted Vehicles (RPVs). Use of in-vehicle media has been proposed as a countermeasure to fatigue, but such media may also be distracting. Studies tested whether various forms of phone-based media interacted with automation-induced fatigue, but effects were complex and dependent on task configuration. Selection of fatigue countermeasures should be guided by an understanding of the form of fatigue confronting the operator. System design, regulation of level of automation, managing distraction, and selection of fatigue-resilient personnel are all possible interventions for passive fatigue, but careful evaluation of interventions is necessary prior to deployment. Copyright © 2018. Published by Elsevier Ltd.

  14. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enghauser, Michael

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
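
    The report's own equations and weighting factors are not reproduced here, but the general shape of such a scoring scheme, credit for correctly identified nuclides with equivalency groups and a penalty for false positives, can be sketched as below; the weights, equivalency table, and penalty are illustrative assumptions only.

```python
# Illustrative-only sketch of weighted nuclide-identification scoring.
# The equivalency table, weights, and penalty are assumptions, not the DNDO values.

EQUIVALENT = {"U-235": {"U-235", "HEU"}, "Cs-137": {"Cs-137"}}   # hypothetical
WEIGHT = {"U-235": 3.0, "Cs-137": 1.0}                           # hypothetical

def score_identification(truth, reported, false_positive_penalty=0.5):
    score, matched = 0.0, set()
    for nuclide in truth:
        aliases = EQUIVALENT.get(nuclide, {nuclide})
        hit = aliases & set(reported)
        if hit:                                # credit the identification
            score += WEIGHT.get(nuclide, 1.0)
            matched |= hit
    score -= false_positive_penalty * len(set(reported) - matched)
    return score

# U-235 credited via the 'HEU' equivalency, Cs-137 missed, Co-60 penalized: 3.0 - 0.5
print(score_identification(truth=["U-235", "Cs-137"], reported=["HEU", "Co-60"]))
```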

  15. Extended System Operations Studies for Automated Guideway Transit Systems

    DOT National Transportation Integrated Search

    1982-02-01

    The objectives of the System Operations Studies (SOS) of the Automated Guideway Transit Technology (AGTT) program was to develop models for the analysis of system operations, to evaluate AGT system performance and cost, and to establish guidelines fo...

  16. Airport Surface Traffic Automation Study.

    DTIC Science & Technology

    1988-05-09

    the use of Artificial Intelligence technology in enroute ATC can be applied directly to the surface control problem. ... problems in airport surface control. If artificial intelligence provides useful results for airborne automation, the same techniques should prove useful

  17. Performance of Kiestra Total Laboratory Automation Combined with MS in Clinical Microbiology Practice

    PubMed Central

    Hodiamont, Caspar J.; de Jong, Menno D.; Overmeijer, Hendri P. J.; van den Boogaard, Mandy; Visser, Caroline E.

    2014-01-01

    Background Microbiological laboratories seek technologically innovative solutions to cope with large numbers of samples and limited personnel and financial resources. One platform that has recently become available is the Kiestra Total Laboratory Automation (TLA) system (BD Kiestra B.V., the Netherlands). This fully automated sample processing system, equipped with digital imaging technology, allows superior detection of microbial growth. Combining this approach with matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MS) (Bruker Daltonik, Germany) is expected to enable more rapid identification of pathogens. Methods Early growth detection by digital imaging using Kiestra TLA combined with MS was compared to conventional methods (CM) of detection. Accuracy and time taken for microbial identification were evaluated for the two methods in 219 clinical blood culture isolates. The possible clinical impact of earlier microbial identification was assessed according to antibiotic treatment prescription. Results Pathogen identification using Kiestra TLA combined with MS resulted in a 30.6 hr time gain per isolate compared to CM. Pathogens were successfully identified in 98.4% (249/253) of all tested isolates. Early microbial identification without susceptibility testing led to an adjustment of antibiotic regimen in 12% (24/200) of patients. Conclusions The requisite 24 hr incubation time for microbial pathogens to reach sufficient growth for susceptibility testing and identification would be shortened by the implementation of Kiestra TLA in combination with MS, compared to the use of CM. Not only can this method optimize workflow and reduce costs, but it can allow potentially life-saving switches in antibiotic regimen to be initiated sooner. PMID:24624346

  18. Designing for Feel: Contrasts between Human and Automated Parametric Capture of Knob Physics.

    PubMed

    Swindells, C; MacLean, K E; Booth, K S

    2009-01-01

    We examine a crucial aspect of a tool intended to support designing for feel: the ability of an objective physical-model identification method to capture perceptually relevant parameters, relative to human identification performance. The feel of manual controls, such as knobs, sliders, and buttons, becomes critical when these controls are used in certain settings. Appropriate feel enables designers to create consistent control behaviors that lead to improved usability and safety. For example, a heavy knob with stiff detents for a power plant boiler setting may afford better feedback and safer operations, whereas subtle detents in an automobile radio volume knob may afford improved ergonomics and driver attention to the road. To assess the quality of our identification method, we compared previously reported automated model captures for five real mechanical reference knobs with captures by novice and expert human participants who were asked to adjust four parameters of a rendered knob model to match the feel of each reference knob. Participants indicated their satisfaction with the matches their renderings produced. We observed similar relative inertia, friction, detent strength, and detent spacing parameterizations by human experts and our automatic estimation methods. Qualitative results provided insight on users' strategies and confidence. While experts (but not novices) were better able to ascertain an underlying model in the presence of unmodeled dynamics, the objective algorithm outperformed all humans when an appropriate physical model was used. Our studies demonstrate that automated model identification can capture knob dynamics as perceived by a human, and they also establish limits to that ability; they comprise a step towards pragmatic design guidelines for embedded physical interfaces in which methodological expedience is informed by human perceptual requirements.
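
    The underlying rendered-knob model includes inertia, friction, detent strength, and detent spacing. As a rough illustration of how such physical parameters can be captured from motion and torque data, the sketch below fits only a simplified inertia-plus-viscous-friction model by least squares on synthetic data; it is not the authors' estimator and omits the detent terms.

```python
import numpy as np

# Sketch: fit torque = J * accel + b * velocity by least squares on synthetic data.
# The detent strength/spacing parameters from the study are omitted here.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 500)
velocity = np.sin(2 * np.pi * 1.5 * t)              # synthetic knob motion
accel = np.gradient(velocity, t)
J_true, b_true = 2e-4, 5e-3                          # assumed "true" parameters
torque = J_true * accel + b_true * velocity + rng.normal(0, 1e-5, t.size)

A = np.column_stack([accel, velocity])               # regressor matrix
(J_est, b_est), *_ = np.linalg.lstsq(A, torque, rcond=None)
print(f"inertia ~ {J_est:.2e}, viscous friction ~ {b_est:.2e}")
```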

  19. Automated structural classification of lipids by machine learning.

    PubMed

    Taylor, Ryan; Miller, Ryan H; Miller, Ryan D; Porter, Michael; Dalgleish, James; Prince, John T

    2015-03-01

    Modern lipidomics is largely dependent upon structural ontologies because of the great diversity exhibited in the lipidome, but no automated lipid classification exists to facilitate this partitioning. The size of the putative lipidome far exceeds the number currently classified, despite a decade of work. Automated classification would benefit ongoing classification efforts by decreasing the time needed and increasing the accuracy of classification while providing classifications for mass spectral identification algorithms. We introduce a tool that automates classification into the LIPID MAPS ontology of known lipids with >95% accuracy and novel lipids with 63% accuracy. The classification is based upon simple chemical characteristics and modern machine learning algorithms. The decision trees produced are intelligible and can be used to clarify implicit assumptions about the current LIPID MAPS classification scheme. These characteristics and decision trees are made available to facilitate alternative implementations. We also discovered many hundreds of lipids that are currently misclassified in the LIPID MAPS database, strongly underscoring the need for automated classification. Source code and chemical characteristic lists as SMARTS search strings are available under an open-source license at https://www.github.com/princelab/lipid_classifier. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
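
    The published characteristic lists and decision trees are available at the repository cited above; the sketch below only illustrates the general pattern of turning SMARTS substructure matches into features for a decision tree. It assumes RDKit and scikit-learn are available, and the SMARTS patterns, molecules, and class labels are toy examples rather than the tool's actual rules.

```python
# Sketch of SMARTS-based featurization feeding a decision tree, in the spirit of the
# classifier described above. Patterns, molecules, and labels are toy examples.
from rdkit import Chem
from sklearn.tree import DecisionTreeClassifier

SMARTS_FEATURES = {
    "carboxylic_acid": "C(=O)[OX2H1]",
    "phosphate": "P(=O)(O)(O)O",
    "long_carbon_chain": "CCCCCCCC",
}

def featurize(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return [int(mol.HasSubstructMatch(Chem.MolFromSmarts(patt)))
            for patt in SMARTS_FEATURES.values()]

train_smiles = ["CCCCCCCCCCCCCCCC(=O)O",     # palmitic acid         -> fatty acyl
                "OCC(O)COP(=O)(O)O"]         # glycerophosphate-like -> glycerophospholipid
train_labels = ["fatty_acyl", "glycerophospholipid"]

clf = DecisionTreeClassifier().fit([featurize(s) for s in train_smiles], train_labels)
print(clf.predict([featurize("CCCCCCCCCCCC(=O)O")]))   # expected: ['fatty_acyl']
```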

  20. An Automated Method to Identify Mesoscale Convective Complexes (MCCs) Implementing Graph Theory

    NASA Astrophysics Data System (ADS)

    Whitehall, K. D.; Mattmann, C. A.; Jenkins, G. S.; Waliser, D. E.; Rwebangira, R.; Demoz, B.; Kim, J.; Goodale, C. E.; Hart, A. F.; Ramirez, P.; Joyce, M. J.; Loikith, P.; Lee, H.; Khudikyan, S.; Boustani, M.; Goodman, A.; Zimdars, P. A.; Whittell, J.

    2013-12-01

    Mesoscale convective complexes (MCCs) are convectively-driven weather systems with durations of ~10-12 hours that contribute large amounts to daily and monthly rainfall totals. More than 400 MCCs occur annually over various locations on the globe. In West Africa, ~170 MCCs occur annually during the 180 days representing the summer months (June - November), and contribute ~75% of the annual wet season rainfall. The main objective of this study is to improve automatic identification of MCCs over West Africa. The spatial expanse of MCCs and the spatio-temporal variability in their convective characteristics make them difficult to characterize even in dense networks of radars and/or surface gauges. As such there exist criteria for identifying MCCs with satellite images - mostly using infrared (IR) data. Automated MCC identification methods are based on forward and/or backward in time spatial-temporal analysis of the IR satellite data and characteristically incorporate a manual component as these algorithms routinely falter with merging and splitting cloud systems between satellite images. However, these algorithms are not readily transferable to voluminous data or other satellite-derived datasets (e.g. TRMM), thus hindering comprehensive studies of these features both at weather and climate timescales. Recognizing the existing limitations of automated methods, this study explores the applicability of graph theory to creating a fully automated method for deriving a West African MCC dataset from hourly infrared satellite images between 2001 and 2012. Graph theory, though not heavily implemented in the atmospheric sciences, has been used for predicting (nowcasting) thunderstorms from radar and satellite data by considering the relationship between atmospheric variables at a given time, or for the spatial-temporal analysis of cloud volumes. From these few studies, graph theory appears to be innately applicable to the complexity, non-linearity and inherent...
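
    As a rough sketch of the graph-based idea (not the study's algorithm), cloud elements detected in each hourly IR frame can be taken as graph nodes, with edges linking elements in consecutive frames that overlap spatially; connected components of that graph are candidate MCC tracks, screened against duration (and, in practice, size and shape) criteria. The toy example below uses networkx with hand-made pixel sets standing in for thresholded satellite imagery.

```python
# Rough sketch of graph-based cloud-element tracking. Elements and the overlap test
# are toy stand-ins; real use would derive elements from IR brightness-temperature
# thresholds on hourly satellite images and apply full MCC size/shape criteria.
import networkx as nx

# (frame_index, element_id) -> set of pixel coordinates belonging to that element
elements = {
    (0, "a"): {(10, 10), (10, 11), (11, 10)},
    (1, "a"): {(10, 11), (11, 11), (11, 10)},
    (2, "a"): {(11, 11), (12, 11)},
    (1, "b"): {(40, 40)},                       # unrelated small element
}

G = nx.Graph()
G.add_nodes_from(elements)
for (t1, i1), px1 in elements.items():
    for (t2, i2), px2 in elements.items():
        if t2 == t1 + 1 and px1 & px2:          # overlap between consecutive frames
            G.add_edge((t1, i1), (t2, i2))

MIN_DURATION = 3                                # e.g. require >= 3 consecutive frames
for component in nx.connected_components(G):
    frames = {t for t, _ in component}
    if len(frames) >= MIN_DURATION:
        print("candidate MCC track:", sorted(component))
```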

  1. The influence of image setting on intracranial translucency measurement by manual and semi-automated system.

    PubMed

    Zhen, Li; Yang, Xin; Ting, Yuen Ha; Chen, Min; Leung, Tak Yeung

    2013-09-01

    To investigate the agreement between the manual and semi-automated systems and the effect of different image settings on intracranial translucency (IT) measurement. A prospective study was conducted on 55 women carrying singleton pregnancy who attended first trimester Down syndrome screening. IT was measured both manually and by the semi-automated system at the same default image setting. The IT measurements were then repeated with the post-processing changes in the image setting one at a time. The difference in IT measurements between the altered and the original images was assessed. Intracranial translucency was successfully measured on 55 images both manually and by the semi-automated method. There was strong agreement in IT measurements between the two methods with a mean difference (manual minus semi-automated) of 0.011 mm (95% confidence interval, -0.052 mm to 0.094 mm). There were statistically significant variations in both manual and semi-automated IT measurement after changing the Gain and the Contrast. The greatest changes occurred when the Contrast was reduced to 1 (IT reduced by 0.591 mm in semi-automated; 0.565 mm in manual), followed by when the Gain was increased to 15 (IT reduced by 0.424 mm in semi-automated; 0.524 mm in manual). The image settings may affect IT identification and measurement. Increased Gain and reduced Contrast are the most influential factors and may cause under-measurement of IT. © 2013 John Wiley & Sons, Ltd.

  2. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    PubMed

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
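
    Ridler's method (often called isodata thresholding) iterates between splitting the intensity histogram at a threshold and resetting the threshold to the midpoint of the foreground and background means. A minimal NumPy sketch on a toy image follows; it is not the study's implementation.

```python
import numpy as np

def ridler_threshold(image, tol=0.5, max_iter=100):
    """Iterative (Ridler/isodata-style) threshold: repeatedly set the threshold to
    the midpoint of the foreground and background means until it stabilizes."""
    t = image.mean()
    for _ in range(max_iter):
        fg, bg = image[image > t], image[image <= t]
        if fg.size == 0 or bg.size == 0:
            break
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# Toy "PET slice": a hot square blob on a noisy warm background.
rng = np.random.default_rng(1)
img = rng.normal(100, 10, (64, 64))
img[24:40, 24:40] += 400
print("threshold:", round(ridler_threshold(img), 1))   # lands between background and uptake
```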

  3. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    NASA Astrophysics Data System (ADS)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.

  4. Automated Measurement of Facial Expression in Infant-Mother Interaction: A Pilot Study

    PubMed Central

    Messinger, Daniel S.; Mahoor, Mohammad H.; Chow, Sy-Miin; Cohn, Jeffrey F.

    2009-01-01

    Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two six-month-old/mother dyads who each engaged in a face-to-face interaction. Automated measurements showed high associations with anatomically based manual coding (concurrent validity); measurements of smiling showed high associations with mean ratings of positive emotion made by naive observers (construct validity). For both infants and mothers, smile strength and eye constriction (the Duchenne marker) were correlated over time, creating a continuous index of smile intensity. Infant and mother smile activity exhibited changing (nonstationary) local patterns of association, suggesting the dyadic repair and dissolution of states of affective synchrony. The study provides insights into the potential and limitations of automated measurement of facial action. PMID:19885384

  5. Novel Automated Morphometric and Kinematic Handwriting Assessment: A Validity Study in Children with ASD and ADHD

    ERIC Educational Resources Information Center

    Dirlikov, Benjamin; Younes, Laurent; Nebel, Mary Beth; Martinelli, Mary Katherine; Tiedemann, Alyssa Nicole; Koch, Carolyn A.; Fiorilli, Diana; Bastian, Amy J.; Denckla, Martha Bridge; Miller, Michael I.; Mostofsky, Stewart H.

    2017-01-01

    This study presents construct validity for a novel automated morphometric and kinematic handwriting assessment, including (1) convergent validity, establishing reliability of automated measures with traditional manual-derived Minnesota Handwriting Assessment (MHA), and (2) discriminant validity, establishing that the automated methods distinguish…

  6. Process development for automated solar cell and module production. Task 4: Automated array assembly

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A process sequence which can be used in conjunction with automated equipment for the mass production of solar cell modules for terrestrial use was developed. The process sequence was then critically analyzed from a technical and economic standpoint to determine the technological readiness of certain process steps for implementation. The steps receiving analysis were: back contact metallization, automated cell array layup/interconnect, and module edge sealing. For automated layup/interconnect, both hard automation and programmable automation (using an industrial robot) were studied. The programmable automation system was then selected for actual hardware development.

  7. An Automated Method for Landmark Identification and Finite-Element Modeling of the Lumbar Spine.

    PubMed

    Campbell, Julius Quinn; Petrella, Anthony J

    2015-11-01

    The purpose of this study was to develop a method for the automated creation of finite-element models of the lumbar spine. Custom scripts were written to extract bone landmarks of lumbar vertebrae and assemble L1-L5 finite-element models. End-plate borders, ligament attachment points, and facet surfaces were identified. Landmarks were identified to maintain mesh correspondence between meshes for later use in statistical shape modeling. 90 lumbar vertebrae were processed creating 18 subject-specific finite-element models. Finite-element model surfaces and ligament attachment points were reproduced within 1e-5 mm of the bone surface, including the critical contact surfaces of the facets. Element quality exceeded specifications in 97% of elements for the 18 models created. The current method is capable of producing subject-specific finite-element models of the lumbar spine with good accuracy, quality, and robustness. The automated methods developed represent advancement in the state of the art of subject-specific lumbar spine modeling to a scale not possible with prior manual and semiautomated methods.

  8. System Operations Studies for Automated Guideway Transit Systems : Discrete Event Simulation Model Programmer's Manual

    DOT National Transportation Integrated Search

    1982-07-01

    In order to examine specific automated guideway transit (AGT) developments and concepts, UMTA undertook a program of studies and technology investigations called Automated Guideway Transit Technology (AGTT) Program. The objectives of one segment of t...

  9. A framework for the automated data-driven constitutive characterization of composites

    Treesearch

    J.G. Michopoulos; John Hermanson; T. Furukawa; A. Iliopoulos

    2010-01-01

    We present advances on the development of a mechatronically and algorithmically automated framework for the data-driven identification of constitutive material models based on energy density considerations. These models can capture both the linear and nonlinear constitutive response of multiaxially loaded composite materials in a manner that accounts for progressive...

  10. Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.

    PubMed

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi has promising potential but remains understudied. We studied the extraction of prostatic lumina and automated recognition for calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM classification based on the texture features of prostatic calculi. The SVM classifier showed an average processing time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi.
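
    The texture features and tuning used in the study are not reproduced here; the sketch below only illustrates the generic PCA-then-SVM pipeline with scikit-learn, on synthetic feature vectors standing in for lumen texture features.

```python
# Minimal PCA -> SVM pipeline in the spirit of the classifier described above.
# Feature vectors are synthetic stand-ins for texture features of lumen regions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_calculus = rng.normal(1.0, 0.3, (40, 20))   # "calculus" texture features (toy)
X_other = rng.normal(0.0, 0.3, (40, 20))      # "non-calculus" texture features (toy)
X = np.vstack([X_calculus, X_other])
y = np.array([1] * 40 + [0] * 40)

clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```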

  11. Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM

    PubMed Central

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D. Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi has promising potential but remains understudied. We studied the extraction of prostatic lumina and automated recognition for calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM classification based on the texture features of prostatic calculi. The SVM classifier showed an average processing time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi. PMID:21461364

  12. Rapid and Easy Identification of Capsular Serotypes of Streptococcus pneumoniae by Use of Fragment Analysis by Automated Fluorescence-Based Capillary Electrophoresis

    PubMed Central

    Selva, Laura; del Amo, Eva; Brotons, Pedro

    2012-01-01

    The purpose of this study was to develop a high-throughput method for the identification of pneumococcal capsular types. Multiplex PCR combined with fragment analysis and automated fluorescent capillary electrophoresis (FAF-mPCR) was utilized. FAF-mPCR was composed of only 3 PCRs for the specific detection of serotypes 1, 2, 3, 4, 5, 6A/6B, 6C, 7F/7A, 7C/(7B/40), 8, 9V/9A, 9N/9L, 10A, 10F/(10C/33C), 11A/11D/11F, 12F/(12A/44/46), 13, 14, 15A/15F, 15B/15C, 16F, 17F, 18/(18A/18B/18C/18F), 19A, 19F, 20, 21, 22F/22A, 23A, 23B, 23F, 24/(24A/24B/24F), 31, 33F/(33A/37), 34, 35A/(35C/42), 35B, 35F/47F, 38/25F, and 39. In order to evaluate the assay, all invasive pneumococcal isolates (n = 394) characterized at Hospital Sant Joan de Déu, Barcelona, Spain, from July 2010 to July 2011 were included in this study. The Wallace coefficient was used to evaluate the overall agreement between two typing methods (Quellung reaction versus FAF-mPCR). A high concordance with Quellung was found: 97.2% (383/394) of samples. The Wallace coefficient was 0.981 (range, 0.965 to 0.997). Only 11 results were discordant with the Quellung reaction. However, latex reaction and Quellung results of the second reference laboratory agreed with FAF-mPCR for 9 of these 11 strains (82%). Therefore, we considered that only 2 of 394 strains (0.5%) were not properly characterized by the new assay. The automation of the process allowed the typing of 30 isolates in a few hours with a lower cost than that of the Quellung reaction. These results indicate that FAF-mPCR is a good method to determine the capsular serotype of Streptococcus pneumoniae. PMID:22875895

  13. Automating risk analysis of software design models.

    PubMed

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.
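
    AutSEC's actual data structures are richer than can be shown here, but the basic idea of an identification tree, whose branches test properties of data-flow-diagram elements and whose leaves name threats, can be sketched as follows; the element attributes, conditions, and threat names are hypothetical.

```python
# Illustrative sketch only: a tiny identification tree matched against data-flow-diagram
# elements. Element attributes, conditions, and threat names are hypothetical.

IDENTIFICATION_TREE = {
    "condition": lambda e: e["type"] == "data_flow",
    "children": [
        {"condition": lambda e: not e.get("encrypted", False),
         "threat": "information disclosure on unencrypted channel"},
        {"condition": lambda e: e.get("crosses_trust_boundary", False),
         "threat": "tampering across trust boundary"},
    ],
}

def identify_threats(element, tree=IDENTIFICATION_TREE):
    """Return the threats whose branch conditions match the given DFD element."""
    threats = []
    if tree["condition"](element):
        for child in tree.get("children", []):
            if child["condition"](element):
                threats.append(child["threat"])
    return threats

dfd_element = {"type": "data_flow", "encrypted": False, "crosses_trust_boundary": True}
print(identify_threats(dfd_element))   # both example threats match this element
```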

  14. Automating Risk Analysis of Software Design Models

    PubMed Central

    Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P.

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688

  15. Automated identification and indexing of dislocations in crystal interfaces

    DOE PAGES

    Stukowski, Alexander; Bulatov, Vasily V.; Arsenlis, Athanasios

    2012-10-31

    Here, we present a computational method for identifying partial and interfacial dislocations in atomistic models of crystals with defects. Our automated algorithm is based on a discrete Burgers circuit integral over the elastic displacement field and is not limited to specific lattices or dislocation types. Dislocations in grain boundaries and other interfaces are identified by mapping atomic bonds from the dislocated interface to an ideal template configuration of the coherent interface to reveal incompatible displacements induced by dislocations and to determine their Burgers vectors. Additionally, the algorithm generates a continuous line representation of each dislocation segment in the crystal and also identifies dislocation junctions.

  16. The State and Trends of Barcode, RFID, Biometric and Pharmacy Automation Technologies in US Hospitals.

    PubMed

    Uy, Raymonde Charles Y; Kury, Fabricio P; Fontelo, Paul A

    2015-01-01

    The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar code, RFID, biometrics and pharmacy automation technologies have been demonstrated in the literature to decrease the incidence of medication errors by minimizing human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was done, demonstrating optimistic growth in the adoption of these patient safety solutions.

  17. The State and Trends of Barcode, RFID, Biometric and Pharmacy Automation Technologies in US Hospitals

    PubMed Central

    Uy, Raymonde Charles Y.; Kury, Fabricio P.; Fontelo, Paul A.

    2015-01-01

    The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar code, RFID, biometrics and pharmacy automation technologies have been demonstrated in the literature to decrease the incidence of medication errors by minimizing human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was done, demonstrating optimistic growth in the adoption of these patient safety solutions. PMID:26958264

  18. Automation in Clinical Microbiology

    PubMed Central

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547

  19. Microbial identification system for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Brown, Harlan D.; Scarlett, Janie B.; Skweres, Joyce A.; Fortune, Russell L.; Staples, John L.; Pierson, Duane L.

    1989-01-01

    The Environmental Health System (EHS) and Health Maintenance Facility (HMF) on Space Station Freedom will require a comprehensive microbiology capability. This requirement entails the development of an automated system to perform microbial identifications on isolates from a variety of environmental and clinical sources and, when required, to perform antimicrobial sensitivity testing. The unit currently undergoing development and testing is the Automated Microbiology System II (AMS II) built by Vitek Systems, Inc. The AMS II has successfully completed 12 months of laboratory testing and evaluation for compatibility with microgravity operation. The AMS II is a promising technology for use on Space Station Freedom.

  20. An Astronomical Pattern-Matching Algorithm for Automated Identification of Whale Sharks

    NASA Technical Reports Server (NTRS)

    Arzoumanian, Z.; Holmberg, J.; Norman, B.

    2005-01-01

    The largest shark species alive today, whale sharks (Rhincodon typus) are rare and poorly studied. Directed fisheries, high value in international trade, a highly migratory nature, and generally low abundance make this species vulnerable to exploitation. Mark-and-recapture studies have provided our current understanding of whale shark demographics and life history, but conventional tagging has met with limited success. To aid in conservation and management efforts, and to further our knowledge of whale shark biology, an identification technology that maximizes the scientific value of individual sightings is needed.

  1. Automated DBS microsampling, microscale automation and microflow LC-MS for therapeutic protein PK.

    PubMed

    Zhang, Qian; Tomazela, Daniela; Vasicek, Lisa A; Spellman, Daniel S; Beaumont, Maribel; Shyong, BaoJen; Kenny, Jacqueline; Fauty, Scott; Fillgrove, Kerry; Harrelson, Jane; Bateman, Kevin P

    2016-04-01

    The aim was to reduce animal usage for discovery-stage PK studies in biologics programs using microsampling-based approaches and microscale LC-MS. We report the development of an automated DBS-based serial microsampling approach for studying the PK of therapeutic proteins in mice. Automated sample preparation and microflow LC-MS were used to enable assay miniaturization and improve overall assay throughput. Serial sampling of mice was possible over the full 21-day study period, with the first six time points over 24 h being collected using automated DBS sample collection. Overall, this approach demonstrated data comparable to a previous study that used liquid samples from single mice per time point, while reducing animal and compound requirements by 14-fold. This reduction in animals and drug material is enabled by the use of automated serial DBS microsampling in discovery-stage mouse studies of protein therapeutics.

  2. Algorithm Improvement Program Nuclide Identification Algorithm Scoring Criteria And Scoring Application - DNDO.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enghauser, Michael

    2015-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  3. Altering users' acceptance of automation through prior automation exposure.

    PubMed

    Bekier, Marek; Molesworth, Brett R C

    2017-06-01

    Air navigation service providers worldwide see increased use of automation as one solution to overcome the capacity constraints embedded in the present air traffic management (ATM) system. However, increased use of automation within any system is dependent on user acceptance. The present research sought to determine if the point at which an individual is no longer willing to accept or cooperate with automation can be manipulated. Forty participants underwent training on a computer-based air traffic control programme, followed by two ATM exercises (order counterbalanced), one with and one without the aid of automation. Results revealed that, after exposure to a task with automation assistance, user acceptance of high(er) levels of automation (the 'tipping point') decreased, suggesting it is indeed possible to alter automation acceptance. Practitioner Summary: This paper investigates whether the point at which a user of automation rejects automation (i.e. the 'tipping point') is constant or can be manipulated. The results revealed that, after exposure to a task with automation assistance, user acceptance of high(er) levels of automation decreased, suggesting it is possible to alter automation acceptance.

  4. Automated identification of OB associations in M31

    NASA Technical Reports Server (NTRS)

    Magnier, Eugene A.; Battinelli, Paolo; Lewin, Walter H. G.; Haiman, Zoltan; Paradijs, Jan Van; Hasinger, Guenther; Pietsch, Wolfgang; Supper, Rodrigo; Truemper, Joachim

    1993-01-01

    A new identification of OB associations in M31 has been performed using the Path Linkage Criterion (PLC) technique of Battinelli (1991). We found 174 associations with a very small contamination (less than 5%) by random clumps of stars. The expected total number and average size of OB associations in the region of M31 covered by our data set (Magnier et al. 1992) are approximately 280 and approximately 90 pc, respectively. M31 associations therefore have sizes similar to those of OB associations observed in nearby galaxies, so that we can consider them to be classical OB associations. This list of OB associations will be used for the study of the spatial distribution of OB associations and their correlation with other objects. Taking into account the fact that we do not cover the entire disk of M31, we extrapolate a total number of associations in M31 of approximately 420.
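
    The PLC is, in essence, a friends-of-friends grouping: stars closer than a search distance are linked, and linked groups above a minimum population are kept as candidate associations. A rough sketch with SciPy on synthetic positions follows; the search radius and minimum membership used here are toy values, not Battinelli's calibrated parameters.

```python
# Rough friends-of-friends sketch in the spirit of the Path Linkage Criterion.
# Star positions, search radius, and minimum membership are toy values.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(2)
field = rng.uniform(0, 1000, (300, 2))            # scattered field stars (pc)
assoc = rng.normal([500, 500], 15, (40, 2))       # one compact association
stars = np.vstack([field, assoc])

SEARCH_RADIUS = 25.0                              # linking length, toy value
MIN_MEMBERS = 10

pairs = np.array(list(cKDTree(stars).query_pairs(SEARCH_RADIUS)))
n = len(stars)
adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
_, labels = connected_components(adj, directed=False)

for lab in np.unique(labels):
    members = np.flatnonzero(labels == lab)
    if members.size >= MIN_MEMBERS:
        print(f"group of {members.size} stars near {stars[members].mean(axis=0)}")
```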

  5. Automation of metabolic stability studies in microsomes, cytosol and plasma using a 215 Gilson liquid handler.

    PubMed

    Linget, J M; du Vignaud, P

    1999-05-01

    A 215 Gilson liquid handler was used to automate enzymatic incubations using microsomes, cytosol and plasma. The design of the automated protocols is described. They were based on the use of 96 deep well plates and on HPLC-based methods for assaying the substrate. The protocols were assessed by comparing manual and automated incubations and by examining the reliability and reproducibility of automated incubations in microsomes and cytosol. Examples of the use of these protocols in metabolic studies in drug research, i.e. metabolic screening in microsomes and plasma, are shown. Even rapid processes (with disappearance half-lives as low as 1 min) can be analysed. This work demonstrates how stability studies can be automated to save time, render experiments involving human biological media less hazardous and may improve inter-laboratory reproducibility.

  6. The microfluidic bioagent autonomous networked detector (M-BAND): an update. Fully integrated, automated, and networked field identification of airborne pathogens

    NASA Astrophysics Data System (ADS)

    Sanchez, M.; Probst, L.; Blazevic, E.; Nakao, B.; Northrup, M. A.

    2011-11-01

    We describe a fully automated and autonomous air-borne biothreat detection system for biosurveillance applications. The system, including the nucleic-acid-based detection assay, was designed, built and shipped by Microfluidic Systems Inc (MFSI), a new subsidiary of PositiveID Corporation (PSID). Our findings demonstrate that the system and assay unequivocally identify pathogenic strains of Bacillus anthracis, Yersinia pestis, Francisella tularensis, Burkholderia mallei, and Burkholderia pseudomallei. In order to assess the assay's ability to detect unknown samples, our team also challenged it against a series of blind samples provided by the Department of Homeland Security (DHS). These samples included natural occurring isolated strains, near-neighbor isolates, and environmental samples. Our results indicate that the multiplex assay was specific and produced no false positives when challenged with in house gDNA collections and DHS provided panels. Here we present another analytical tool for the rapid identification of nine Centers for Disease Control and Prevention category A and B biothreat organisms.

  7. Comparison of Bruker Biotyper Matrix-Assisted Laser Desorption Ionization–Time of Flight Mass Spectrometer to BD Phoenix Automated Microbiology System for Identification of Gram-Negative Bacilli▿

    PubMed Central

    Saffert, Ryan T.; Cunningham, Scott A.; Ihde, Sherry M.; Monson Jobe, Kristine E.; Mandrekar, Jayawant; Patel, Robin

    2011-01-01

    We compared the BD Phoenix automated microbiology system to the Bruker Biotyper (version 2.0) matrix-assisted laser desorption ionization–time of flight (MALDI-TOF) mass spectrometry (MS) system for identification of Gram-negative bacilli, using biochemical testing and/or genetic sequencing to resolve discordant results. The BD Phoenix correctly identified 363 (83%) and 330 (75%) isolates to the genus and species level, respectively. The Bruker Biotyper correctly identified 408 (93%) and 360 (82%) isolates to the genus and species level, respectively. The 440 isolates were grouped into common (308) and infrequent (132) isolates in the clinical laboratory. For the 308 common isolates, the BD Phoenix and Bruker Biotyper correctly identified 294 (95%) and 296 (96%) of the isolates to the genus level, respectively. For species identification, the BD Phoenix and Bruker Biotyper correctly identified 93% of the common isolates (285 and 286, respectively). In contrast, for the 132 infrequent isolates, the Bruker Biotyper correctly identified 112 (85%) and 74 (56%) isolates to the genus and species level, respectively, compared to the BD Phoenix, which identified only 69 (52%) and 45 (34%) isolates to the genus and species level, respectively. Statistically, the Bruker Biotyper overall outperformed the BD Phoenix for identification of Gram-negative bacilli to the genus (P < 0.0001) and species (P = 0.0005) level in this sample set. When isolates were categorized as common or infrequent isolates, there was statistically no difference between the instruments for identification of common Gram-negative bacilli (P > 0.05). However, the Bruker Biotyper outperformed the BD Phoenix for identification of infrequently isolated Gram-negative bacilli (P < 0.0001). PMID:21209160

  8. Study of the impact of automation on productivity in bus-maintenance facilities. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sumanth, D.J.; Weiss, H.J.; Adya, B.

    1988-12-01

    Whether or not the various types of automation and new technologies introduced in a bus-transit system really have an impact on productivity is the question addressed in the study. The report describes a new procedure of productivity measurement and evaluation for a county-transit system and provides an objective perspective on the impact of automation on productivity in bus maintenance facilities. The research objectives were: to study the impact of automation on total productivity in transit maintenance facilities; to develop and apply a methodology for measuring the total productivity of a Floridian transit maintenance facility (Bradenton-Manatee County bus maintenance facility which has been introducing automation since 1983); and to develop a practical step-by-step implementation scheme for the total productivity-based productivity measurement system that any bus manager can use. All 3 objectives were successfully accomplished.

  9. The study design elements employed by researchers in preclinical animal experiments from two research domains and implications for automation of systematic reviews.

    PubMed

    O'Connor, Annette M; Totton, Sarah C; Cullen, Jonah N; Ramezani, Mahmood; Kalivarapu, Vijay; Yuan, Chaohui; Gilbert, Stephen B

    2018-01-01

    Systematic reviews are increasingly using data from preclinical animal experiments in evidence networks. Further, there are ever-increasing efforts to automate aspects of the systematic review process. When assessing systematic bias and unit-of-analysis errors in preclinical experiments, it is critical to understand the study design elements employed by investigators. Such information can also inform prioritization of automation efforts that allow the identification of the most common issues. The aim of this study was to identify the design elements used by investigators in preclinical research in order to inform unique aspects of assessment of bias and error in preclinical research. Using 100 preclinical experiments each related to brain trauma and toxicology, we assessed design elements described by the investigators. We evaluated Methods and Materials sections of reports for descriptions of the following design elements: 1) use of comparison group, 2) unit of allocation of the interventions to study units, 3) arrangement of factors, 4) method of factor allocation to study units, 5) concealment of the factors during allocation and outcome assessment, 6) independence of study units, and 7) nature of factors. Many investigators reported using design elements that suggested the potential for unit-of-analysis errors, i.e., descriptions of repeated measurements of the outcome (94/200) and descriptions of potential for pseudo-replication (99/200). Use of complex factor arrangements was common, with 112 experiments using some form of factorial design (complete, incomplete or split-plot-like). In the toxicology dataset, 20 of the 100 experiments appeared to use a split-plot-like design, although no investigators used this term. The common use of repeated measures and factorial designs means understanding bias and error in preclinical experimental design might require greater expertise than simple parallel designs. Similarly, use of complex factor arrangements creates

  10. A visual surveillance system for person re-identification

    NASA Astrophysics Data System (ADS)

    El-Alfy, Hazem; Muramatsu, Daigo; Teranishi, Yuuichi; Nishinaga, Nozomu; Makihara, Yasushi; Yagi, Yasushi

    2017-03-01

    We address the problem of autonomous surveillance for person re-identification. This is an active research area, where most recent work focuses on the open challenges of re-identification, independently of the prerequisite detection and tracking steps. In this paper, we are interested in designing a complete surveillance system, joining all the pieces of the puzzle together. We start by collecting our own dataset from multiple cameras. Then, we automate the process of detection and tracking of human subjects in the scenes, followed by the re-identification task. We evaluate the recognition performance of our system, report its strengths, discuss open challenges and suggest ways to address them.

  11. Automated multi-lesion detection for referable diabetic retinopathy in indigenous health care.

    PubMed

    Pires, Ramon; Carvalho, Tiago; Spurling, Geoffrey; Goldenstein, Siome; Wainer, Jacques; Luckie, Alan; Jelinek, Herbert F; Rocha, Anderson

    2015-01-01

    Diabetic Retinopathy (DR) is a complication of diabetes mellitus that affects more than one-quarter of the population with diabetes, and can lead to blindness if not discovered in time. An automated screening enables the identification of patients who need further medical attention. This study aimed to classify retinal images of Aboriginal and Torres Strait Islander peoples utilizing an automated computer-based multi-lesion eye screening program for diabetic retinopathy. The multi-lesion classifier was trained on 1,014 images from the São Paulo Eye Hospital and tested on retinal images containing no DR-related lesion, single lesions, or multiple types of lesions from the Inala Aboriginal and Torres Strait Islander health care centre. The automated multi-lesion classifier has the potential to enhance the efficiency of clinical practice delivering diabetic retinopathy screening. Our program does not necessitate image samples for training from any specific ethnic group or population being assessed and is independent of image pre- or post-processing to identify retinal lesions. In this Aboriginal and Torres Strait Islander population, the program achieved 100% sensitivity and 88.9% specificity in identifying bright lesions, while detection of red lesions achieved a sensitivity of 67% and specificity of 95%. When both bright and red lesions were present, 100% sensitivity with 88.9% specificity was obtained. All results obtained with this automated screening program meet WHO standards for diabetic retinopathy screening.

  12. Automated synthetic scene generation

    NASA Astrophysics Data System (ADS)

    Givens, Ryan N.

    Physics-based simulations generate synthetic imagery to help organizations anticipate system performance of proposed remote sensing systems. However, manually constructing synthetic scenes which are sophisticated enough to capture the complexity of real-world sites can take days to months depending on the size of the site and desired fidelity of the scene. This research, sponsored by the Air Force Research Laboratory's Sensors Directorate, successfully developed an automated approach to fuse high-resolution RGB imagery, lidar data, and hyperspectral imagery and then extract the necessary scene components. The method greatly reduces the time and money required to generate realistic synthetic scenes and developed new approaches to improve material identification using information from all three of the input datasets.

  13. Evaluating the efficacy of fully automated approaches for the selection of eye blink ICA components

    PubMed Central

    Pontifex, Matthew B.; Miskovic, Vladimir; Laszlo, Sarah

    2017-01-01

    Independent component analysis (ICA) offers a powerful approach for the isolation and removal of eye blink artifacts from EEG signals. Manual identification of the eye blink ICA component by inspection of scalp map projections, however, is prone to error, particularly when non-artifactual components exhibit topographic distributions similar to the blink. The aim of the present investigation was to determine the extent to which automated approaches for selecting eye blink related ICA components could be utilized to replace manual selection. We evaluated popular blink selection methods relying on spatial features [EyeCatch()], combined stereotypical spatial and temporal features [ADJUST()], and a novel method relying on time-series features alone [icablinkmetrics()] using both simulated and real EEG data. The results of this investigation suggest that all three methods of automatic component selection are able to accurately identify eye blink related ICA components at or above the level of trained human observers. However, icablinkmetrics(), in particular, appears to provide an effective means of automating ICA artifact rejection while at the same time eliminating human errors inevitable during manual component selection and false positive component identifications common in other automated approaches. Based upon these findings, best practices for 1) identifying artifactual components via automated means and 2) reducing the accidental removal of signal-related ICA components are discussed. PMID:28191627
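
    The icablinkmetrics() metrics themselves (correlation with the blink artifact, convolution, and percent reduction after component removal) are not reproduced here; the sketch below only shows the simplest time-series idea, picking the ICA component whose activation correlates most strongly with an eye (EOG-like) reference channel, on synthetic data.

```python
# Rough sketch of time-series-based blink-component selection on synthetic data:
# decompose multichannel signals with ICA and pick the component whose activation
# correlates most with an EOG-like reference. Real pipelines use richer metrics.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples = 5000
blink = np.zeros(n_samples)
blink[::500] = 1.0
blink = np.convolve(blink, np.hanning(80), mode="same")   # blink-like bursts
neural = rng.normal(0, 1, (3, n_samples))                 # background "brain" sources

mixing = rng.normal(0, 1, (8, 4))                         # 8 channels, 4 sources
eeg = mixing @ np.vstack([blink, neural]) + 0.05 * rng.normal(0, 1, (8, n_samples))
eog = blink + 0.1 * rng.normal(0, 1, n_samples)           # reference eye channel

ica = FastICA(n_components=4, random_state=0)
activations = ica.fit_transform(eeg.T).T                  # components x samples

corrs = [abs(np.corrcoef(act, eog)[0, 1]) for act in activations]
print("blink component index:", int(np.argmax(corrs)), "corr:", round(max(corrs), 2))
```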

  14. Complacency and Automation Bias in the Use of Imperfect Automation.

    PubMed

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  15. Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy

    NASA Astrophysics Data System (ADS)

    Bucht, Curry; Söderberg, Per; Manneberg, Göran

    2009-02-01

    The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images. The digitally enhanced images of the corneal endothelium were transformed using the fast Fourier transform (FFT). Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image were used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Results in the form of estimated cell density of the corneal endothelium were obtained using the fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a relatively strong correlation was found.
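
    A minimal sketch of the Fourier-domain idea described above: the quasi-regular endothelial mosaic produces a ring of power in the 2-D spectrum whose radius corresponds to the mean cell spacing, from which a density can be estimated. The hexagonal-packing conversion and pixel-size handling below are illustrative assumptions, not the authors' calibration.

```python
# Sketch: estimate endothelial cell density from the dominant ring in the power spectrum.
import numpy as np

def endothelial_density(image, pixel_size_mm):
    """Return an approximate cell density (cells/mm^2) for a square CSM image."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean()))) ** 2
    cy, cx = np.array(spec.shape) // 2
    y, x = np.indices(spec.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    # Radially averaged power spectrum.
    radial = np.bincount(r.ravel(), weights=spec.ravel()) / np.bincount(r.ravel())
    radial[:2] = 0                                   # suppress the DC neighbourhood
    ring_radius = radial.argmax()                    # pixels in frequency space
    freq = ring_radius / (spec.shape[0] * pixel_size_mm)  # cycles per mm
    spacing = 1.0 / freq                             # mean cell spacing in mm
    return 2.0 / (np.sqrt(3.0) * spacing ** 2)       # hexagonal packing assumption
```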

  16. Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving.

    PubMed

    Hergeth, Sebastian; Lorenz, Lutz; Vilimek, Roman; Krems, Josef F

    2016-05-01

    The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving. © 2016, Human Factors and Ergonomics Society.

  17. A tool for developing an automatic insect identification system based on wing outlines

    PubMed Central

    Yang, He-Ping; Ma, Chun-Sen; Wen, Hui; Zhan, Qing-Bin; Wang, Xin-Li

    2015-01-01

    For some insect groups, wing outline is an important character for species identification. We have constructed a program as the integral part of an automated system to identify insects based on wing outlines (DAIIS). This program includes two main functions: (1) outline digitization and Elliptic Fourier transformation and (2) classifier model training by pattern recognition of support vector machines and model validation. To demonstrate the utility of this program, a sample of 120 owlflies (Neuroptera: Ascalaphidae) was split into training and validation sets. After training, the sample was sorted into seven species using this tool. In five repeated experiments, the mean accuracy for identification of each species ranged from 90% to 98%. The accuracy increased to 99% when the samples were first divided into two groups based on features of their compound eyes. DAIIS can therefore be a useful tool for developing a system of automated insect identification. PMID:26251292
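
    The pipeline described above (elliptic Fourier descriptors of the wing outline feeding a support vector machine) can be sketched as follows. This assumes the third-party pyefd and scikit-learn packages; `outlines` and `labels` are placeholders for digitized wing contours ((N, 2) point arrays) and species labels, and the SVM settings are illustrative, not those of DAIIS.

```python
# Sketch: elliptic Fourier descriptors of wing outlines + SVM species classifier.
import numpy as np
from pyefd import elliptic_fourier_descriptors
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def efd_features(contour, order=20):
    # Normalized coefficients are invariant to rotation, scale and starting point.
    coeffs = elliptic_fourier_descriptors(contour, order=order, normalize=True)
    return coeffs.ravel()[3:]  # first three normalized coefficients are constant

def train_wing_classifier(outlines, labels, order=20):
    X = np.array([efd_features(c, order) for c in outlines])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
    return clf.fit(X, labels)
```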

  18. Vertebra identification using template matching model and K-means clustering.

    PubMed

    Larhmam, Mohamed Amine; Benjelloun, Mohammed; Mahmoudi, Saïd

    2014-03-01

    Accurate vertebra detection and segmentation are essential steps for automating the diagnosis of spinal disorders. This study is dedicated to vertebra alignment measurement, the first step in a computer-aided diagnosis tool for cervical spine trauma. Automated vertebral segment alignment determination is a challenging task due to low-contrast imaging and noise. A software tool for segmenting vertebrae and detecting subluxations has clinical significance. A robust method was developed and tested for cervical vertebra identification and segmentation that extracts parameters used for vertebra alignment measurement. Our contribution involves a novel combination of a template matching method and an unsupervised clustering algorithm. In this method, we build a geometric vertebra mean model. To achieve vertebra detection, manual selection of the region of interest is performed initially on the input image. Subsequent preprocessing is done to enhance image contrast and detect edges. Candidate vertebra localization is then carried out by using a modified generalized Hough transform (GHT). Next, an adapted cost function is used to compute local voted centers and filter boundary data. Thereafter, a K-means clustering algorithm is applied to obtain a cluster distribution corresponding to the targeted vertebrae. These clusters are combined with the vote parameters to detect vertebra centers. Rigid segmentation is then carried out by using GHT parameters. Finally, cervical spine curves are extracted to measure vertebra alignment. The proposed approach was successfully applied to a set of 66 high-resolution X-ray images. Robust detection was achieved in 97.5% of the 330 tested cervical vertebrae. An automated vertebral identification method was developed and demonstrated to be robust to noise and occlusion. This work presents a first step toward an automated computer-aided diagnosis system for cervical spine trauma detection.
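
    The clustering step described above can be sketched with scikit-learn: candidate centre points voted by the generalized Hough transform are grouped with K-means so that each cluster corresponds to one target vertebra. The candidate points, vote weights, and expected vertebra count below are illustrative assumptions, not the published parameters.

```python
# Sketch: group GHT-voted candidate centres into one cluster per vertebra.
import numpy as np
from sklearn.cluster import KMeans

def cluster_vertebra_centers(vote_points, vote_weights, n_vertebrae=5):
    """vote_points: (N, 2) candidate centres; vote_weights: (N,) GHT vote scores."""
    km = KMeans(n_clusters=n_vertebrae, n_init=10, random_state=0)
    labels = km.fit_predict(vote_points, sample_weight=vote_weights)
    centers = []
    for k in range(n_vertebrae):
        pts, w = vote_points[labels == k], vote_weights[labels == k]
        centers.append(np.average(pts, axis=0, weights=w))  # vote-weighted centre
    return np.array(centers)
```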

  19. Systems Operation Studies for Automated Guideway Transit Systems : Summary Report

    DOT National Transportation Integrated Search

    1980-02-01

    In order to examine specific Automated Guideway Transit (AGT) developments and concepts and to build a better knowledge base for future decision-making, UMTA has undertaken a new program of studies and technology investigations called the Urban Mass ...

  20. A Fully Automated Microfluidic Femtosecond Laser Axotomy Platform for Nerve Regeneration Studies in C. elegans

    PubMed Central

    Gokce, Sertan Kutal; Guo, Samuel X.; Ghorashian, Navid; Everett, W. Neil; Jarrell, Travis; Kottek, Aubri; Bovik, Alan C.; Ben-Yakar, Adela

    2014-01-01

    Femtosecond laser nanosurgery has been widely accepted as an axonal injury model, enabling nerve regeneration studies in the small model organism, Caenorhabditis elegans. To overcome the time limitations of manual worm handling techniques, automation and new immobilization technologies must be adopted to improve throughput in these studies. While new microfluidic immobilization techniques have been developed that promise to reduce the time required for axotomies, there is a need for automated procedures to minimize the required amount of human intervention and accelerate the axotomy processes crucial for high-throughput. Here, we report a fully automated microfluidic platform for performing laser axotomies of fluorescently tagged neurons in living Caenorhabditis elegans. The presented automation process reduces the time required to perform axotomies within individual worms to ∼17 s/worm, at least one order of magnitude faster than manual approaches. The full automation is achieved with a unique chip design and an operation sequence that is fully computer controlled and synchronized with efficient and accurate image processing algorithms. The microfluidic device includes a T-shaped architecture and three-dimensional microfluidic interconnects to serially transport, position, and immobilize worms. The image processing algorithms can identify and precisely position axons targeted for ablation. There were no statistically significant differences observed in reconnection probabilities between axotomies carried out with the automated system and those performed manually with anesthetics. The overall success rate of automated axotomies was 67.4±3.2% of the cases (236/350) at an average processing rate of 17.0±2.4 s. This fully automated platform establishes a promising methodology for prospective genome-wide screening of nerve regeneration in C. elegans in a truly high-throughput manner. PMID:25470130

  1. Automated Genotyping of a Highly Informative Panel of 40 Short Insertion-Deletion Polymorphisms Resolved in Polyacrylamide Gels for Forensic Identification and Kinship Analysis

    PubMed Central

    Pena, Heloisa B.; Pena, Sérgio D. J.

    2012-01-01

    Objective Short insertion-deletion polymorphisms (indels) are the second most abundant form of genetic variation in humans after SNPs. Since indel alleles differ in size, they can be typed using the same methodological approaches and equipment currently utilized for microsatellite genotyping, which is already operational in forensic laboratories. We have previously shown that a panel of 40 carefully chosen indels has excellent potential for forensic identification, with a combined probability of identity (match probability) of 7.09 × 10^-17 for Europeans. Methods We describe the successful development of a multiplex system for genotyping the 40-indel panel in long thin denaturing polyacrylamide gels with silver staining. We also demonstrate that the system can be easily fully automated with a simple large scanner and commercial software. Results and Conclusion The great advantage of the new system of typing is its very low cost. The total price for laboratory equipment is less than EUR 10,000.-, and genotyping of an individual patient will cost less than EUR 10.- in supplies. Thus, the 40-indel panel described here and the newly developed ‘low-tech’ analysis platform represent useful new tools for forensic identification and kinship analysis in laboratories with limited budgets, especially in developing countries. PMID:22851937

  2. [The study of medical supplies automation replenishment algorithm in hospital on medical supplies supplying chain].

    PubMed

    Sheng, Xi

    2012-07-01

    This study examines an automated replenishment algorithm for medical supplies in hospitals along the medical supplies supply chain. A mathematical model and algorithm for automated replenishment of medical supplies are designed with reference to practical data from hospitals, on the basis of inventory theory, a greedy algorithm, and a partition algorithm. The automated replenishment algorithm is shown to calculate medical supplies distribution amounts automatically and to optimize the distribution scheme. It can be concluded that this model and algorithm, if applied in the medical supplies circulation field, could provide theoretical and technological support for realizing automated replenishment of medical supplies in hospitals along the supply chain.
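
    As a rough illustration of the kind of inventory-theoretic rule such a system builds on, the sketch below computes an order quantity from an order-up-to policy based on average demand, lead time and safety stock. The parameters are illustrative; the paper's own model and its greedy/partition algorithms are not reproduced here.

```python
# Sketch: reorder-point / order-up-to replenishment quantity for one supply item.
def replenishment_quantity(on_hand, on_order, daily_demand, lead_time_days,
                           review_period_days, safety_stock):
    # Order up to the stock needed to cover demand over the lead time plus the
    # review period, plus safety stock; order nothing if the current inventory
    # position already suffices.
    order_up_to = daily_demand * (lead_time_days + review_period_days) + safety_stock
    position = on_hand + on_order
    return max(0, round(order_up_to - position))

print(replenishment_quantity(on_hand=120, on_order=0, daily_demand=35,
                             lead_time_days=2, review_period_days=7,
                             safety_stock=70))  # -> order 265 units
```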

  3. Identification of Group G Streptococcal Isolates from Companion Animals in Japan and Their Antimicrobial Resistance Patterns.

    PubMed

    Tsuyuki, Yuzo; Kurita, Goro; Murata, Yoshiteru; Goto, Mieko; Takahashi, Takashi

    2017-07-24

    In this study, we conducted a species-level identification of group G streptococcal (GGS) isolates from companion animals in Japan and analyzed antimicrobial resistance (AMR) patterns. Strains were isolated from sterile and non-sterile specimens collected from 72 animals with clinical signs or symptoms in April-May 2015. We identified the strains by 16S rRNA sequencing, mass spectrometry (MS), and an automated method based on their biochemical properties. Antimicrobial susceptibility was determined using the broth microdilution method and E-test. AMR determinants (erm(A), erm(B), mef(A), tet(M), tet(O), tet(K), tet(L), and tet(S)) in corresponding resistant isolates were amplified by PCR. The 16S rRNA sequencing identified the GGS species as Streptococcus canis (n = 68), Streptococcus dysgalactiae subsp. equisimilis (n = 3), and S. dysgalactiae subsp. dysgalactiae (n = 1). However, there were discrepancies between the sequencing data and both the MS and automated identification data. MS and the automated biochemical technique identified 18 and 37 of the 68 sequencing-identified S. canis strains, respectively. The AMR rates were 20.8% for tetracycline and 5.6% for clarithromycin, with MIC50-MIC90 values of 2-64 and ≤0.12-0.25 μg/mL, respectively. AMR genotyping showed single or combined genotypes: erm(B) or tet(M)-tet(O)-tet(S). Our findings show the unique characteristics of GGS isolates from companion animals in Japan in terms of species-level identification and AMR patterns.

  4. Automation: Decision Aid or Decision Maker?

    NASA Technical Reports Server (NTRS)

    Skitka, Linda J.

    1998-01-01

    This study clarified that automation bias is something unique to automated decision-making contexts, and is not the result of a general tendency toward complacency. By comparing performance on exactly the same events on the same tasks with and without an automated decision aid, we were able to determine that at least the omission-error part of automation bias is due to the unique context created by having an automated decision aid, and is not a phenomenon that would occur even if people were not in an automated context. However, this study also revealed that having an automated decision aid did lead to modestly improved performance across all non-error events. Participants in the non-automated condition responded with 83.68% accuracy, whereas participants in the automated condition responded with 88.67% accuracy, across all events. Automated decision aids clearly led to better overall performance when they were accurate. People performed almost exactly at the level of reliability of the automation (which across events was 88% reliable). However, it is also clear that the presence of less than 100% accurate automated decision aids creates a context in which new kinds of errors in decision making can occur. Participants in the non-automated condition responded with 97% accuracy on the six "error" events, whereas participants in the automated condition had only a 65% accuracy rate when confronted with those same six events. In short, the presence of an AMA can lead to vigilance decrements that can lead to errors in decision making.

  5. Automated Dissolution for Enteric-Coated Aspirin Tablets: A Case Study for Method Transfer to a RoboDis II.

    PubMed

    Ibrahim, Sarah A; Martini, Luigi

    2014-08-01

    Dissolution method transfer is a complicated yet common process in the pharmaceutical industry. With increased pharmaceutical product manufacturing and dissolution acceptance requirements, dissolution testing has become one of the most labor-intensive quality control testing methods. There is an increased trend for automation in dissolution testing, particularly for large pharmaceutical companies to reduce variability and increase personnel efficiency. There is no official guideline for dissolution testing method transfer from a manual, semi-automated, to automated dissolution tester. In this study, a manual multipoint dissolution testing procedure for an enteric-coated aspirin tablet was transferred effectively and reproducibly to a fully automated dissolution testing device, RoboDis II. Enteric-coated aspirin samples were used as a model formulation to assess the feasibility and accuracy of media pH change during continuous automated dissolution testing. Several RoboDis II parameters were evaluated to ensure the integrity and equivalency of dissolution method transfer from a manual dissolution tester. This current study provides a systematic outline for the transfer of the manual dissolution testing protocol to an automated dissolution tester. This study further supports that automated dissolution testers compliant with regulatory requirements and similar to manual dissolution testers facilitate method transfer. © 2014 Society for Laboratory Automation and Screening.

  6. Automated Essay Scoring versus Human Scoring: A Correlational Study

    ERIC Educational Resources Information Center

    Wang, Jinhao; Brown, Michelle Stallone

    2008-01-01

    The purpose of the current study was to analyze the relationship between automated essay scoring (AES) and human scoring in order to determine the validity and usefulness of AES for large-scale placement tests. Specifically, a correlational research design was used to examine the correlations between AES performance and human raters' performance.…

  7. Space station automation study. Automation requirements derived from space manufacturing concepts. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The two manufacturing concepts developed represent innovative, technologically advanced manufacturing schemes. The concepts were selected to facilitate an in depth analysis of manufacturing automation requirements in the form of process mechanization, teleoperation and robotics, and artificial intelligence. While the cost effectiveness of these facilities has not been analyzed as part of this study, both appear entirely feasible for the year 2000 timeframe. The growing demand for high quality gallium arsenide microelectronics may warrant the ventures.

  8. Evaluation of the utility of a glycemic pattern identification system.

    PubMed

    Otto, Erik A; Tannan, Vinay

    2014-07-01

    With the increasing prevalence of systems allowing automated, real-time transmission of blood glucose data, there is a need for pattern recognition techniques that can alert users to deleterious patterns in glycemic control when people test. We evaluated the utility of pattern identification with a novel pattern identification system named Vigilant™ and compared it to standard pattern identification methods in diabetes. To characterize the importance of an identified pattern, we evaluated the relative risk of future hypoglycemic and hyperglycemic events in diurnal periods following identification of a pattern in a dataset of 536 patients with diabetes. We evaluated events 2 days, 7 days, 30 days, and 61-90 days from pattern identification, across diabetes types and cohorts of glycemic control, and also compared the system to 6 pattern identification methods consisting of deleterious event counts and percentages over 5-, 14-, and 30-day windows. Episodes of hypoglycemia, hyperglycemia, severe hypoglycemia, and severe hyperglycemia were 120%, 46%, 123%, and 76% more likely after pattern identification, respectively, compared to periods when no pattern was identified. The system was also significantly more predictive of deleterious events than the other pattern identification methods evaluated, and remained predictive up to 3 months after pattern identification. The system identified patterns that are significantly predictive of deleterious glycemic events, and more so than many pattern identification methods used in diabetes management today. Further study will inform how improved pattern identification can lead to improved glycemic control. © 2014 Diabetes Technology Society.
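
    The headline statistic above is a relative risk: the event rate in periods following an identified pattern divided by the event rate in periods without one. The sketch below shows the arithmetic; the counts are made-up values chosen only to reproduce a 120% increase, not the study's data.

```python
# Sketch: relative risk of a deleterious event after pattern identification.
def relative_risk(events_after_pattern, periods_after_pattern,
                  events_no_pattern, periods_no_pattern):
    risk_pattern = events_after_pattern / periods_after_pattern
    risk_baseline = events_no_pattern / periods_no_pattern
    return risk_pattern / risk_baseline

rr = relative_risk(events_after_pattern=55, periods_after_pattern=500,
                   events_no_pattern=50, periods_no_pattern=1000)
print(f"hypoglycemia {100 * (rr - 1):.0f}% more likely after a pattern")  # ~120%
```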

  9. Automated identification of potential snow avalanche release areas based on digital elevation models

    NASA Astrophysics Data System (ADS)

    Bühler, Y.; Kumar, S.; Veitinger, J.; Christen, M.; Stoffel, A.; Snehmani

    2013-05-01

    The identification of snow avalanche release areas is a very difficult task. The release mechanism of snow avalanches depends on many different terrain, meteorological, snowpack and triggering parameters and their interactions, which are very difficult to assess. In many alpine regions such as the Indian Himalaya, nearly no information on avalanche release areas exists, mainly due to the very rough and poorly accessible terrain, the vast size of the region and the lack of avalanche records. However, avalanche release information is urgently required for numerical simulation of avalanche events to plan mitigation measures, for hazard mapping and to secure important roads. The Rohtang tunnel access road near Manali, Himachal Pradesh, India, is such an example. By far the most reliable way to identify avalanche release areas is to use historic avalanche records and field investigations carried out by avalanche experts in the formation zones. But neither method is feasible for this area, due to the rough terrain, its vast extent and lack of time. Therefore, we develop an operational, easy-to-use automated potential release area (PRA) detection tool in Python/ArcGIS which uses high spatial resolution digital elevation models (DEMs) and forest cover information derived from airborne remote sensing instruments as input. Such instruments can acquire spatially continuous data even over inaccessible terrain and cover large areas. We validate our tool using a database of historic avalanches acquired over 56 years in the neighborhood of Davos, Switzerland, and apply the method to the avalanche tracks along the Rohtang tunnel access road. This tool, used by avalanche experts, delivers valuable input for identifying focus areas for more detailed investigations of avalanche release areas in remote regions such as the Indian Himalaya and is a precondition for large-scale avalanche hazard mapping.
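
    One ingredient of DEM-based release-area detection is a slope criterion: slab avalanches typically release on slopes within a characteristic angle band, outside forested terrain. The sketch below computes slope from a gridded DEM and applies such a mask; the slope limits and inputs are illustrative assumptions, not the published tool's rules.

```python
# Sketch: slope-band masking of a DEM as a first cut at potential release areas.
import numpy as np

def potential_release_mask(dem, cell_size, forest_mask,
                           min_slope_deg=30.0, max_slope_deg=60.0):
    """dem: 2-D elevation grid; cell_size: grid spacing; forest_mask: boolean grid."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return (slope >= min_slope_deg) & (slope <= max_slope_deg) & (~forest_mask)
```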

  10. Assessment Study on Sensors and Automation in the Industries of the Future. Reports on Industrial Controls, Information Processing, Automation, and Robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Bonnie; Boddy, Mark; Doyle, Frank

    This report presents the results of an expert study to identify research opportunities for Sensors & Automation, a sub-program of the U.S. Department of Energy (DOE) Industrial Technologies Program (ITP). The research opportunities are prioritized by realizable energy savings. The study encompasses the technology areas of industrial controls, information processing, automation, and robotics. These areas have been central areas of focus of many Industries of the Future (IOF) technology roadmaps. This report identifies opportunities for energy savings as a direct result of advances in these areas and also recognizes indirect means of achieving energy savings, such as product quality improvement, productivity improvement, and reduction of recycle.

  11. Utility in a Fallible Tool: A Multi-Site Case Study of Automated Writing Evaluation

    ERIC Educational Resources Information Center

    Grimes, Douglas; Warschauer, Mark

    2010-01-01

    Automated writing evaluation (AWE) software uses artificial intelligence (AI) to score student essays and support revision. We studied how an AWE program called MY Access![R] was used in eight middle schools in Southern California over a three-year period. Although many teachers and students considered automated scoring unreliable, and teachers'…

  12. Improving automatic peptide mass fingerprint protein identification by combining many peak sets.

    PubMed

    Rögnvaldsson, Thorsteinn; Häkkinen, Jari; Lindberg, Claes; Marko-Varga, György; Potthast, Frank; Samuelsson, Jim

    2004-08-05

    An automated peak picking strategy is presented where several peak sets with different signal-to-noise levels are combined to form a more reliable statement on the protein identity. The strategy is compared against both manual peak picking and industry-standard automated peak picking on a set of mass spectra obtained after tryptic in-gel digestion of 2D-gel samples from human fetal fibroblasts. The set of spectra contains samples ranging from strong to weak spectra, and the proposed multiple-scale method is shown to be much better on weak spectra than the industry-standard method and a human operator, and equal in performance to these on strong and medium-strength spectra. It is also demonstrated that peak sets selected by a human operator display considerable variability and that it is impossible to speak of a single "true" peak set for a given spectrum. The described multiple-scale strategy both avoids time-consuming parameter tuning and exceeds the human operator in protein identification efficiency. The strategy therefore promises reliable, automated, user-independent protein identification using peptide mass fingerprints.

  13. Robust System for Automated Identification of Martian Impact Craters

    NASA Astrophysics Data System (ADS)

    Stepinski, T. F.; Mendenhall, M. P.

    2006-12-01

    Detailed analysis of the number and morphology of impact craters on Mars provides a wealth of information about the geologic history of its surface. Global catalogs of Martian craters have been compiled (for example, the Barlow catalog), but they are not comprehensive, especially for small craters. Existing methods for machine detection of craters from images suffer from low efficiency and are not practical for global surveys. We have developed a robust two-stage system for automated cataloging of craters from digital topography data (DEMs). In the first stage, an innovative crater-finding transform is performed on a DEM to identify centers of potential craters, their extents, and their basic characteristics. This stage produces a preliminary catalog. In the second stage, machine learning methods are employed to eliminate false positives. Using the MOLA-derived DEMs with a resolution of 1/128 degrees/pixel, we have applied our system to six ~10^6 km^2 sites. The system has identified 3217 craters, 43% more than are present in the Barlow catalog. The extra finds are predominantly small craters that are most difficult to account for in manual surveys. Because our automated survey is DEM-based, the resulting catalog lists craters' depths in addition to their positions, sizes, and measures of shape. This feature significantly increases the scientific utility of any catalog generated using our system. Our initial calculations yield a training set that will be used to identify craters over the entire Martian surface with an estimated accuracy of 95%. Moreover, because our method is pixel-based and scale-independent, the present training set may be used to identify craters in higher-resolution DEMs derived from Mars Express HRSC images. It also can be applied to future topography data from Mars and other planets. For example, it may be utilized to catalog craters on Mercury and the Moon using altimetry data to be gathered by Messenger and the Lunar Reconnaissance Orbiter.

  14. Cockpit Adaptive Automation and Pilot Performance

    NASA Technical Reports Server (NTRS)

    Parasuraman, Raja

    2001-01-01

    The introduction of high-level automated systems in the aircraft cockpit has provided several benefits, e.g., new capabilities, enhanced operational efficiency, and reduced crew workload. At the same time, conventional 'static' automation has sometimes degraded human operator monitoring performance, increased workload, and reduced situation awareness. Adaptive automation represents an alternative to static automation. In this approach, task allocation between human operators and computer systems is flexible and context-dependent rather than static. Adaptive automation, or adaptive task allocation, is thought to provide for regulation of operator workload and performance, while preserving the benefits of static automation. In previous research we have reported beneficial effects of adaptive automation on the performance of both pilots and non-pilots of flight-related tasks. For adaptive systems to be viable, however, such benefits need to be examined jointly in the context of a single set of tasks. The studies carried out under this project evaluated a systematic method for combining different forms of adaptive automation. A model for effective combination of different forms of adaptive automation, based on matching adaptation to operator workload, was proposed and tested. The model was evaluated in studies using IFR-rated pilots flying a general-aviation simulator. Performance, subjective, and physiological (heart rate variability, eye scan-paths) measures of workload were recorded. The studies compared workload-based adaptation to non-adaptive control conditions and found evidence for systematic benefits of adaptive automation. The research provides an empirical basis for evaluating the effectiveness of adaptive automation in the cockpit. The results contribute to the development of design principles and guidelines for the implementation of adaptive automation in the cockpit, particularly in general aviation, and in other human-machine systems.

  15. A Fully Automated High-Throughput Flow Cytometry Screening System Enabling Phenotypic Drug Discovery.

    PubMed

    Joslin, John; Gilligan, James; Anderson, Paul; Garcia, Catherine; Sharif, Orzala; Hampton, Janice; Cohen, Steven; King, Miranda; Zhou, Bin; Jiang, Shumei; Trussell, Christopher; Dunn, Robert; Fathman, John W; Snead, Jennifer L; Boitano, Anthony E; Nguyen, Tommy; Conner, Michael; Cooke, Mike; Harris, Jennifer; Ainscow, Ed; Zhou, Yingyao; Shaw, Chris; Sipes, Dan; Mainquist, James; Lesley, Scott

    2018-05-01

    The goal of high-throughput screening is to enable screening of compound libraries in an automated manner to identify quality starting points for optimization. This often involves screening a large diversity of compounds in an assay that preserves a connection to the disease pathology. Phenotypic screening is a powerful tool for drug identification, in that assays can be run without prior understanding of the target and with primary cells that closely mimic the therapeutic setting. Advanced automation and high-content imaging have enabled many complex assays, but these are still relatively slow and low throughput. To address this limitation, we have developed an automated workflow that is dedicated to processing complex phenotypic assays for flow cytometry. The system can achieve a throughput of 50,000 wells per day, resulting in a fully automated platform that enables robust phenotypic drug discovery. Over the past 5 years, this screening system has been used for a variety of drug discovery programs, across many disease areas, with many molecules advancing quickly into preclinical development and into the clinic. This report will highlight a diversity of approaches that automated flow cytometry has enabled for phenotypic drug discovery.

  16. Automated detection of slum area change in Hyderabad, India using multitemporal satellite imagery

    NASA Astrophysics Data System (ADS)

    Kit, Oleksandr; Lüdeke, Matthias

    2013-09-01

    This paper presents an approach to automated identification of slum area change patterns in Hyderabad, India, using multi-year and multi-sensor very high resolution satellite imagery. It relies upon a lacunarity-based slum detection algorithm, combined with Canny- and LSD-based imagery pre-processing routines. This method outputs plausible and spatially explicit slum locations for the whole urban agglomeration of Hyderabad in years 2003 and 2010. The results indicate a considerable growth of area occupied by slums between these years and allow identification of trends in slum development in this urban agglomeration.
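
    The texture measure underlying the slum-detection algorithm above is lacunarity: for a given gliding-box size, it is the ratio of the second moment to the squared first moment of the box "mass" (number of foreground pixels). The sketch below computes it for a binary image (e.g., Canny edges); the box size and test data are illustrative, not the paper's settings.

```python
# Sketch: gliding-box lacunarity of a binary image.
import numpy as np

def lacunarity(binary_image, box_size):
    img = binary_image.astype(float)
    # Box mass for every gliding-box position via a 2-D cumulative-sum trick.
    c = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    r = box_size
    mass = (c[r:, r:] - c[:-r, r:] - c[r:, :-r] + c[:-r, :-r]).ravel()
    return mass.var() / mass.mean() ** 2 + 1.0   # E[M^2] / E[M]^2

rng = np.random.default_rng(1)
texture = rng.random((256, 256)) < 0.2           # toy 20%-density binary texture
print(round(lacunarity(texture, box_size=16), 3))
```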

  17. Automated segmentation of retinal pigment epithelium cells in fluorescence adaptive optics images.

    PubMed

    Rangel-Fonseca, Piero; Gómez-Vieyra, Armando; Malacara-Hernández, Daniel; Wilson, Mario C; Williams, David R; Rossi, Ethan A

    2013-12-01

    Adaptive optics (AO) imaging methods allow the histological characteristics of retinal cell mosaics, such as photoreceptors and retinal pigment epithelium (RPE) cells, to be studied in vivo. The high-resolution images obtained with ophthalmic AO imaging devices are rich with information that is difficult and/or tedious to quantify using manual methods. Thus, robust, automated analysis tools that can provide reproducible quantitative information about the cellular mosaics under examination are required. Automated algorithms have been developed to detect the position of individual photoreceptor cells; however, most of these methods are not well suited for characterizing the RPE mosaic. We have developed an algorithm for RPE cell segmentation and show its performance here on simulated and real fluorescence AO images of the RPE mosaic. Algorithm performance was compared to manual cell identification and yielded better than 91% correspondence. This method can be used to segment RPE cells for morphometric analysis of the RPE mosaic and speed the analysis of both healthy and diseased RPE mosaics.

  18. Assessment selection in human-automation interaction studies: The Failure-GAM2E and review of assessment methods for highly automated driving.

    PubMed

    Grane, Camilla

    2018-01-01

    Highly automated driving will change drivers' behavioural patterns. Traditional methods used for assessing manual driving will only be applicable to the parts of human-automation interaction where the driver intervenes, such as in hand-over and take-over situations. Therefore, driver behaviour assessment will need to adapt to the new driving scenarios. This paper aims at simplifying the process of selecting appropriate assessment methods. Thirty-five papers were reviewed to examine potential and relevant methods. The review showed that many studies still rely on traditional driving assessment methods. A new method, the Failure-GAM2E model, with the purpose of aiding assessment selection when planning a study, is proposed and exemplified in the paper. Failure-GAM2E includes a systematic step-by-step procedure defining the situation, failures (Failure), goals (G), actions (A), subjective methods (M), objective methods (M) and equipment (E). The use of Failure-GAM2E in a study example resulted in a well-reasoned assessment plan, a new way of measuring trust through feet movements and a proposed Optimal Risk Management Model. Failure-GAM2E and the Optimal Risk Management Model are believed to support the planning process for research studies in the field of human-automation interaction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. An automated high throughput screening-compatible assay to identify regulators of stem cell neural differentiation.

    PubMed

    Casalino, Laura; Magnani, Dario; De Falco, Sandro; Filosa, Stefania; Minchiotti, Gabriella; Patriarca, Eduardo J; De Cesare, Dario

    2012-03-01

    The use of Embryonic Stem Cells (ESCs) holds considerable promise both for drug discovery programs and for the treatment of degenerative disorders in regenerative medicine approaches. Nevertheless, the successful use of ESCs is still limited by the lack of efficient control of ESC self-renewal and differentiation capabilities. In this context, the possibility of modulating ESC biological properties and obtaining homogeneous populations of correctly specified cells will help develop physiologically relevant screens designed for the identification of stem cell modulators. Here, we developed a high throughput screening-suitable ESC neural differentiation assay by exploiting the Cell(maker) robotic platform and demonstrated that neural progenies can be generated from ESCs in complete automation, with high standards of accuracy and reliability. Moreover, we performed a pilot screening providing proof of concept that this assay allows the identification of regulators of ESC neural differentiation in full automation.

  20. Identifying Speech Acts in E-Mails: Toward Automated Scoring of the "TOEIC"® E-Mail Task. Research Report. ETS RR-12-16

    ERIC Educational Resources Information Center

    De Felice, Rachele; Deane, Paul

    2012-01-01

    This study proposes an approach to automatically score the "TOEIC"® Writing e-mail task. We focus on one component of the scoring rubric, which notes whether the test-takers have used particular speech acts such as requests, orders, or commitments. We developed a computational model for automated speech act identification and tested it…

  1. Measurement of gamma' precipitates in a nickel-based superalloy using energy-filtered transmission electron microscopy coupled with automated segmenting techniques.

    PubMed

    Tiley, J S; Viswanathan, G B; Shiveley, A; Tschopp, M; Srinivasan, R; Banerjee, R; Fraser, H L

    2010-08-01

    Precipitates of the ordered L12 gamma' phase (dispersed in the face-centered cubic, or FCC, gamma matrix) were imaged in Rene 88 DT, a commercial multicomponent Ni-based superalloy, using energy-filtered transmission electron microscopy (EFTEM). Imaging was performed using the Cr, Co, Ni, Ti and Al elemental L-absorption edges in the energy loss spectrum. Manual and automated segmentation procedures were utilized for identification of precipitate boundaries and measurement of precipitate sizes. The automated region-growing technique for precipitate identification in images was determined to measure precipitate diameters accurately. In addition, the region-growing technique provided a repeatable method for optimizing segmentation techniques for varying EFTEM conditions. (c) 2010 Elsevier Ltd. All rights reserved.
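
    Region growing of the kind mentioned above can be sketched as a flood fill that adds neighbouring pixels while their intensity stays within a tolerance of the growing region's mean. The tolerance, 4-connectivity, and seed handling below are illustrative choices, not the authors' segmentation parameters.

```python
# Sketch: intensity-tolerance region growing from a seed pixel.
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """image: 2-D intensity array; seed: (row, col) inside a precipitate."""
    mask = np.zeros(image.shape, dtype=bool)
    queue, total, n = deque([seed]), float(image[seed]), 1
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - total / n) <= tol):
                mask[ny, nx] = True
                total += float(image[ny, nx])
                n += 1
                queue.append((ny, nx))
    # Equivalent circular diameter in pixels: 2 * sqrt(mask.sum() / pi)
    return mask
```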

  2. Automated extraction of family history information from clinical notes.

    PubMed

    Bill, Robert; Pakhomov, Serguei; Chen, Elizabeth S; Winden, Tamara J; Carter, Elizabeth W; Melton, Genevieve B

    2014-01-01

    Despite increased functionality for obtaining family history in a structured format within electronic health record systems, clinical notes often still contain this information. We developed and evaluated an Unstructured Information Management Application (UIMA)-based natural language processing (NLP) module for automated extraction of family history information with functionality for identifying statements, observations (e.g., disease or procedure), relative or side of family with attributes (i.e., vital status, age of diagnosis, certainty, and negation), and predication ("indicator phrases"), the latter of which was used to establish relationships between observations and family member. The family history NLP system demonstrated F-scores of 66.9, 92.4, 82.9, 57.3, 97.7, and 61.9 for detection of family history statements, family member identification, observation identification, negation identification, vital status, and overall extraction of the predications between family members and observations, respectively. While the system performed well for detection of family history statements and predication constituents, further work is needed to improve extraction of certainty and temporal modifications.

  3. Automated Multi-Lesion Detection for Referable Diabetic Retinopathy in Indigenous Health Care

    PubMed Central

    Pires, Ramon; Carvalho, Tiago; Spurling, Geoffrey; Goldenstein, Siome; Wainer, Jacques; Luckie, Alan; Jelinek, Herbert F.; Rocha, Anderson

    2015-01-01

    Diabetic Retinopathy (DR) is a complication of diabetes mellitus that affects more than one-quarter of the population with diabetes, and can lead to blindness if not discovered in time. An automated screening enables the identification of patients who need further medical attention. This study aimed to classify retinal images of Aboriginal and Torres Strait Islander peoples utilizing an automated computer-based multi-lesion eye screening program for diabetic retinopathy. The multi-lesion classifier was trained on 1,014 images from the São Paulo Eye Hospital and tested on retinal images containing no DR-related lesion, single lesions, or multiple types of lesions from the Inala Aboriginal and Torres Strait Islander health care centre. The automated multi-lesion classifier has the potential to enhance the efficiency of clinical practice delivering diabetic retinopathy screening. Our program does not necessitate image samples for training from any specific ethnic group or population being assessed and is independent of image pre- or post-processing to identify retinal lesions. In this Aboriginal and Torres Strait Islander population, the program achieved 100% sensitivity and 88.9% specificity in identifying bright lesions, while detection of red lesions achieved a sensitivity of 67% and specificity of 95%. When both bright and red lesions were present, 100% sensitivity with 88.9% specificity was obtained. All results obtained with this automated screening program meet WHO standards for diabetic retinopathy screening. PMID:26035836

  4. Automated classification of radiology reports to facilitate retrospective study in radiology.

    PubMed

    Zhou, Yihua; Amundson, Per K; Yu, Fang; Kessler, Marcus M; Benzinger, Tammie L S; Wippold, Franz J

    2014-12-01

    Retrospective research is an important tool in radiology. Identifying imaging examinations appropriate for a given research question from unstructured radiology reports is extremely useful, but labor-intensive. Using the machine learning text-mining methods implemented in LingPipe [1], we evaluated the performance of the dynamic language model (DLM) and the Naïve Bayesian (NB) classifiers in classifying radiology reports to facilitate identification of radiological examinations for research projects. The training dataset consisted of 14,325 sentences from 11,432 radiology reports randomly selected from a database of 5,104,594 reports in all disciplines of radiology. The training sentences were categorized manually into six categories (Positive, Differential, Post Treatment, Negative, Normal, and History). A 10-fold cross-validation [2] was used to evaluate the performance of the models, which were tested in classification of radiology reports for cases of sellar or suprasellar masses and colloid cysts. The average accuracies for the DLM and NB classifiers were 88.5% with a 95% confidence interval (CI) of 1.9% and 85.9% with a 95% CI of 2.0%, respectively. The DLM performed slightly better and was used to classify 1,397 radiology reports containing the keywords "sellar or suprasellar mass" or "colloid cyst". The DLM model produced an accuracy of 88.2% with a 95% CI of 2.1% for 959 reports that contain "sellar or suprasellar mass" and an accuracy of 86.3% with a 95% CI of 2.5% for 437 reports of "colloid cyst". We conclude that automated classification of radiology reports using machine learning techniques can effectively facilitate the identification of cases suitable for retrospective research.
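
    Sentence-level report classification of this kind can be sketched with scikit-learn as a stand-in for the LingPipe classifiers the study actually evaluated. The example sentences and category labels below are placeholders, and the bag-of-words Naive Bayes pipeline is one plausible baseline, not the published model.

```python
# Sketch: bag-of-words Naive Bayes classification of radiology report sentences.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

sentences = ["Suprasellar mass is again identified.",
             "No evidence of colloid cyst.",
             "Findings may represent a Rathke cleft cyst or craniopharyngioma.",
             "Post-treatment changes without residual mass."]
labels = ["Positive", "Negative", "Differential", "Post Treatment"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(sentences, labels)
print(clf.predict(["A new sellar mass is present."]))
```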

  5. Automated Microflow NMR: Routine Analysis of Five-Microliter Samples

    PubMed Central

    Jansma, Ariane; Chuan, Tiffany; Geierstanger, Bernhard H.; Albrecht, Robert W.; Olson, Dean L.; Peck, Timothy L.

    2006-01-01

    A microflow CapNMR probe double-tuned for 1H and 13C was installed on a 400-MHz NMR spectrometer and interfaced to an automated liquid handler. Individual samples dissolved in DMSO-d6 are submitted for NMR analysis in vials containing as little as 10 μL of sample. Sets of samples are submitted in a low-volume 384-well plate. Of the 10 μL of sample per well, as with vials, 5 μL is injected into the microflow NMR probe for analysis. For quality control of chemical libraries, 1D NMR spectra are acquired under full automation from 384-well plates on as many as 130 compounds within 24 h using 128 scans per spectrum and a sample-to-sample cycle time of ∼11 min. Because of the low volume requirements and high mass sensitivity of the microflow NMR system, 30 nmol of a typical small molecule is sufficient to obtain high-quality, well-resolved, 1D proton or 2D COSY NMR spectra in ∼6 or 20 min of data acquisition time per experiment, respectively. Implementation of pulse programs with automated solvent peak identification and suppression allow for reliable data collection, even for samples submitted in fully protonated DMSO. The automated microflow NMR system is controlled and monitored using web-based software. PMID:16194121

  6. Design and operational energy studies in a new high-rise office building. Volume 4. Building automation analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1984-03-01

    The objectives of the analysis are to evaluate the application of a number of building automation system capabilities using the Park Plaza Building as a case study. The study looks at the energy and cost effectiveness of some energy management strategies of the building automation system as well as some energy management strategies that are not currently a part of the building automation system. The strategies are also evaluated in terms of their reliability and usefulness in this building.

  7. Cockpit automation

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1988-01-01

    The aims and methods of aircraft cockpit automation are reviewed from a human-factors perspective. Consideration is given to the mixed pilot reception of increased automation, government concern with the safety and reliability of highly automated aircraft, the formal definition of automation, and the ground-proximity warning system and accidents involving controlled flight into terrain. The factors motivating automation include technology availability; safety; economy, reliability, and maintenance; workload reduction and two-pilot certification; more accurate maneuvering and navigation; display flexibility; economy of cockpit space; and military requirements.

  8. Automated coronal hole identification via multi-thermal intensity segmentation

    NASA Astrophysics Data System (ADS)

    Garton, Tadhg M.; Gallagher, Peter T.; Murray, Sophie A.

    2018-01-01

    Coronal holes (CHs) are regions of open magnetic field that appear as dark areas in the solar corona due to their low density and temperature compared to the surrounding quiet corona. To date, accurate identification and segmentation of CHs has been a difficult task because their intensity is comparable to that of local quiet-Sun regions. Current segmentation methods typically rely on single extreme-ultraviolet (EUV) passband and magnetogram images to extract CH information. Here, the coronal hole identification via multi-thermal emission recognition algorithm (CHIMERA) is described, which analyses multi-thermal images from the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO) to segment coronal hole boundaries by their intensity ratio across three passbands (171 Å, 193 Å, and 211 Å). The algorithm allows accurate extraction of CH boundaries and many of their properties, such as area, position, latitudinal and longitudinal width, and magnetic polarity of segmented CHs. From these properties, a clear linear relationship was identified between the duration of geomagnetic storms and coronal hole areas. CHIMERA can therefore form the basis of more accurate forecasting of the start and duration of geomagnetic storms.
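
    The core idea of multi-thermal segmentation can be sketched as thresholding the ratios of co-registered 171 Å, 193 Å and 211 Å images together with a darkness criterion. The ratio thresholds and the median-based darkness cut below are illustrative placeholders, not CHIMERA's published values.

```python
# Sketch: candidate coronal-hole mask from three co-registered AIA passbands.
import numpy as np

def coronal_hole_mask(im171, im193, im211, dark_frac=0.6, ratio_max=0.8):
    """Inputs are 2-D intensity arrays on the same pixel grid."""
    eps = 1e-6
    dark = im193 < dark_frac * np.nanmedian(im193)     # CHs are dark in 193 A
    low_211 = (im211 / (im193 + eps)) < ratio_max      # relatively dim in 211 A
    low_171 = (im171 / (im193 + eps)) < 1.2            # illustrative ratio bound
    return dark & low_211 & low_171
```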

  9. Automated isotope identification algorithm using artificial neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamuda, Mark; Stinnett, Jacob; Sullivan, Clair

    There is a need to develop an algorithm that can determine the relative activities of radio-isotopes in a large dataset of low-resolution gamma-ray spectra that contains a mixture of many radio-isotopes. Low-resolution gamma-ray spectra that contain mixtures of radio-isotopes often exhibit feature overlap, requiring algorithms that can analyze these features when overlap occurs. While machine learning and pattern recognition algorithms have shown promise for the problem of radio-isotope identification, their ability to identify and quantify mixtures of radio-isotopes has not been studied. Because machine learning algorithms use abstract features of the spectrum, such as the shape of overlapping peaks and the Compton continuum, they are a natural choice for analyzing radio-isotope mixtures. An artificial neural network (ANN) has been trained to calculate the relative activities of 32 radio-isotopes in a spectrum. Furthermore, the ANN is trained with simulated gamma-ray spectra, allowing easy expansion of the library of target radio-isotopes. In this paper we present our initial algorithms based on an ANN and evaluate them against a series of measured and simulated spectra.

  10. Automated isotope identification algorithm using artificial neural networks

    DOE PAGES

    Kamuda, Mark; Stinnett, Jacob; Sullivan, Clair

    2017-04-12

    There is a need to develop an algorithm that can determine the relative activities of radio-isotopes in a large dataset of low-resolution gamma-ray spectra that contains a mixture of many radio-isotopes. Low-resolution gamma-ray spectra that contain mixtures of radio-isotopes often exhibit feature overlap, requiring algorithms that can analyze these features when overlap occurs. While machine learning and pattern recognition algorithms have shown promise for the problem of radio-isotope identification, their ability to identify and quantify mixtures of radio-isotopes has not been studied. Because machine learning algorithms use abstract features of the spectrum, such as the shape of overlapping peaks and the Compton continuum, they are a natural choice for analyzing radio-isotope mixtures. An artificial neural network (ANN) has been trained to calculate the relative activities of 32 radio-isotopes in a spectrum. Furthermore, the ANN is trained with simulated gamma-ray spectra, allowing easy expansion of the library of target radio-isotopes. In this paper we present our initial algorithms based on an ANN and evaluate them against a series of measured and simulated spectra.
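
    A toy version of the approach described above can be sketched with a multilayer perceptron trained on simulated low-resolution spectra to regress the relative activity of each isotope in a mixture. The "isotope library" here is four made-up photopeaks on a smooth continuum, and the network size and training setup are illustrative, not the authors' configuration.

```python
# Sketch: regress relative isotope activities from simulated gamma-ray spectra.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
channels = np.arange(1024)
peaks = [120, 340, 662, 900]                       # toy "isotope" photopeak channels

def simulate(activities):
    spec = 50.0 * np.exp(-channels / 400.0)        # smooth continuum
    for a, p in zip(activities, peaks):
        spec += 300.0 * a * np.exp(-0.5 * ((channels - p) / 8.0) ** 2)
    spec = rng.poisson(spec).astype(float)         # counting statistics
    return spec / spec.sum()                       # normalize total counts

Y = rng.dirichlet(np.ones(len(peaks)), size=2000)  # relative activities sum to 1
X = np.array([simulate(y) for y in Y])
model = MLPRegressor(hidden_layer_sizes=(128,), max_iter=300).fit(X, Y)
print(model.predict(X[:1]).round(2), Y[0].round(2))
```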

  11. Automation study for space station subsystems and mission ground support

    NASA Technical Reports Server (NTRS)

    1985-01-01

    An automation concept for the autonomous operation of space station subsystems, i.e., electric power, thermal control, and communications and tracking are discussed. To assure that functions essential for autonomous operations are not neglected, an operations function (systems monitoring and control) is included in the discussion. It is recommended that automated speech recognition and synthesis be considered a basic mode of man/machine interaction for space station command and control, and that the data management system (DMS) and other systems on the space station be designed to accommodate fully automated fault detection, isolation, and recovery within the system monitoring function of the DMS.

  12. Use of immunochromatographic assay for rapid identification of Mycobacterium tuberculosis complex from liquid culture

    PubMed Central

    Považan, Anika; Vukelić, Anka; Savković, Tijana; Kurucin, Tatjana

    2012-01-01

    A new, simple immunochromatographic assay for rapid identification of Mycobacterium tuberculosis complex in liquid cultures has been developed. The principle of the assay is the binding of a Mycobacterium tuberculosis complex-specific antigen to the monoclonal antibody conjugated on the test strip. The aim of this study was to evaluate the performance of the immunochromatographic assay in identifying Mycobacterium tuberculosis complex in primary positive liquid cultures of the BacT/Alert automated system. A total of 159 primary positive liquid cultures were tested using the immunochromatographic assay (BD MGIT TBc ID) and the conventional subculture, followed by identification using biochemical tests. Of the 159 positive liquid cultures, using the conventional method, Mycobacterium tuberculosis was identified in 119 (74.8%), nontuberculous mycobacteria were found in 4 (2.5%), 14 (8.8%) cultures were contaminated and 22 (13.8%) cultures were found to be negative. Using the immunochromatographic assay, Mycobacterium tuberculosis complex was detected in 118 (74.2%) liquid cultures, and 41 (25.8%) tests were negative. The sensitivity, specificity, positive and negative predictive values of the test were 98.3%, 97.5%, 99.15% and 95.12%, respectively. The value of the kappa test was 0.950, and that of the McNemar test was 1.00. The immunochromatographic assay is a simple and rapid test which represents a suitable alternative to the conventional subculture method for the primary identification of Mycobacterium tuberculosis complex in liquid cultures of the BacT/Alert automated system. PMID:22364301

  13. Automated Processing of Imaging Data through Multi-tiered Classification of Biological Structures Illustrated Using Caenorhabditis elegans.

    PubMed

    Zhan, Mei; Crane, Matthew M; Entchev, Eugeni V; Caballero, Antonio; Fernandes de Abreu, Diana Andrea; Ch'ng, QueeLim; Lu, Hang

    2015-04-01

    Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable for many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues. Using these examples as a guide, we envision

  14. An automated approach for extracting Barrier Island morphology from digital elevation models

    NASA Astrophysics Data System (ADS)

    Wernette, Phillipe; Houser, Chris; Bishop, Michael P.

    2016-06-01

    The response and recovery of a barrier island to extreme storms depend on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest and dune heel), and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach to extracting the dune toe, dune crest, and dune heel is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and features are identified from the calculated RR values. The RR approach outperformed contemporary approaches and represents a fast, objective means to define important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest, or heel be spatially continuous, which is important because dune morphology is likely to be naturally variable alongshore.
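
    As a rough illustration of the relative relief computation described above, the sketch below assumes RR at each grid cell is the cell elevation rescaled between the local minimum and maximum within a moving window, averaged over several window sizes; the exact definition and window choices in the paper may differ.

        # Multi-scale relative relief from a gridded DEM (illustrative definition).
        import numpy as np
        from scipy import ndimage

        def relative_relief(dem, window):
            zmax = ndimage.maximum_filter(dem, size=window)
            zmin = ndimage.minimum_filter(dem, size=window)
            span = zmax - zmin
            return np.where(span > 0, (dem - zmin) / span, 0.0)

        def multiscale_rr(dem, windows=(3, 5, 9, 15)):
            return np.mean([relative_relief(dem, w) for w in windows], axis=0)

        dem = np.cumsum(np.random.rand(50, 80), axis=0)   # toy elevation surface
        rr = multiscale_rr(dem)
        print(rr.shape, float(rr.min()), float(rr.max()))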

  15. Automated clinical trial eligibility prescreening: increasing the efficiency of patient identification for clinical trials in the emergency department

    PubMed Central

    Ni, Yizhao; Kennebeck, Stephanie; Dexheimer, Judith W; McAneney, Constance M; Tang, Huaxiu; Lingren, Todd; Li, Qi; Zhai, Haijun; Solti, Imre

    2015-01-01

    Objectives (1) To develop an automated eligibility screening (ES) approach for clinical trials in an urban tertiary care pediatric emergency department (ED); (2) to assess the effectiveness of natural language processing (NLP), information extraction (IE), and machine learning (ML) techniques on real-world clinical data and trials. Data and methods We collected eligibility criteria for 13 randomly selected, disease-specific clinical trials actively enrolling patients between January 1, 2010 and August 31, 2012. In parallel, we retrospectively selected data fields including demographics, laboratory data, and clinical notes from the electronic health record (EHR) to represent profiles of all 202,795 patients visiting the ED during the same period. Leveraging NLP, IE, and ML technologies, the automated ES algorithms identified patients whose profiles matched the trial criteria to reduce the pool of candidates for staff screening. The performance was validated on both a physician-generated gold standard of trial–patient matches and a reference standard of historical trial–patient enrollment decisions, where workload, mean average precision (MAP), and recall were assessed. Results Compared with the case without automation, the workload with automated ES was reduced by 92% on the gold standard set, with a MAP of 62.9%. The automated ES achieved a 450% increase in trial screening efficiency. The findings on the gold standard set were confirmed by large-scale evaluation on the reference set of trial–patient matches. Discussion and conclusion By exploiting the text of trial criteria and the content of EHRs, we demonstrated that NLP-, IE-, and ML-based automated ES could successfully identify patients for clinical trials. PMID:25030032
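
    The workload and mean average precision figures quoted above can be illustrated with a small worked example; the ranked candidate lists, gold-standard matches and pool sizes below are invented for demonstration and are not the study's data.

        # Toy computation of mean average precision (MAP) and workload reduction
        # for ranked eligibility-screening output (all numbers are made up).
        def average_precision(ranked_ids, relevant_ids):
            hits, score = 0, 0.0
            for rank, pid in enumerate(ranked_ids, start=1):
                if pid in relevant_ids:
                    hits += 1
                    score += hits / rank
            return score / max(len(relevant_ids), 1)

        trials = {
            "trial_A": (["p3", "p7", "p1", "p9"], {"p3", "p1"}),
            "trial_B": (["p2", "p5", "p8"], {"p5"}),
        }
        map_score = sum(average_precision(r, rel) for r, rel in trials.values()) / len(trials)

        manual_pool = 500                                   # hypothetical unscreened pool
        automated_pool = sum(len(r) for r, _ in trials.values())
        print(f"MAP = {map_score:.3f}")
        print(f"workload reduction = {1 - automated_pool / manual_pool:.1%}")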

  16. Charge Identification of Highly Ionizing Particles in Desensitized Nuclear Emulsion Using High Speed Read-Out System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toshito, T.; Kodama, K.; Yusa, K.

    2006-05-10

    We performed an experimental study of charge identification of heavy ions from helium to carbon with energies of about 290 MeV/u using an emulsion chamber. The emulsion was desensitized by means of forced fading (refreshing) to expand the dynamic range of its response to highly charged particles. For track reconstruction and charge identification, the fully automated high-speed emulsion read-out system, which was originally developed for identifying minimum ionizing particles, was used without any modification. Clear track-by-track charge identification up to Z=6 was demonstrated. The refreshing technique has proved to be a powerful means of expanding the response of emulsion film to highly ionizing particles.

  17. Biometric correspondence between reface computerized facial approximations and CT-derived ground truth skin surface models objectively examined using an automated facial recognition system.

    PubMed

    Parks, Connie L; Monson, Keith L

    2018-05-01

    This study employed an automated facial recognition system as a means of objectively evaluating biometric correspondence between a ReFace facial approximation and the computed tomography (CT) derived ground truth skin surface of the same individual. High rates of biometric correspondence were observed, irrespective of rank class (Rk) or demographic cohort examined. Overall, 48% of the test subjects' ReFace approximation probes (n=96) were matched to their corresponding ground truth skin surface image at R1, a rank indicating a high degree of biometric correspondence and a potential positive identification. Identification rates improved with each successively broader rank class (R10 = 85%, R25 = 96%, and R50 = 99%), with 100% identification by R57. A sharp increase (39% mean increase) in identification rates was observed between R1 and R10 across most rank classes and demographic cohorts. In contrast, significantly lower (p<0.01) increases in identification rates were observed between R10 and R25 (8% mean increase) and between R25 and R50 (3% mean increase). No significant (p>0.05) performance differences were observed across demographic cohorts or CT scan protocols. Performance measures observed in this research suggest that ReFace approximations are biometrically similar to the actual faces of the approximated individuals and, therefore, may have potential operational utility in contexts in which computerized approximations are utilized as probes in automated facial recognition systems. Copyright © 2018. Published by Elsevier B.V.
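
    The rank-class identification rates reported above are cumulative match scores; the sketch below shows how such rank-k rates can be computed from a probe-versus-gallery similarity matrix. The similarity matrix here is synthetic, not the ReFace data.

        # Rank-k identification rates from a synthetic similarity matrix.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 96                                              # number of probe/gallery pairs
        similarity = rng.random((n, n))                     # probe x gallery scores
        similarity[np.arange(n), np.arange(n)] += 0.5       # bias toward the true mate

        order = np.argsort(-similarity, axis=1)             # best match first
        true_rank = np.array([int(np.where(order[i] == i)[0][0]) + 1 for i in range(n)])

        for k in (1, 10, 25, 50):
            print(f"rank-{k} identification rate: {(true_rank <= k).mean():.1%}")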

  18. Improvement and automation of a real-time PCR assay for vaginal fluids.

    PubMed

    De Vittori, E; Giampaoli, S; Barni, F; Baldi, M; Berti, A; Ripani, L; Romano Spica, V

    2016-05-01

    The identification of vaginal fluids is crucial in forensic science. Several molecular protocols based on PCR amplification of mfDNA (microflora DNA) specific for vaginal bacteria are now available. Unfortunately, mfDNA extraction and PCR reactions require manual optimization of several steps. The aim of the present study was to verify the partial automation of vaginal fluid identification using two instruments widely available in forensic laboratories: the EZ1 Advanced robot and the Rotor-Gene Q 5Plex HRM. Moreover, taking advantage of the 5-plex thermocycler technology, the ForFluid kit performance was improved by expanding the mfDNA characterization panel with a new bacterial target for vaginal fluids and with an internal positive control (IPC) to monitor PCR inhibition. The results underlined the feasibility of semi-automated mfDNA extraction using a BioRobot and demonstrated the analytical improvements of the kit. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Completely automated modal analysis procedure based on the combination of different OMA methods

    NASA Astrophysics Data System (ADS)

    Ripamonti, Francesco; Bussini, Alberto; Resta, Ferruccio

    2018-03-01

    In this work a completely automated output-only modal analysis procedure is presented and its benefits are outlined. By merging different Operational Modal Analysis methods with a statistical approach, the identification process has been made more robust, returning only the true natural frequencies, damping ratios and mode shapes of the system. The effect of temperature can be taken into account as well, leading to a better tool for automated Structural Health Monitoring. The algorithm has been developed and tested on a numerical model of a scaled three-story steel building housed in the laboratories of Politecnico di Milano.

  20. Developing the Interstate Identification Index/Federal Bureau of Investigation (III/FBI) system for providing timely criminal and civil identification and criminal history information to the nation's law enforcement agencies

    NASA Astrophysics Data System (ADS)

    Copeland, Patricia L.; Shugars, James

    1997-02-01

    The Federal Bureau of Investigation (FBI) is currently developing a new system to provide timely criminal and civil identification and criminal history information to the nation's local, state, and federal users. The Integrated Automated Fingerprint Identification System (IAFIS), an upgrade to the existing Identification Division Automated Services (IDAS) System, is scheduled for implementation in 1999 at the new FBI facility in Clarksburg, West Virginia. IAFIS will offer new capabilities for electronic transmittal of fingerprint cards to the FBI, an improved fingerprint matching algorithm, and electronic maintenance of fingerprints and photo images. The Interstate Identification Index (III/FBI) System is one of three segments comprising the umbrella IAFIS System. III/FBI provides repository, maintenance, and dissemination capabilities for the 40-million-subject national criminal history database. III/FBI will perform over 1 million name searches each day. Demanding performance, reliability/maintainability/availability, and flexibility/expandability requirements make III/FBI an architectural challenge to the system developers. This paper discusses these driving requirements and presents the technical solutions in terms of leading-edge hardware and software.

  1. Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision.

    PubMed

    Heinrich, Andreas; Güttler, Felix; Wendt, Sebastian; Schenkl, Sebastian; Hubig, Michael; Wagner, Rebecca; Mall, Gita; Teichgräber, Ulf

    2018-06-18

    In forensic odontology, the comparison between antemortem and postmortem panoramic radiographs (PRs) is a reliable method for person identification. The purpose of this study was to improve and automate the identification of unknown persons by comparison of antemortem and postmortem PRs using computer vision. The study includes 43,467 PRs from 24,545 patients (46% female, 54% male). All PRs were filtered and evaluated with Matlab R2014b, including the Image Processing and Computer Vision System toolboxes. The matching process used SURF features to find the corresponding points between two PRs (unknown person and database entry) across the whole database. Of 40 randomly selected persons, 34 (85%) could be reliably identified by corresponding PR matching points between an already existing scan in the database and the most recent PR. The systematic matching yielded a maximum of 259 points for a successful identification between two different PRs of the same person and a maximum of 12 corresponding matching points for non-identical persons in the database. Hence, 12 matching points serve as the threshold for reliable assignment. Operating with an automatic PR system and computer vision could be a successful and reliable tool for identification purposes. The applied method distinguishes itself by virtue of its fast and reliable identification of persons by PR. This identification method is suitable even if dental characteristics were removed or added in the past. The system seems to be robust for large amounts of data. · Computer vision allows an automated antemortem and postmortem comparison of panoramic radiographs (PRs) for person identification. · The present method is able to find identical matching partners among huge datasets (big data) in a short computing time. · The identification method is suitable even if dental characteristics were removed or added. · Heinrich A, Güttler F, Wendt S et al. Forensic Odontology
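
    A rough Python analogue of the matching-point idea is sketched below with OpenCV. The study used SURF features in Matlab; SURF in OpenCV requires the contrib build, so ORB is used here as a freely available stand-in, the file names are placeholders, and the distance cut-off is arbitrary while the 12-point threshold is taken from the description above.

        # Count feature correspondences between two panoramic radiographs (sketch).
        import cv2

        img_query = cv2.imread("unknown_person_pr.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
        img_db = cv2.imread("database_pr.png", cv2.IMREAD_GRAYSCALE)           # placeholder path

        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(img_query, None)
        kp2, des2 = orb.detectAndCompute(img_db, None)

        # Cross-checked brute-force matching; count the surviving correspondences.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        good = [m for m in matches if m.distance < 40]      # arbitrary distance cut-off

        THRESHOLD = 12                                      # empirical threshold from the study
        print(len(good), "matching points -> same person?", len(good) > THRESHOLD)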

  2. Manual versus automated coding of free-text self-reported medication data in the 45 and Up Study: a validation study.

    PubMed

    Gnjidic, Danijela; Pearson, Sallie-Anne; Hilmer, Sarah N; Basilakis, Jim; Schaffer, Andrea L; Blyth, Fiona M; Banks, Emily

    2015-03-30

    Increasingly, automated methods are being used to code free-text medication data, but evidence on the validity of these methods is limited. To examine the accuracy of automated coding of previously keyed in free-text medication data compared with manual coding of original handwritten free-text responses (the 'gold standard'). A random sample of 500 participants (475 with and 25 without medication data in the free-text box) enrolled in the 45 and Up Study was selected. Manual coding involved medication experts keying in free-text responses and coding using Anatomical Therapeutic Chemical (ATC) codes (i.e. chemical substance 7-digit level; chemical subgroup 5-digit; pharmacological subgroup 4-digit; therapeutic subgroup 3-digit). Using keyed-in free-text responses entered by non-experts, the automated approach coded entries using the Australian Medicines Terminology database and assigned corresponding ATC codes. Based on manual coding, 1377 free-text entries were recorded and, of these, 1282 medications were coded to ATCs manually. The sensitivity of automated coding compared with manual coding was 79% (n = 1014) for entries coded at the exact ATC level, and 81.6% (n = 1046), 83.0% (n = 1064) and 83.8% (n = 1074) at the 5, 4 and 3-digit ATC levels, respectively. The sensitivity of automated coding for blank responses was 100% compared with manual coding. Sensitivity of automated coding was highest for prescription medications and lowest for vitamins and supplements, compared with the manual approach. Positive predictive values for automated coding were above 95% for 34 of the 38 individual prescription medications examined. Automated coding for free-text prescription medication data shows very high to excellent sensitivity and positive predictive values, indicating that automated methods can potentially be useful for large-scale, medication-related research.

  3. Study of Automated Module Fabrication for Lightweight Solar Blanket Utilization

    NASA Technical Reports Server (NTRS)

    Gibson, C. E.

    1979-01-01

    Cost-effective automated techniques for accomplishing the titled purpose, based on existing in-house capability, are described. As a measure of the considered automation, the production of a 50 kilowatt solar array blanket, exclusive of support and deployment structure, within an eight-month fabrication period was used. Solar cells considered for this blanket were 2 x 4 x .02 cm wrap-around cells and 2 x 2 x .005 cm and 3 x 3 x .005 cm standard bar-contact thin cells, all with welded contacts. Existing fabrication processes are described, the rationale for each process is discussed, and the potential for further automation is assessed.

  4. Pilots' monitoring strategies and performance on automated flight decks: an empirical study combining behavioral and eye-tracking data.

    PubMed

    Sarter, Nadine B; Mumaw, Randall J; Wickens, Christopher D

    2007-06-01

    The objective of the study was to examine pilots' automation monitoring strategies and performance on highly automated commercial flight decks. A considerable body of research and operational experience has documented breakdowns in pilot-automation coordination on modern flight decks. These breakdowns are often considered symptoms of monitoring failures even though, to date, only limited and mostly anecdotal data exist concerning pilots' monitoring strategies and performance. Twenty experienced B-747-400 airline pilots flew a 1-hr scenario involving challenging automation-related events on a full-mission simulator. Behavioral, mental model, and eye-tracking data were collected. The findings from this study confirm that pilots monitor basic flight parameters to a much greater extent than visual indications of the automation configuration. More specifically, they frequently fail to verify manual mode selections or notice automatic mode changes. In other cases, they do not process mode annunciations in sufficient depth to understand their implications for aircraft behavior. Low system observability and gaps in pilots' understanding of complex automation modes were shown to contribute to these problems. Our findings describe and explain shortcomings in pilots' automation monitoring strategies and performance based on converging behavioral, eye-tracking, and mental model data. They confirm that monitoring failures are one major contributor to breakdowns in pilot-automation interaction. The findings from this research can inform the design of improved training programs and automation interfaces that support more effective system monitoring.

  5. Current status and future prospects of an automated sample exchange system PAM for protein crystallography

    NASA Astrophysics Data System (ADS)

    Hiraki, M.; Yamada, Y.; Chavas, L. M. G.; Matsugaki, N.; Igarashi, N.; Wakatsuki, S.

    2013-03-01

    To achieve fully automated and/or remote data collection in high-throughput X-ray experiments, the Structural Biology Research Centre at the Photon Factory (PF) has installed the PF automated mounting system (PAM) sample exchange robots at the PF macromolecular crystallography beamlines BL-1A, BL-5A, BL-17A, AR-NW12A and AR-NE3A. We are upgrading the experimental systems, including the PAM, for stable and efficient operation. To prevent human error in automated data collection, we installed a two-dimensional barcode reader for identification of the cassettes and sample pins. Because no liquid nitrogen pipeline is installed in the PF experimental hutch, users commonly add liquid nitrogen using a small Dewar. To address this issue, an automated liquid nitrogen filling system that links a 100-liter tank to the robot Dewar has been installed on the PF macromolecular beamline. Here we describe this new implementation, as well as future prospects.

  6. TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics

    PubMed Central

    Röst, Hannes L.; Liu, Yansheng; D’Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C.; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi

    2016-01-01

    Large scale, quantitative proteomic studies have become essential for the analysis of clinical cohorts, large perturbation experiments and systems biology studies. While next-generation mass spectrometric techniques such as SWATH-MS have substantially increased throughput and reproducibility, ensuring consistent quantification of thousands of peptide analytes across multiple LC-MS/MS runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we have developed the TRIC software which utilizes fragment ion data to perform cross-run alignment, consistent peak-picking and quantification for high throughput targeted proteomics. TRIC uses a graph-based alignment strategy based on non-linear retention time correction to integrate peak elution information from all LC-MS/MS runs acquired in a study. When compared to state-of-the-art SWATH-MS data analysis, the algorithm was able to reduce the identification error by more than 3-fold at constant recall, while correcting for highly non-linear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem (iPS) cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups and substantially increased the quantitative completeness and biological information in the data, providing insights into protein dynamics of iPS cells. Overall, this study demonstrates the importance of consistent quantification in highly challenging experimental setups, and proposes an algorithm to automate this task, constituting the last missing piece in a pipeline for automated analysis of massively parallel targeted proteomics datasets. PMID:27479329

  7. Systems Study of an Automated Fire Weather Data System

    NASA Technical Reports Server (NTRS)

    Nishioka, K.

    1974-01-01

    A sensor system applicable to an automated weather station was developed. The sensor provides automated fire weather data which correlates with manual readings. The equipment and methods are applied as an aid to the surveillance and protection of wildlands from fire damage. The continuous readings provided by the sensor system make it possible to determine the periods of time that the wilderness areas should be closed to the public to minimize the possibilities of fire.

  8. Automated detection of retinal layers from OCT spectral domain images of healthy eyes

    NASA Astrophysics Data System (ADS)

    Giovinco, Gaspare; Savastano, Maria Cristina; Ventre, Salvatore; Tamburrino, Antonello

    2015-06-01

    Optical coherence tomography (OCT) has become one of the most relevant diagnostic tools for retinal diseases. Besides being a non-invasive technique, one distinguished feature is its unique capability of providing (in vivo) cross-sectional view of the retina. Specifically, OCT images show the retinal layers. From the clinical point of view, the identification of the retinal layers opens new perspectives to study the correlation between morphological and functional aspects of the retinal tissue. The main contribution of this paper is a new method/algorithm for the automated segmentation of cross-sectional images of the retina of healthy eyes, obtained by means of spectral domain optical coherence tomography (SD-OCT). Specifically, the proposed segmentation algorithm provides the automated detection of different retinal layers. Tests on experimental SD-OCT scans performed by three different instruments/manufacturers have been successfully carried out and compared to a manual segmentation made by an independent ophthalmologist, showing the generality and the effectiveness of the proposed method.

  9. Automated detection of retinal layers from OCT spectral-domain images of healthy eyes

    NASA Astrophysics Data System (ADS)

    Giovinco, Gaspare; Savastano, Maria Cristina; Ventre, Salvatore; Tamburrino, Antonello

    2015-12-01

    Optical coherence tomography (OCT) has become one of the most relevant diagnostic tools for retinal diseases. Besides being a non-invasive technique, one distinguished feature is its unique capability of providing (in vivo) cross-sectional view of the retina. Specifically, OCT images show the retinal layers. From the clinical point of view, the identification of the retinal layers opens new perspectives to study the correlation between morphological and functional aspects of the retinal tissue. The main contribution of this paper is a new method/algorithm for the automated segmentation of cross-sectional images of the retina of healthy eyes, obtained by means of spectral-domain optical coherence tomography (SD-OCT). Specifically, the proposed segmentation algorithm provides the automated detection of different retinal layers. Tests on experimental SD-OCT scans performed by three different instruments/manufacturers have been successfully carried out and compared to a manual segmentation made by an independent ophthalmologist, showing the generality and the effectiveness of the proposed method.

  10. Endoscope reprocessing methods: a prospective study on the impact of human factors and automation.

    PubMed

    Ofstead, Cori L; Wetzler, Harry P; Snyder, Alycea K; Horton, Rebecca A

    2010-01-01

    The main cause of endoscopy-associated infections is failure to adhere to reprocessing guidelines. More information about factors impacting compliance is needed to support the development of effective interventions. The purpose of this multisite, observational study was to evaluate reprocessing practices, employee perceptions, and occupational health issues. Data were collected utilizing interviews, surveys, and direct observation. Written reprocessing policies and procedures were in place at all five sites, and employees affirmed the importance of most recommended steps. Nevertheless, observers documented guideline adherence for only 1.4% of endoscopes reprocessed using manual cleaning methods with automated high-level disinfection, versus 75.4% of those reprocessed using an automated endoscope cleaner and reprocessor. The majority of employees reported health problems (i.e., pain, decreased flexibility, numbness, or tingling). Physical discomfort was associated with time spent reprocessing (p = .041). Discomfort diminished after installation of automated endoscope cleaners and reprocessors (p = .001). Enhanced training and accountability, combined with increased automation, may ensure guideline adherence and patient safety while improving employee satisfaction and health.

  11. OpenKnowledge for peer-to-peer experimentation in protein identification by MS/MS

    PubMed Central

    2011-01-01

    Background Traditional scientific workflow platforms usually run individual experiments with little evaluation and analysis of performance, as required by automated experimentation in which scientists are allowed to access numerous applicable workflows rather than being committed to a single one. Experimental protocols and data in a peer-to-peer environment could potentially be shared freely without any single point of authority dictating how experiments should be run. In such an environment it is necessary to have mechanisms by which each individual scientist (peer) can assess, locally, how he or she wants to be involved with others in experiments. This study aims to implement and demonstrate simple peer ranking under the OpenKnowledge peer-to-peer infrastructure using both simulated and real-world bioinformatics experiments involving multi-agent interactions. Methods A simulated experiment environment with a peer ranking capability was specified in the Lightweight Coordination Calculus (LCC) and automatically executed under the OpenKnowledge infrastructure. Peers such as MS/MS protein identification services (including web-enabled and independent programs) were made accessible as OpenKnowledge Components (OKCs) for automated execution as peers in the experiments. The performance of the peers in these automated experiments was monitored and evaluated by simple peer ranking algorithms. Results Peer ranking experiments with simulated peers exhibited characteristic behaviours, e.g., a power-law effect (a few dominant peers dominate), similar to that observed in the traditional Web. Real-world experiments were run using an interaction model in LCC involving two different types of MS/MS protein identification peers, viz., peptide fragment fingerprinting (PFF) and de novo sequencing, with another peer ranking algorithm based simply on counting successful and failed runs. This study demonstrated a novel integration and useful evaluation of specific proteomic

  12. High throughput light absorber discovery, Part 1: An algorithm for automated tauc analysis

    DOE PAGES

    Suram, Santosh K.; Newhouse, Paul F.; Gregoire, John M.

    2016-09-23

    High-throughput experimentation provides efficient mapping of composition-property relationships, and its implementation for the discovery of optical materials enables advancements in solar energy and other technologies. In a high throughput pipeline, automated data processing algorithms are often required to match experimental throughput, and we present an automated Tauc analysis algorithm for estimating band gap energies from optical spectroscopy data. The algorithm mimics the judgment of an expert scientist, which is demonstrated through its application to a variety of high throughput spectroscopy data, including the identification of indirect or direct band gaps in Fe2O3, Cu2V2O7, and BiVO4. Here, the applicability of the algorithm to estimate a range of band gap energies for various materials is demonstrated by a comparison of direct-allowed band gaps estimated by expert scientists and by the automated algorithm for 60 optical spectra.
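
    A stripped-down version of a Tauc extrapolation for a direct-allowed transition is sketched below: (alpha*h*nu)^2 is fit linearly over a chosen energy window and extrapolated to zero to estimate the band gap. The spectrum is synthetic and the fitting window is picked by hand here, whereas the paper's algorithm selects it automatically.

        # Tauc extrapolation for a direct-allowed band gap (synthetic spectrum).
        import numpy as np

        E = np.linspace(1.5, 3.5, 400)                      # photon energy, eV
        Eg_true = 2.4
        alpha = np.where(E > Eg_true, np.sqrt(np.clip(E - Eg_true, 0, None)) / E, 0.0)
        alpha = alpha + 0.01 * np.random.rand(E.size)       # add a little noise

        tauc = (alpha * E) ** 2                             # (alpha * h*nu)^2
        window = (E > Eg_true + 0.1) & (E < Eg_true + 0.5)  # hand-picked linear region
        slope, intercept = np.polyfit(E[window], tauc[window], 1)
        Eg_est = -intercept / slope                         # x-intercept of the fit
        print(f"estimated band gap: {Eg_est:.2f} eV")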

  13. NMRNet: A deep learning approach to automated peak picking of protein NMR spectra.

    PubMed

    Klukowski, Piotr; Augoff, Michal; Zieba, Maciej; Drwal, Maciej; Gonczarek, Adam; Walczak, Michal J

    2018-03-14

    Automated selection of signals in protein NMR spectra, known as peak picking, has been studied for over 20 years; nevertheless, existing peak picking methods are still largely deficient. Accurate and precise automated peak picking would accelerate structure calculation and the analysis of dynamics and interactions of macromolecules. Recent advancements in handling big data, together with an outburst of machine learning techniques, offer an opportunity to tackle the peak picking problem substantially faster than manual picking and on par with human accuracy. In particular, deep learning has proven to systematically achieve human-level performance in various recognition tasks, and thus emerges as an ideal tool to address automated identification of NMR signals. We have applied a convolutional neural network for visual analysis of multidimensional NMR spectra. A comprehensive test on 31 manually annotated spectra demonstrated top-tier average precision (AP) of 0.9596, 0.9058 and 0.8271 for backbone, side-chain and NOESY spectra, respectively. Furthermore, a combination of the extracted peak lists with the automated assignment routine FLYA outperformed other methods, including the manual one, and led to correct resonance assignment at levels of 90.40%, 89.90% and 90.20% for three benchmark proteins. The proposed model is part of the Dumpling software (a platform for protein NMR data analysis) and is available at https://dumpling.bio/. Contact: michaljerzywalczak@gmail.com, piotr.klukowski@pwr.edu.pl. Supplementary data are available at Bioinformatics online.
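
    As a schematic of the deep learning component, the snippet below defines a small convolutional network that scores fixed-size spectrum patches as peak versus non-peak. It is a generic PyTorch sketch; the actual NMRNet architecture, input representation and training procedure are not reproduced here.

        # Small CNN that classifies 32x32 spectrum patches as peak / not peak (sketch).
        import torch
        import torch.nn as nn

        class PeakClassifier(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, 2))

            def forward(self, x):          # x: (batch, 1, 32, 32) spectrum patches
                return self.head(self.features(x))

        model = PeakClassifier()
        patches = torch.randn(4, 1, 32, 32)        # toy windows around candidate signals
        print(model(patches).shape)                # (4, 2) class scores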

  14. Towards the automated identification of Chrysomya blow flies from wing images.

    PubMed

    Macleod, N; Hall, M J R; Wardhana, A H

    2018-04-15

    The Old World screwworm fly (OWSF), Chrysomya bezziana (Diptera: Calliphoridae), is an important agent of traumatic myiasis and, as such, a major human and animal health problem. In the implementation of OWSF control operations, it is important to determine the geographical origins of such disease-causing species in order to establish whether they derive from endemic or invading populations. Gross morphological and molecular studies have demonstrated the existence of two distinct lineages of this species, one African and the other Asian. Wing morphometry is known to be of substantial assistance in identifying the geographical origin of individuals because it provides diagnostic markers that complement molecular diagnostics. However, placement of the landmarks used in traditional geometric morphometric analysis can be time-consuming and subject to error caused by operator subjectivity. Here we report results of an image-based approach to geometric morphometric analysis for delivering wing-based identifications. Our results indicate that this approach can produce identifications that are practically indistinguishable from more traditional landmark-based results. In addition, we demonstrate that the direct analysis of digital wing images can be used to discriminate between three Chrysomya species of veterinary and forensic importance and between C. bezziana genders. © 2018 The Trustees of the Natural History Museum, London. Medical and Veterinary Entomology © 2018 Royal Entomological Society.

  15. On automating domain connectivity for overset grids

    NASA Technical Reports Server (NTRS)

    Chiu, Ing-Tsau

    1994-01-01

    An alternative method for domain connectivity among systems of overset grids is presented. Reference uniform Cartesian systems of points are used to achieve highly efficient domain connectivity, and form the basis for a future fully automated system. The Cartesian systems are used to approximate body surfaces and to map the computational space of component grids. By exploiting the characteristics of Cartesian systems, Chimera-type hole-cutting and identification of donor elements for intergrid boundary points can be carried out very efficiently. The method is tested for a range of geometrically complex multiple-body overset grid systems.

  16. Autonomy and Automation

    NASA Technical Reports Server (NTRS)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoids the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  17. Automated Identification of Coronal Holes from Synoptic EUV Maps

    NASA Astrophysics Data System (ADS)

    Hamada, Amr; Asikainen, Timo; Virtanen, Ilpo; Mursula, Kalevi

    2018-04-01

    Coronal holes (CHs) are regions of open magnetic field lines in the solar corona and the source of the fast solar wind. Understanding the evolution of coronal holes is critical for solar magnetism as well as for accurate space weather forecasts. We study the extreme ultraviolet (EUV) synoptic maps at three wavelengths (195 Å/193 Å, 171 Å and 304 Å) measured by the Solar and Heliospheric Observatory/Extreme Ultraviolet Imaging Telescope (SOHO/EIT) and the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) instruments. The two datasets are first homogenized by scaling the SDO/AIA data to the SOHO/EIT level by means of histogram equalization. We then develop a novel automated method to identify CHs from these homogenized maps by determining the intensity threshold of CH regions separately for each synoptic map. This is done by identifying the best location and size of an image segment, which optimally contains portions of coronal holes and the surrounding quiet Sun allowing us to detect the momentary intensity threshold. Our method is thus able to adjust itself to the changing scale size of coronal holes and to temporally varying intensities. To make full use of the information in the three wavelengths we construct a composite CH distribution, which is more robust than distributions based on one wavelength. Using the composite CH dataset we discuss the temporal evolution of CHs during the Solar Cycles 23 and 24.
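
    The two processing steps described above can be sketched as follows: the AIA map is brought onto the EIT intensity scale by histogram matching, and coronal holes are then segmented with a per-map intensity threshold. The maps are synthetic stand-ins, and a simple percentile threshold replaces the segment-optimization step of the paper.

        # Homogenize two synoptic maps, then threshold dark (coronal hole) pixels.
        import numpy as np
        from skimage.exposure import match_histograms

        rng = np.random.default_rng(2)
        eit_map = rng.gamma(shape=2.0, scale=50.0, size=(180, 360))    # reference synoptic map
        aia_map = rng.gamma(shape=2.0, scale=80.0, size=(180, 360))    # map to be rescaled

        aia_on_eit_scale = match_histograms(aia_map, eit_map)

        def coronal_hole_mask(synoptic_map, percentile=20):
            """Flag dark pixels below a per-map intensity threshold."""
            threshold = np.percentile(synoptic_map, percentile)
            return synoptic_map < threshold

        mask = coronal_hole_mask(aia_on_eit_scale)
        print("CH area fraction:", float(mask.mean()))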

  18. Pilot interaction with automated airborne decision making systems

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.; Hammer, J. M.; Mitchell, C. M.; Morris, N. M.; Lewis, C. M.; Yoon, W. C.

    1985-01-01

    Progress was made in the following three areas. In the rule-based modeling area, two papers related to identification and significance testing of rule-based models were presented. In the area of operator aiding, research focused on aiding operators in novel failure situations; a discrete control modeling approach to aiding PLANT operators was developed; and a set of guidelines was developed for implementing automation. In the area of flight simulator hardware and software, the hardware will be completed within two months and the initial simulation software will then be integrated and tested.

  19. Automated Discovery and Modeling of Sequential Patterns Preceding Events of Interest

    NASA Technical Reports Server (NTRS)

    Rohloff, Kurt

    2010-01-01

    The integration of emerging data manipulation technologies has enabled a paradigm shift in practitioners' abilities to understand and anticipate events of interest in complex systems. Example events of interest include outbreaks of socio-political violence in nation-states. Rather than relying on human-centric modeling efforts that are limited by the availability of subject-matter experts, automated data processing technologies have enabled the development of innovative automated complex system modeling and predictive analysis technologies. We introduce one such emerging modeling technology: the sequential pattern methodology. We have applied the sequential pattern methodology to automatically identify patterns of observed behavior that precede outbreaks of socio-political violence such as riots, rebellions and coups in nation-states. The sequential pattern methodology is a groundbreaking approach to automated complex system model discovery because it generates easily interpretable patterns based on direct observations of sampled factor data for a deeper understanding of societal behaviors, and it is tolerant of observation noise and missing data. The discovered patterns are simple to interpret and mimic human identification of observed trends in temporal data. Discovered patterns also provide an automated forecasting ability: we discuss an example of using discovered patterns coupled with a rich data environment to forecast various types of socio-political violence in nation-states.

  20. Radio Galaxy Zoo: Machine learning for radio source host galaxy cross-identification

    NASA Astrophysics Data System (ADS)

    Alger, M. J.; Banfield, J. K.; Ong, C. S.; Rudnick, L.; Wong, O. I.; Wolf, C.; Andernach, H.; Norris, R. P.; Shabala, S. S.

    2018-05-01

    We consider the problem of determining the host galaxies of radio sources by cross-identification. This has traditionally been done manually, which will be intractable for wide-area radio surveys like the Evolutionary Map of the Universe (EMU). Automated cross-identification will be critical for these future surveys, and machine learning may provide the tools to develop such methods. We apply a standard approach from computer vision to cross-identification, introducing one possible way of automating this problem, and explore the pros and cons of this approach. We apply our method to the 1.4 GHz Australian Telescope Large Area Survey (ATLAS) observations of the Chandra Deep Field South (CDFS) and the ESO Large Area ISO Survey South 1 (ELAIS-S1) fields by cross-identifying them with the Spitzer Wide-area Infrared Extragalactic (SWIRE) survey. We train our method with two sets of data: expert cross-identifications of CDFS from the initial ATLAS data release and crowdsourced cross-identifications of CDFS from Radio Galaxy Zoo. We found that a simple strategy of cross-identifying a radio component with the nearest galaxy performs comparably to our more complex methods, though our estimated best-case performance is near 100 per cent. ATLAS contains 87 complex radio sources that have been cross-identified by experts, so there are not enough complex examples to learn how to cross-identify them accurately. Much larger datasets are therefore required for training methods like ours. We also show that training our method on Radio Galaxy Zoo cross-identifications gives comparable results to training on expert cross-identifications, demonstrating the value of crowdsourced training data.
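
    The nearest-galaxy baseline mentioned above can be written in a few lines; the sketch below uses toy coordinates on a small flat patch and a k-d tree, whereas real catalogues would require proper spherical geometry and survey-specific matching radii.

        # Nearest-neighbour cross-identification of radio components with host galaxies.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(3)
        galaxy_radec = rng.uniform(0.0, 1.0, size=(500, 2))    # toy host-candidate positions (deg)
        radio_radec = galaxy_radec[rng.choice(500, size=40)] + rng.normal(0, 1e-3, size=(40, 2))

        tree = cKDTree(galaxy_radec)
        dist, host_index = tree.query(radio_radec, k=1)

        for d, idx in zip(dist[:5], host_index[:5]):
            print(f"radio component matched to galaxy {idx} at separation {d*3600:.1f} arcsec")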

  1. Automated saccharification assay for determination of digestibility in plant materials.

    PubMed

    Gomez, Leonardo D; Whitehead, Caragh; Barakate, Abdellah; Halpin, Claire; McQueen-Mason, Simon J

    2010-10-27

    digestibility of certain lignin-modified lines in a manner compatible with known effects of lignin modification on cell wall properties. We conclude that this automated assay platform is of sufficient sensitivity and reliability to undertake the screening of the large populations of plants necessary for mutant identification and genetic association studies.

  2. Identification of Motile Aeromonas Strains with the MicroScan WalkAway System in Conjunction with the Combo Negative Type 1S Panels

    PubMed Central

    Vivas, J.; Sáa, A. I.; Tinajas, A.; Barbeyto, L.; Rodríguez, L. A.

    2000-01-01

    This study was performed to compare the MicroScan WalkAway automated identification system in conjunction with the new MicroScan Combo Negative type 1S panels with conventional biochemical methods for identifying 85 environmental, clinical, and reference strains of eight Aeromonas species. PMID:10742279

  3. Space station automation study. Volume 1: Executive summary. Autonomous systems and assembly

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The purpose of the Space Station Automation Study (SSAS) was to develop informed technical guidance for NASA personnel in the use of autonomy and autonomous systems to implement space station functions.

  4. Emerging New Strategies for Successful Metabolite Identification in Metabolomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bingol, Ahmet K.; Bruschweiler-Li, Lei; Li, Dawei

    2016-02-26

    NMR is a very powerful tool for the identification of known and unknown (or unnamed) metabolites in complex mixtures as encountered in metabolomics. Known compounds can be reliably identified using 2D NMR methods, such as 13C-1H HSQC, for which powerful web servers with databases are available for semi-automated analysis. For the identification of unknown compounds, new combinations of NMR with MS have been developed recently that make synergistic use of the mutual strengths of the two techniques. The use of chemical additives in the NMR tube, such as reactive agents, paramagnetic ions, or charged silica nanoparticles, permits the identification of metabolites with specific physical chemical properties. In the following sections, we give an overview of some of the recent advances in metabolite identification and discuss remaining challenges.

  5. Volpe Center Report on Advanced Automation System Benefit-Cost Study: Final Report

    DOT National Transportation Integrated Search

    1993-10-25

    The Volpe Center study of the benefits and costs of the AAS approached the analysis by segments rather than as a whole system. The study concentrated on the automation aspects of the ATC system and applied conservative assumptions to the estimation ...

  6. High Resolution Ultrasonic Method for 3D Fingerprint Recognizable Characteristics in Biometrics Identification

    NASA Astrophysics Data System (ADS)

    Maev, R. Gr.; Bakulin, E. Yu.; Maeva, A.; Severin, F.

    Biometrics is a rapidly evolving scientific and applied discipline that studies possible ways of personal identification by means of unique biological characteristics. Such identification is important in various situations requiring restricted access to certain areas, information and personal data, and for cases of medical emergencies. A number of automated biometric techniques have been developed, including fingerprint, hand shape, eye and facial recognition, thermographic imaging, etc. All these techniques differ in the recognizable parameters, usability, accuracy and cost. Among these, fingerprint recognition stands alone since a very large database of fingerprints has already been acquired. Also, fingerprints are key evidence left at a crime scene and can be used to identify suspects. Therefore, of all automated biometric techniques, especially in the field of law enforcement, fingerprint identification seems to be the most promising. We introduce a new development in ultrasonic fingerprint imaging. The proposed method obtains a scan only once and then varies the C-scan gate position and width to visualize acoustic reflections from any appropriate depth inside the skin. Also, B-scans and A-scans can be recreated from any position using such a data array, which gives control over the visualization options. By setting the C-scan gate deeper inside the skin, the distribution of the sweat pores (which are located along the ridges) can be easily visualized. This distribution should be unique for each individual, so it provides a means of personal identification that is not affected by any changes (accidental or intentional) of the fingers' surface conditions. This paper discusses different setups, acoustic parameters of the system, signal and image processing options and possible ways of three-dimensional visualization that could be used as a recognizable characteristic in biometric identification.

  7. Visual Recognition Software for Binary Classification and its Application to Pollen Identification

    NASA Astrophysics Data System (ADS)

    Punyasena, S. W.; Tcheng, D. K.; Nayak, A.

    2014-12-01

    An underappreciated source of uncertainty in paleoecology is the uncertainty of palynological identifications. The confidence of any given identification is not regularly reported in published results, so it cannot be incorporated into subsequent meta-analyses. Automated identification systems potentially provide a means of objectively measuring the confidence of a given count or single identification, as well as a mechanism for increasing sample sizes and throughput. We developed the software ARLO (Automated Recognition with Layered Optimization) to tackle difficult visual classification problems such as pollen identification. ARLO applies pattern recognition and machine learning to the analysis of pollen images. The features that the system discovers are not the traditional features of pollen morphology. Instead, general-purpose image features, such as pixel lines and grids of different dimensions, size, spacing, and resolution, are used. ARLO adapts to a given problem by searching for the most effective combination of feature representation and learning strategy. We present a two-phase approach which uses our machine learning process to first segment pollen grains from the background and then classify pollen pixels and report species ratios. We conducted two separate experiments that utilized two distinct sets of algorithms and optimization procedures. The first analysis focused on reconstructing black and white spruce pollen ratios, training and testing our classification model at the slide level. This allowed us to directly compare our automated counts and expert counts on slides of known spruce ratios. Our second analysis focused on maximizing classification accuracy at the individual pollen grain level. Instead of predicting ratios for given slides, we predicted the species represented in a given image window. The resulting analysis was more scalable, as we were able to adapt the most efficient parts of the methodology from our first analysis. ARLO was able to

  8. Automated detection of solar eruptions

    NASA Astrophysics Data System (ADS)

    Hurlburt, N.

    2015-12-01

    Observation of the solar atmosphere reveals a wide range of motions, from small-scale jets and spicules to global-scale coronal mass ejections (CMEs). Identifying and characterizing these motions are essential to advancing our understanding of the drivers of space weather. Both automated and visual identifications are currently used in identifying coronal mass ejections. To date, eruptions near the solar surface, which may be precursors to CMEs, have been identified primarily by visual inspection. Here we report on Eruption Patrol (EP): a software module that is designed to automatically identify eruptions from data collected by the Atmospheric Imaging Assembly on the Solar Dynamics Observatory (SDO/AIA). We describe the method underlying the module and compare its results to previous identifications found in the Heliophysics Event Knowledgebase. EP identifies eruption events that are consistent with those found by human annotations, but in a significantly more consistent and quantitative manner. Eruptions are found to be distributed within 15 Mm of the solar surface. They possess peak speeds ranging from 4 to 100 km/s and display a power-law probability distribution over that range. These characteristics are consistent with previous observations of prominences.

  9. Diagnostic Accuracy and Effectiveness of Automated Electronic Sepsis Alert Systems: A Systematic Review

    PubMed Central

    Makam, Anil N.; Nguyen, Oanh K.; Auerbach, Andrew D.

    2015-01-01

    Background Although timely treatment of sepsis improves outcomes, delays in administering evidence-based therapies are common. Purpose To determine whether automated real-time electronic sepsis alerts can: 1) accurately identify sepsis, and 2) improve process measures and outcomes. Data Sources We systematically searched MEDLINE, Embase, The Cochrane Library, and CINAHL from database inception through June 27, 2014. Study Selection Included studies that empirically evaluated one or both of the prespecified objectives. Data Extraction Two independent reviewers extracted data and assessed the risk of bias. Diagnostic accuracy of sepsis identification was measured by sensitivity, specificity, positive (PPV) and negative predictive values (NPV) and likelihood ratios (LR). Effectiveness was assessed by changes in sepsis care process measures and outcomes. Data Synthesis Of 1,293 citations, 8 studies met inclusion criteria, 5 for the identification of sepsis (n=35,423) and 5 for the effectiveness of sepsis alerts (n=6,894). Though definition of sepsis alert thresholds varied, most included systemic inflammatory response syndrome criteria ± evidence of shock. Diagnostic accuracy varied greatly, with PPV ranging from 20.5-53.8%, NPV 76.5-99.7%; LR+ 1.2-145.8; and LR- 0.06-0.86. There was modest evidence for improvement in process measures (i.e., antibiotic escalation), but only among patients in non-critical care settings; there were no corresponding improvements in mortality or length of stay. Minimal data were reported on potential harms due to false positive alerts. Conclusions Automated sepsis alerts derived from electronic health data may improve care processes but tend to have poor positive predictive value and do not improve mortality or length of stay. PMID:25758641
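
    For reference, the reported accuracy measures all derive from a 2x2 confusion matrix; the worked example below uses invented counts purely to show the arithmetic.

        # Diagnostic accuracy metrics from a 2x2 confusion matrix (invented counts).
        tp, fp, fn, tn = 120, 300, 15, 4000   # alert vs. chart-review sepsis status

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        lr_pos = sensitivity / (1 - specificity)
        lr_neg = (1 - sensitivity) / specificity

        print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
              f"PPV={ppv:.2f} NPV={npv:.2f} LR+={lr_pos:.1f} LR-={lr_neg:.2f}")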

  10. Automated smoother for the numerical decoupling of dynamics models.

    PubMed

    Vilela, Marco; Borges, Carlos C H; Vinga, Susana; Vasconcelos, Ana Tereza R; Santos, Helena; Voit, Eberhard O; Almeida, Jonas S

    2007-08-21

    Structure identification of dynamic models for complex biological systems is the cornerstone of their reverse engineering. Biochemical Systems Theory (BST) offers a particularly convenient solution because its parameters are kinetic-order coefficients which directly identify the topology of the underlying network of processes. We have previously proposed a numerical decoupling procedure that allows the identification of multivariate dynamic models of complex biological processes. While described here within the context of BST, this procedure has general applicability to signal extraction. Our original implementation relied on artificial neural networks (ANN), which caused slight, undesirable bias during the smoothing of the time courses. As an alternative, we propose here an adaptation of the Whittaker smoother and demonstrate its role within a robust, fully automated structure identification procedure. In this report we propose a robust, fully automated solution for signal extraction from time series, which is the prerequisite for the efficient reverse engineering of biological systems models. The Whittaker smoother is reformulated within the context of information theory and extended by the development of adaptive signal segmentation to account for heterogeneous noise structures. The resulting procedure can be used on arbitrary time series with a nonstationary noise process; it is illustrated here with metabolic profiles obtained from in-vivo NMR experiments. The smoothed solution that is free of parametric bias permits differentiation, which is crucial for the numerical decoupling of systems of differential equations. The method is applicable to signal extraction from time series with a nonstationary noise structure and to the numerical decoupling of systems of differential equations into algebraic equations, and thus constitutes a rather general tool for the reverse engineering of mechanistic model descriptions from multivariate experimental
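
    A minimal Whittaker smoother (penalized least squares with a difference penalty) is sketched below for orientation; it omits the information-theoretic reformulation and the adaptive signal segmentation that the report develops.

        # Basic Whittaker smoother with a d-th order difference penalty (sketch).
        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        def whittaker_smooth(y, lam=1e4, d=2):
            m = len(y)
            D = sparse.csc_matrix(np.diff(np.eye(m), n=d, axis=0))   # difference operator
            A = sparse.identity(m, format="csc") + lam * (D.T @ D)
            return spsolve(sparse.csc_matrix(A), np.asarray(y, dtype=float))

        t = np.linspace(0, 1, 300)
        y = np.exp(-3 * t) + 0.05 * np.random.randn(t.size)   # toy decaying metabolite profile
        z = whittaker_smooth(y, lam=1e3)
        print(float(np.mean((z - np.exp(-3 * t)) ** 2)))       # residual vs. noise-free signal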

  11. Humans: still vital after all these years of automation.

    PubMed

    Parasuraman, Raja; Wickens, Christopher D

    2008-06-01

    The authors discuss empirical studies of human-automation interaction and their implications for automation design. Automation is prevalent in safety-critical systems and increasingly in everyday life. Many studies of human performance in automated systems have been conducted over the past 30 years. Developments in three areas are examined: levels and stages of automation, reliance on and compliance with automation, and adaptive automation. Automation applied to information analysis or decision-making functions leads to differential system performance benefits and costs that must be considered in choosing appropriate levels and stages of automation. Human user dependence on automated alerts and advisories reflects two components of operator trust, reliance and compliance, which are in turn determined by the threshold designers use to balance automation misses and false alarms. Finally, adaptive automation can provide additional benefits in balancing workload and maintaining the user's situation awareness, although more research is required to identify when adaptation should be user controlled or system driven. The past three decades of empirical research on humans and automation has provided a strong science base that can be used to guide the design of automated systems. This research can be applied to most current and future automated systems.

  12. Fast Metabolite Identification in Nuclear Magnetic Resonance Metabolomic Studies: Statistical Peak Sorting and Peak Overlap Detection for More Reliable Database Queries.

    PubMed

    Hoijemberg, Pablo A; Pelczer, István

    2018-01-05

    A lot of time is spent by researchers on the identification of metabolites in NMR-based metabolomic studies. The usual metabolite identification starts by employing public or commercial databases to match chemical shifts thought to belong to a given compound. Statistical total correlation spectroscopy (STOCSY), in use for more than a decade, speeds up the process by finding statistical correlations among peaks, creating a better peak list as input for the database query. However, the (normally not automated) analysis becomes challenging due to the intrinsic issue of peak overlap, where correlations from more than one compound appear in the STOCSY trace. Here we present a fully automated methodology that analyzes all STOCSY traces at once (every peak is chosen as a driver peak) and overcomes the peak overlap obstacle. Peak overlap detection by clustering analysis and sorting of traces (POD-CAST) first creates an overlap matrix from the STOCSY traces, then clusters the overlap traces based on their similarity and finally calculates a cumulative overlap index (COI) to account for both strong and intermediate correlations. This information is gathered in one plot to help the user identify the groups of peaks that would belong to a single molecule and perform a more reliable database query. The simultaneous examination of all traces reduces the time of analysis, compared to viewing STOCSY traces in pairs or small groups, and condenses the redundant information in the 2D STOCSY matrix into bands containing similar traces. The COI helps in the detection of overlapping peaks, which can be added to the peak list from another cross-correlated band. POD-CAST overcomes the generally overlooked and underestimated presence of overlapping peaks and detects them so that all compounds contributing to the peak overlap are included in the search, enabling the user to accelerate the metabolite identification process with more successful database queries and searching all tentative
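
    The core STOCSY step, before any of the POD-CAST clustering, is simply the correlation of a chosen driver point with every other point across the set of spectra; a bare-bones sketch with synthetic 1D spectra is given below.

        # Bare-bones STOCSY trace: correlate a driver point with all other points.
        import numpy as np

        rng = np.random.default_rng(4)
        n_spectra, n_points = 40, 1000
        conc = rng.uniform(0.5, 2.0, size=n_spectra)               # varying metabolite level

        spectra = 0.02 * rng.standard_normal((n_spectra, n_points))
        for pos in (200, 450, 700):                                 # three peaks of one compound
            spectra[:, pos] += conc

        driver = 450
        trace = np.array([np.corrcoef(spectra[:, driver], spectra[:, j])[0, 1]
                          for j in range(n_points)])
        correlated_peaks = np.where(trace > 0.9)[0]
        print("points correlated with the driver peak:", correlated_peaks)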

  13. The Impact of Office Automation on the Roles and Staffing Patterns of Office Employees: A Case Study.

    ERIC Educational Resources Information Center

    Goodrich, Elizabeth A.

    1989-01-01

    The study examined impact of office automation on the roles and staffing patterns of office employees at the National Institute of Neurological and Communicative Disorders and Stroke. Results of an interview study indicate that automation has had a favorable impact on the way work is accomplished and on the work environment. (Author/CH)

  14. Automated Extraction of Family History Information from Clinical Notes

    PubMed Central

    Bill, Robert; Pakhomov, Serguei; Chen, Elizabeth S.; Winden, Tamara J.; Carter, Elizabeth W.; Melton, Genevieve B.

    2014-01-01

    Despite increased functionality for obtaining family history in a structured format within electronic health record systems, clinical notes often still contain this information. We developed and evaluated an Unstructured Information Management Application (UIMA)-based natural language processing (NLP) module for automated extraction of family history information with functionality for identifying statements, observations (e.g., disease or procedure), relative or side of family with attributes (i.e., vital status, age of diagnosis, certainty, and negation), and predication (“indicator phrases”), the latter of which was used to establish relationships between observations and family members. The family history NLP system demonstrated F-scores of 66.9, 92.4, 82.9, 57.3, 97.7, and 61.9 for detection of family history statements, family member identification, observation identification, negation identification, vital status, and overall extraction of the predications between family members and observations, respectively. While the system performed well for detection of family history statements and predication constituents, further work is needed to improve extraction of certainty and temporal modifications. PMID:25954443

  15. Introduction of automated blood pressure devices intended for a low resource setting in rural Tanzania.

    PubMed

    Baker, Elinor Chloe; Hezelgrave, Natasha; Magesa, Stephen M; Edmonds, Sally; de Greeff, Annemarie; Shennan, Andrew

    2012-04-01

    Regular blood pressure (BP) monitoring is a cost-effective means of early identification and management of hypertensive disease in pregnancy. In much of rural sub-Saharan Africa, the ability to take and act on accurate BP measurements is lacking as a result of poorly functioning or absent equipment and/or inadequate staff education. This study describes the feasibility of using validated automated BP devices suitable for low-resource settings (LRS) in primary health-care facilities in rural Tanzania. Following a primary survey, 19 BP devices were distributed to 11 clinics and re-assessed at one, three, six, 12 and 36 months. Devices were used frequently with high levels of user satisfaction and good durability. We conclude that the use of automated BP devices in LRS is feasible and sustainable. An assessment of their ability to reduce maternal and perinatal morbidity and mortality is vital.

  16. Does Automated Feedback Improve Writing Quality?

    ERIC Educational Resources Information Center

    Wilson, Joshua; Olinghouse, Natalie G.; Andrada, Gilbert N.

    2014-01-01

    The current study examines data from students in grades 4-8 who participated in a statewide computer-based benchmark writing assessment that featured automated essay scoring and automated feedback. We examined whether the use of automated feedback was associated with gains in writing quality across revisions to an essay, and with transfer effects…

  17. Automation-induced monitoring inefficiency: role of display location.

    PubMed

    Singh, I L; Molloy, R; Parasuraman, R

    1997-01-01

    Operators can be poor monitors of automation if they are engaged concurrently in other tasks. However, in previous studies of this phenomenon the automated task was always presented in the periphery, away from the primary manual tasks that were centrally displayed. In this study we examined whether centrally locating an automated task would boost monitoring performance during a flight-simulation task consisting of system monitoring, tracking and fuel resource management sub-tasks. Twelve nonpilot subjects were required to perform the tracking and fuel management tasks manually while watching the automated system monitoring task for occasional failures. The automation reliability was constant at 87.5% for six subjects and variable (alternating between 87.5% and 56.25%) for the other six subjects. Each subject completed four 30 min sessions over a period of 2 days. In each automation reliability condition the automation routine was disabled for the last 20 min of the fourth session in order to simulate catastrophic automation failure (0% reliability). Monitoring for automation failure was inefficient when automation reliability was constant but not when it varied over time, replicating previous results. Furthermore, there was no evidence of a resource or speed-accuracy trade-off between tasks. Thus, automation-induced failures of monitoring cannot be prevented by centrally locating the automated task.

  18. Automation-induced monitoring inefficiency: role of display location

    NASA Technical Reports Server (NTRS)

    Singh, I. L.; Molloy, R.; Parasuraman, R.

    1997-01-01

    Operators can be poor monitors of automation if they are engaged concurrently in other tasks. However, in previous studies of this phenomenon the automated task was always presented in the periphery, away from the primary manual tasks that were centrally displayed. In this study we examined whether centrally locating an automated task would boost monitoring performance during a flight-simulation task consisting of system monitoring, tracking and fuel resource management sub-tasks. Twelve nonpilot subjects were required to perform the tracking and fuel management tasks manually while watching the automated system monitoring task for occasional failures. The automation reliability was constant at 87.5% for six subjects and variable (alternating between 87.5% and 56.25%) for the other six subjects. Each subject completed four 30 min sessions over a period of 2 days. In each automation reliability condition the automation routine was disabled for the last 20 min of the fourth session in order to simulate catastrophic automation failure (0% reliability). Monitoring for automation failure was inefficient when automation reliability was constant but not when it varied over time, replicating previous results. Furthermore, there was no evidence of a resource or speed-accuracy trade-off between tasks. Thus, automation-induced failures of monitoring cannot be prevented by centrally locating the automated task.

  19. Construction, implementation and testing of an image identification system using computer vision methods for fruit flies with economic importance (Diptera: Tephritidae).

    PubMed

    Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang

    2017-07-01

    Many species of Tephritidae are damaging to fruit, which might negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and an expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 gives candidate identification results. Afterwards, users can perform manual selection by comparing unidentified images with a subset of images corresponding to the automatic identification result. The system uses Gabor surface features in automated identification and yielded an overall classification success rate of 87% to the species level by the Independent Multi-part Image Automatic Identification Test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies. It makes the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.
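
    To make the feature-extraction step above concrete, here is a minimal Python sketch of Gabor-filter features feeding a nearest-neighbour ranking, in the spirit of a content-based image retrieval framework. The filter bank, mean/variance pooling, and distance metric are illustrative assumptions rather than the published AFIS1.0 configuration.

      import numpy as np
      from skimage.filters import gabor

      def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_angles=4):
          """Mean and variance of the Gabor magnitude response for each filter."""
          feats = []
          for f in frequencies:
              for k in range(n_angles):
                  real, imag = gabor(image, frequency=f, theta=k * np.pi / n_angles)
                  mag = np.hypot(real, imag)
                  feats.extend([mag.mean(), mag.var()])
          return np.asarray(feats)

      def rank_candidates(query_feats, library_feats):
          """Rank library images by Euclidean distance to the query features."""
          return np.argsort(np.linalg.norm(library_feats - query_feats, axis=1))

      img = np.zeros((32, 32)); img[12:20, :] = 1.0   # toy "wing pattern"
      print(gabor_features(img).shape)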

  20. Spot the match – wildlife photo-identification using information theory

    PubMed Central

    Speed, Conrad W; Meekan, Mark G; Bradshaw, Corey JA

    2007-01-01

    Background Effective approaches for the management and conservation of wildlife populations require a sound knowledge of population demographics, and this is often only possible through mark-recapture studies. We applied an automated spot-recognition program (I3S) for matching natural markings of wildlife that is based on a novel information-theoretic approach to incorporate matching uncertainty. Using a photo-identification database of whale sharks (Rhincodon typus) as an example case, the information criterion (IC) algorithm we developed resulted in a parsimonious ranking of potential matches of individuals in an image library. Automated matches were compared to manual-matching results to test the performance of the software and algorithm. Results Validation of matched and non-matched images provided a threshold IC weight (approximately 0.2) below which match certainty was not assured. Most images tested were assigned correctly; however, scores for the by-eye comparison were lower than expected, possibly due to the low sample size. The effect of increasing horizontal angle of sharks in images reduced matching likelihood considerably. There was a negative linear relationship between the number of matching spot pairs and matching score, but this relationship disappeared when using the IC algorithm. Conclusion The software and use of easily applied information-theoretic scores of match parsimony provide a reliable and freely available method for individual identification of wildlife, with wide applications and the potential to improve mark-recapture studies without resorting to invasive marking techniques. PMID:17227581

  1. Automated Identification of Medically Important Bacteria by 16S rRNA Gene Sequencing Using a Novel Comprehensive Database, 16SpathDB

    PubMed Central

    Woo, Patrick C. Y.; Teng, Jade L. L.; Yeung, Juilian M. Y.; Tse, Herman; Lau, Susanna K. P.; Yuen, Kwok-Yung

    2011-01-01

    Despite the increasing use of 16S rRNA gene sequencing, interpretation of 16S rRNA gene sequence results is one of the most difficult problems faced by clinical microbiologists and technicians. To overcome the problems we encountered in the existing databases during 16S rRNA gene sequence interpretation, we built a comprehensive database, 16SpathDB (http://147.8.74.24/16SpathDB) based on the 16S rRNA gene sequences of all medically important bacteria listed in the Manual of Clinical Microbiology and evaluated its use for automated identification of these bacteria. Among 91 nonduplicated bacterial isolates collected in our clinical microbiology laboratory, 71 (78%) were reported by 16SpathDB as a single bacterial species having >98.0% nucleotide identity with the query sequence, 19 (20.9%) were reported as more than one bacterial species having >98.0% nucleotide identity with the query sequence, and 1 (1.1%) was reported as no match. For the 71 bacterial isolates reported as a single bacterial species, all results were identical to their true identities as determined by a polyphasic approach. For the 19 bacterial isolates reported as more than one bacterial species, all results contained their true identities as determined by a polyphasic approach and all of them had their true identities as the “best match in 16SpathDB.” For the isolate (Gordonibacter pamelaeae) reported as no match, the bacterium has never been reported to be associated with human disease and was not included in the Manual of Clinical Microbiology. 16SpathDB is an automated, user-friendly, efficient, accurate, and regularly updated database for 16S rRNA gene sequence interpretation in clinical microbiology laboratories. PMID:21389154
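
    The reporting rule in the record above (a single species vs. several species sharing >98.0% nucleotide identity vs. no match) can be sketched in a few lines of Python. For simplicity the sequences are assumed to be pre-aligned and of equal length; a real pipeline would align the query first, and the database entries below are hypothetical.

      def percent_identity(seq_a, seq_b):
          """Percentage of identical positions between two aligned sequences."""
          matches = sum(a == b for a, b in zip(seq_a, seq_b))
          return 100.0 * matches / len(seq_a)

      def identify(query, database, threshold=98.0):
          """Return every species above the identity threshold plus the best match,
          or an empty list ('no match') if nothing passes."""
          scored = sorted(((percent_identity(query, ref), name)
                           for name, ref in database.items()), reverse=True)
          hits = [name for ident, name in scored if ident > threshold]
          best = scored[0][1] if hits else None
          return hits, best

      db = {"Species A": "ACGTACGTACGTACGTACGT",
            "Species B": "ACGTACGAACGTACGTACGT"}
      print(identify("ACGTACGTACGTACGTACGT", db))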

  2. Evolution paths for advanced automation

    NASA Technical Reports Server (NTRS)

    Healey, Kathleen J.

    1990-01-01

    As Space Station Freedom (SSF) evolves, increased automation and autonomy will be required to meet Space Station Freedom Program (SSFP) objectives. As a precursor to the use of advanced automation within the SSFP, especially if it is to be used on SSF (e.g., to automate the operation of the flight systems), the underlying technologies will need to be elevated to a high level of readiness to ensure safe and effective operations. Ground facilities supporting the development of these flight systems -- from research and development laboratories through formal hardware and software development environments -- will be responsible for achieving these levels of technology readiness. These facilities will need to evolve to support the general evolution of the SSFP. This evolution will include support for increasing the use of advanced automation. The SSF Advanced Development Program has funded a study to define evolution paths for advanced automation within the SSFP's ground-based facilities which will enable, promote, and accelerate the appropriate use of advanced automation on-board SSF. The current capability of the test beds and facilities, such as the Software Support Environment, with regard to advanced automation, has been assessed and their desired evolutionary capabilities have been defined. Plans and guidelines for achieving this necessary capability have been constructed. The approach taken has combined in-depth interviews of test bed personnel at all SSF Work Package centers with awareness of relevant state-of-the-art technology and technology insertion methodologies. Key recommendations from the study include advocating a NASA-wide task force for advanced automation, and the creation of software prototype transition environments to facilitate the incorporation of advanced automation in the SSFP.

  3. Automated flow cytometric analysis across large numbers of samples and cell types.

    PubMed

    Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno

    2015-04-01

    Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc.
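
    The core clustering step described above, a Gaussian Mixture Model with the number of components chosen by the Bayesian Information Criterion, can be sketched with scikit-learn as follows. The synthetic data, component range, and covariance type are assumptions for illustration; the published FlowGM pipeline adds meta-clustering against a reference donor and label integration into FCS files.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def fit_gmm_with_bic(events, max_components=10, random_state=0):
          """Fit GMMs with 1..max_components clusters and keep the lowest-BIC model."""
          best_model, best_bic = None, np.inf
          for k in range(1, max_components + 1):
              gmm = GaussianMixture(n_components=k, covariance_type="full",
                                    random_state=random_state).fit(events)
              bic = gmm.bic(events)
              if bic < best_bic:
                  best_model, best_bic = gmm, bic
          return best_model

      rng = np.random.default_rng(1)
      events = np.vstack([rng.normal(0, 1, (500, 4)), rng.normal(5, 1, (500, 4))])
      model = fit_gmm_with_bic(events)
      print(model.n_components, model.predict(events[:5]))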

  4. The value of automated gel column agglutination technology in the identification of true inherited D blood types in massively transfused patients.

    PubMed

    Summers, Thomas; Johnson, Viviana V; Stephan, John P; Johnson, Gloria J; Leonard, George

    2009-08-01

    Massive transfusion of D- trauma patients in the combat setting involves the use of D+ red blood cells (RBCs) or whole blood along with suboptimal pretransfusion test result documentation. This presents challenges to the transfusion service of tertiary care military hospitals who ultimately receive these casualties because initial D typing results may only reflect the transfused RBCs. After patients are stabilized, mixed-field reaction results on D typing indicate the patient's true inherited D phenotype. This case series illustrates the utility of automated gel column agglutination in detecting mixed-field reactions in these patients. The transfusion service test results, including the automated gel column agglutination D typing results, of four massively transfused D- patients transfused D+ RBCs is presented. To test the sensitivity of the automated gel column agglutination method in detecting mixed-field agglutination reactions, a comparative analysis of three automated technologies using predetermined mixtures of D+ and D- RBCs is also presented. The automated gel column agglutination method detected mixed-field agglutination in D typing in all four patients and in the three prepared control specimens. The automated microwell tube method identified one of the three prepared control specimens as indeterminate, which was subsequently manually confirmed as a mixed-field reaction. The automated solid-phase method was unable to detect any mixed fields. The automated gel column agglutination method provides a sensitive means for detecting mixed-field agglutination reactions in the determination of the true inherited D phenotype of combat casualties transfused massive amounts of D+ RBCs.

  5. Automated selected reaction monitoring data analysis workflow for large-scale targeted proteomic studies.

    PubMed

    Surinova, Silvia; Hüttenhain, Ruth; Chang, Ching-Yun; Espona, Lucia; Vitek, Olga; Aebersold, Ruedi

    2013-08-01

    Targeted proteomics based on selected reaction monitoring (SRM) mass spectrometry is commonly used for accurate and reproducible quantification of protein analytes in complex biological mixtures. Strictly hypothesis-driven, SRM assays quantify each targeted protein by collecting measurements on its peptide fragment ions, called transitions. To achieve sensitive and accurate quantitative results, experimental design and data analysis must consistently account for the variability of the quantified transitions. This consistency is especially important in large experiments, which increasingly require profiling up to hundreds of proteins over hundreds of samples. Here we describe a robust and automated workflow for the analysis of large quantitative SRM data sets that integrates data processing, statistical protein identification and quantification, and dissemination of the results. The integrated workflow combines three software tools: mProphet for peptide identification via probabilistic scoring; SRMstats for protein significance analysis with linear mixed-effect models; and PASSEL, a public repository for storage, retrieval and query of SRM data. The input requirements for the protocol are files with SRM traces in mzXML format, and a file with a list of transitions in a text tab-separated format. The protocol is especially suited for data with heavy isotope-labeled peptide internal standards. We demonstrate the protocol on a clinical data set in which the abundances of 35 biomarker candidates were profiled in 83 blood plasma samples of subjects with ovarian cancer or benign ovarian tumors. The time frame to realize the protocol is 1-2 weeks, depending on the number of replicates used in the experiment.

  6. Team-Centered Perspective for Adaptive Automation Design

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III

    2003-01-01

    Automation represents a very active area of human factors research. The journal, Human Factors, published a special issue on automation in 1985. Since then, hundreds of scientific studies have been published examining the nature of automation and its interaction with human performance. However, despite a dramatic increase in research investigating human factors issues in aviation automation, there remain areas that need further exploration. This NASA Technical Memorandum describes a new area of automation design and research, called adaptive automation. It discusses the concepts and outlines the human factors issues associated with the new method of adaptive function allocation. The primary focus is on human-centered design, and specifically on ensuring that adaptive automation is designed from a team-centered perspective. The document shows that adaptive automation has many human factors issues common to traditional automation design. Much like the introduction of other new technologies and paradigm shifts, adaptive automation presents an opportunity to remediate current problems but poses new ones for human-automation interaction in aerospace operations. The review here is intended to communicate the philosophical perspective and direction of adaptive automation research conducted under the Aerospace Operations Systems (AOS), Physiological and Psychological Stressors and Factors (PPSF) project.

  7. Optical Coherence Tomography in the UK Biobank Study - Rapid Automated Analysis of Retinal Thickness for Large Population-Based Studies.

    PubMed

    Keane, Pearse A; Grossi, Carlota M; Foster, Paul J; Yang, Qi; Reisman, Charles A; Chan, Kinpui; Peto, Tunde; Thomas, Dhanes; Patel, Praveen J

    2016-01-01

    To describe an approach to the use of optical coherence tomography (OCT) imaging in large, population-based studies, including methods for OCT image acquisition, storage, and the remote, rapid, automated analysis of retinal thickness. In UK Biobank, OCT images were acquired between 2009 and 2010 using a commercially available "spectral domain" OCT device (3D OCT-1000, Topcon). Images were obtained using a raster scan protocol, 6 mm x 6 mm in area, and consisting of 128 B-scans. OCT image sets were stored on UK Biobank servers in a central repository, adjacent to high performance computers. Rapid, automated analysis of retinal thickness was performed using custom image segmentation software developed by the Topcon Advanced Biomedical Imaging Laboratory (TABIL). This software employs dual-scale gradient information to allow for automated segmentation of nine intraretinal boundaries in a rapid fashion. 67,321 participants (134,642 eyes) in UK Biobank underwent OCT imaging of both eyes as part of the ocular module. 134,611 images were successfully processed with 31 images failing segmentation analysis due to corrupted OCT files or withdrawal of subject consent for UKBB study participation. Average time taken to call up an image from the database and complete segmentation analysis was approximately 120 seconds per data set per login, and analysis of the entire dataset was completed in approximately 28 days. We report an approach to the rapid, automated measurement of retinal thickness from nearly 140,000 OCT image sets from the UK Biobank. In the near future, these measurements will be publicly available for utilization by researchers around the world, and thus for correlation with the wealth of other data collected in UK Biobank. The automated analysis approaches we describe may be of utility for future large population-based epidemiological studies, clinical trials, and screening programs that employ OCT imaging.

  8. Asleep at the automated wheel-Sleepiness and fatigue during highly automated driving.

    PubMed

    Vogelpohl, Tobias; Kühn, Matthias; Hummel, Thomas; Vollrath, Mark

    2018-03-20

    Due to the lack of active involvement in the driving situation and due to monotonous driving environments drivers with automation may be prone to become fatigued faster than manual drivers (e.g. Schömig et al., 2015). However, little is known about the progression of fatigue during automated driving and its effects on the ability to take back manual control after a take-over request. In this driving simulator study with N = 60 drivers we used a three-factorial 2 × 2 × 12 mixed design to analyze the progression (12 × 5 min; within subjects) of driver fatigue in drivers with automation compared to manual drivers (between subjects). Driver fatigue was induced as either mainly sleep related or mainly task related fatigue (between subjects). Additionally, we investigated the drivers' reactions to a take-over request in a critical driving scenario to gain insights into the ability of fatigued drivers to regain manual control and situation awareness after automated driving. Drivers in the automated driving condition exhibited facial indicators of fatigue after 15 to 35 min of driving. Manual drivers only showed similar indicators of fatigue if they suffered from a lack of sleep and then only after a longer period of driving (approx. 40 min). Several drivers in the automated condition closed their eyes for extended periods of time. In the driving with automation condition mean automation deactivation times after a take-over request were slower for a certain percentage (about 30%) of the drivers with a lack of sleep (M = 3.2; SD = 2.1 s) compared to the reaction times after a long drive (M = 2.4; SD = 0.9 s). Drivers with automation also took longer than manual drivers to first glance at the speed display after a take-over request and were more likely to stay behind a braking lead vehicle instead of overtaking it. Drivers are unable to stay alert during extended periods of automated driving without non-driving related tasks. Fatigued drivers could

  9. Automated detection of diabetic retinopathy on digital fundus images.

    PubMed

    Sinthanayothin, C; Boyce, J F; Williamson, T H; Cook, H L; Mensah, E; Lal, S; Usher, D

    2002-02-01

    The aim was to develop an automated screening system to analyse digital colour retinal images for important features of non-proliferative diabetic retinopathy (NPDR). High performance pre-processing of the colour images was performed. Previously described automated image analysis systems were used to detect major landmarks of the retinal image (optic disc, blood vessels and fovea). Recursive region growing segmentation algorithms combined with the use of a new technique, termed a 'Moat Operator', were used to automatically detect features of NPDR. These features included haemorrhages and microaneurysms (HMA), which were treated as one group, and hard exudates as another group. Sensitivity and specificity data were calculated by comparison with an experienced fundoscopist. The algorithm for exudate recognition was applied to 30 retinal images of which 21 contained exudates and nine were without pathology. The sensitivity and specificity for exudate detection were 88.5% and 99.7%, respectively, when compared with the ophthalmologist. HMA were present in 14 retinal images. The algorithm achieved a sensitivity of 77.5% and specificity of 88.7% for detection of HMA. Fully automated computer algorithms were able to detect hard exudates and HMA. This paper presents encouraging results in automatic identification of important features of NPDR.
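
    A minimal Python sketch of the region-growing idea mentioned above is given below, written iteratively to avoid recursion limits. The intensity tolerance and 4-connected neighbourhood are illustrative assumptions; the published algorithm additionally uses colour pre-processing and the 'Moat Operator'.

      import numpy as np

      def region_grow(image, seed, tol=10.0):
          """Grow a region from `seed`, adding 4-connected pixels whose intensity
          differs from the seed value by at most `tol`. Returns a boolean mask."""
          h, w = image.shape
          mask = np.zeros((h, w), dtype=bool)
          seed_val = float(image[seed])
          stack = [seed]
          while stack:
              y, x = stack.pop()
              if not (0 <= y < h and 0 <= x < w) or mask[y, x]:
                  continue
              if abs(float(image[y, x]) - seed_val) > tol:
                  continue
              mask[y, x] = True
              stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
          return mask

      img = np.array([[10, 12, 50], [11, 13, 52], [55, 54, 53]], dtype=float)
      print(region_grow(img, (0, 0), tol=5))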

  10. Production system chunking in SOAR: Case studies in automated learning

    NASA Technical Reports Server (NTRS)

    Allen, Robert

    1989-01-01

    A preliminary study of SOAR, a general intelligent architecture for automated problem solving and learning, is presented. The underlying principles of universal subgoaling and chunking were applied to a simple, yet representative, problem in artificial intelligence. A number of problem space representations were examined and compared. It is concluded that learning is an inherent and beneficial aspect of problem solving. Additional studies are suggested in domains relevant to mission planning and to SOAR itself.

  11. Category identification of changed land-use polygons in an integrated image processing/geographic information system

    NASA Technical Reports Server (NTRS)

    Westmoreland, Sally; Stow, Douglas A.

    1992-01-01

    A framework is proposed for analyzing ancillary data and developing procedures for incorporating ancillary data to aid interactive identification of land-use categories in land-use updates. The procedures were developed for use within an integrated image processing/geographic information system (GIS) that permits simultaneous display of digital image data with the vector land-use data to be updated. With such systems and procedures, automated techniques are integrated with visual-based manual interpretation to exploit the capabilities of both. The procedural framework developed was applied as part of a case study to update a portion of the land-use layer in a regional scale GIS. About 75 percent of the area in the study site that experienced a change in land use was correctly labeled into 19 categories using the combination of automated and visual interpretation procedures developed in the study.

  12. Continuous-flow automation and hemolysis index: a crucial combination.

    PubMed

    Lippi, Giuseppe; Plebani, Mario

    2013-04-01

    A paradigm shift has occurred in the role and organization of laboratory diagnostics over the past decades, wherein consolidation or networking of small laboratories into larger factories and point-of-care testing have simultaneously evolved and now seem to favorably coexist. There is now evidence, however, that the growing implementation of continuous-flow automation, especially in closed systems, has not eased the identification of hemolyzed specimens since the integration of preanalytical and analytical workstations would hide them from visual scrutiny, with an inherent risk that unreliable test results may be released to the stakeholders. Along with other technical breakthroughs, the new generation of laboratory instrumentation is increasingly equipped with systems that can systematically and automatically be tested for a broad series of interferences, the so-called serum indices, which also include the hemolysis index. The routine implementation of these technical tools in clinical laboratories equipped with continuous-flow automation carries several advantages and some drawbacks that are discussed in this article.

  13. The Automation-by-Expertise-by-Training Interaction.

    PubMed

    Strauch, Barry

    2017-03-01

    I introduce the automation-by-expertise-by-training interaction in automated systems and discuss its influence on operator performance. Transportation accidents that, across a 30-year interval, demonstrated identical automation-related operator errors suggest a need to reexamine traditional views of automation. I review accident investigation reports, regulator studies, and literature on human-computer interaction, expertise, and training and discuss how failing to attend to the interaction of automation, expertise level, and training has enabled operators to commit identical automation-related errors. Automated systems continue to provide capabilities exceeding operators' need for effective system operation and provide interfaces that can hinder, rather than enhance, operator automation-related situation awareness. Because of limitations in time and resources, training programs do not provide operators the expertise needed to effectively operate these automated systems, requiring them to obtain the expertise ad hoc during system operations. As a result, many do not acquire necessary automation-related system expertise. Integrating automation with expected operator expertise levels, and within training programs that provide operators the necessary automation expertise, can reduce opportunities for automation-related operator errors. Research to address the automation-by-expertise-by-training interaction is needed. However, such research must meet challenges inherent to examining realistic sociotechnical system automation features with representative samples of operators, perhaps by using observational and ethnographic research. Research in this domain should improve the integration of design and training and, it is hoped, enhance operator performance.

  14. Automated feature extraction for retinal vascular biometry in zebrafish using OCT angiography

    NASA Astrophysics Data System (ADS)

    Bozic, Ivan; Rao, Gopikrishna M.; Desai, Vineet; Tao, Yuankai K.

    2017-02-01

    Zebrafish have been identified as an ideal model for angiogenesis because of anatomical and functional similarities with other vertebrates. The scale and complexity of zebrafish assays are limited by the need to manually treat and serially screen animals, and recent technological advances have focused on automation and improving throughput. Here, we use optical coherence tomography (OCT) and OCT angiography (OCT-A) to perform noninvasive, in vivo imaging of retinal vasculature in zebrafish. OCT-A summed voxel projections were low pass filtered and skeletonized to create an en face vascular map prior to connectivity analysis. Vascular segmentation was referenced to the optic nerve head (ONH), which was identified by automatically segmenting the retinal pigment epithelium boundary on the OCT structural volume. The first vessel branch generation was identified as skeleton segments with branch points closest to the ONH, and subsequent generations were found iteratively by expanding the search space outwards from the ONH. Biometric parameters, including length, curvature, and branch angle of each vessel segment were calculated and grouped by branch generation. Despite manual handling and alignment of each animal over multiple time points, we observe distinct qualitative patterns that enable unique identification of each eye from individual animals. We believe this OCT-based retinal biometry method can be applied for automated animal identification and handling in high-throughput organism-level pharmacological assays and genetic screens. In addition, these extracted features may enable high-resolution quantification of longitudinal vascular changes as a method for studying zebrafish models of retinal neovascularization and vascular remodeling.
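
    The projection, filtering, and skeletonisation chain described above can be sketched with scipy and scikit-image as follows. The Gaussian low-pass filter, threshold, and synthetic volume are assumptions for illustration; the optic nerve head anchoring and branch-generation analysis of the published method are omitted.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from skimage.morphology import skeletonize

      def vascular_skeleton(octa_volume, sigma=1.5, threshold=0.5):
          """octa_volume: (z, y, x) OCT-A volume. Returns a 2D skeleton mask."""
          projection = octa_volume.sum(axis=0)                 # summed voxel projection
          smoothed = gaussian_filter(projection, sigma=sigma)  # low-pass filter
          norm = (smoothed - smoothed.min()) / (smoothed.max() - smoothed.min() + 1e-9)
          return skeletonize(norm > threshold)                 # 1-pixel-wide vessel map

      volume = np.random.default_rng(2).random((32, 64, 64))
      print(vascular_skeleton(volume).sum(), "skeleton pixels")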

  15. Genetic fingerprinting proves cross-correlated automatic photo-identification of individuals as highly efficient in large capture–mark–recapture studies

    PubMed Central

    Drechsler, Axel; Helling, Tobias; Steinfartz, Sebastian

    2015-01-01

    Capture–mark–recapture (CMR) approaches are the backbone of many studies in population ecology to gain insight on the life cycle, migration, habitat use, and demography of target species. The reliable and repeatable recognition of an individual throughout its lifetime is the basic requirement of a CMR study. Although invasive techniques are available to mark individuals permanently, noninvasive methods for individual recognition mainly rest on photographic identification of external body markings, which are unique at the individual level. The re-identification of an individual based on comparing shape patterns of photographs by eye is commonly used. Automated processes for photographic re-identification have been recently established, but their performance in large datasets (i.e., > 1000 individuals) has rarely been tested thoroughly. Here, we evaluated the performance of the program AMPHIDENT, an automatic algorithm to identify individuals on the basis of ventral spot patterns in the great crested newt (Triturus cristatus) versus the genotypic fingerprint of individuals based on highly polymorphic microsatellite loci using GENECAP. Between 2008 and 2010, we captured, sampled and photographed adult newts and calculated for 1648 samples/photographs recapture rates for both approaches. Recapture rates differed slightly with 8.34% for GENECAP and 9.83% for AMPHIDENT. With an estimated rate of 2% false rejections (FRR) and 0.00% false acceptances (FAR), AMPHIDENT proved to be a highly reliable algorithm for CMR studies of large datasets. We conclude that the application of automatic recognition software of individual photographs can be a rather powerful and reliable tool in noninvasive CMR studies for a large number of individuals. Because the cross-correlation of standardized shape patterns is generally applicable to any pattern that provides enough information, this algorithm is capable of becoming a single application with broad use in CMR studies for many

  16. Automation's Effect on Library Personnel.

    ERIC Educational Resources Information Center

    Dakshinamurti, Ganga

    1985-01-01

    Reports on survey studying the human-machine interface in Canadian university, public, and special libraries. Highlights include position category and educational background of 118 participants, participants' feelings toward automation, physical effects of automation, diffusion in decision making, interpersonal communication, future trends,…

  17. Experimental studies on the effect of automation on pilot situational awareness in the datalink ATC environment

    NASA Technical Reports Server (NTRS)

    Hahn, Edward C.; Hansman, R. J., Jr.

    1992-01-01

    An experiment to study how automation, when used in conjunction with datalink for the delivery of ATC clearance amendments, affects the situational awareness of aircrews was conducted. The study was focused on the relationship of situational awareness to automated Flight Management System (FMS) programming of datalinked clearances and the readback of ATC clearances. Situational awareness was tested by issuing nominally unacceptable ATC clearances and measuring whether the error was detected by the subject pilots. The experiment also varied the mode of clearance delivery: Verbal, Textual, and Graphical. The error detection performance and pilot preference results indicate that the automated programming of the FMS may be superior to manual programming. It is believed that automated FMS programming may relieve some of the cognitive load, allowing pilots to concentrate on the strategic implications of a clearance amendment. Also, readback appears to have value, but the small sample size precludes a definite conclusion. Furthermore, because textual and graphical modes of delivery offer different but complementary advantages for cognitive processing, a combination of these modes of delivery may be advantageous in a datalink presentation.

  18. Is partially automated driving a bad idea? Observations from an on-road study.

    PubMed

    Banks, Victoria A; Eriksson, Alexander; O'Donoghue, Jim; Stanton, Neville A

    2018-04-01

    The automation of longitudinal and lateral control has enabled drivers to become "hands and feet free" but they are required to remain in an active monitoring state with a requirement to resume manual control if required. This represents the single largest allocation of system function problem with vehicle automation as the literature suggests that humans are notoriously inefficient at completing prolonged monitoring tasks. To further explore whether partially automated driving solutions can appropriately support the driver in completing their new monitoring role, video observations were collected as part of an on-road study using a Tesla Model S being operated in Autopilot mode. A thematic analysis of video data suggests that drivers are not being properly supported in adhering to their new monitoring responsibilities and instead demonstrate behaviour indicative of complacency and over-trust. These attributes may encourage drivers to take more risks whilst out on the road. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Space biology initiative program definition review. Trade study 1: Automation costs versus crew utilization

    NASA Technical Reports Server (NTRS)

    Jackson, L. Neal; Crenshaw, John, Sr.; Hambright, R. N.; Nedungadi, A.; Mcfayden, G. M.; Tsuchida, M. S.

    1989-01-01

    A significant emphasis upon automation within the Space Biology Initiative hardware appears justified in order to conserve crew labor and crew training effort. Two generic forms of automation were identified: automation of data and information handling and decision making, and the automation of material handling, transfer, and processing. The use of automatic data acquisition, expert systems, robots, and machine vision will increase the volume of experiments and quality of results. The automation described may also influence efforts to miniaturize and modularize the large array of SBI hardware identified to date. The cost and benefit model developed appears to be a useful guideline for SBI equipment specifiers and designers. Additional refinements would enhance the validity of the model. Two NASA automation pilot programs, 'The Principal Investigator in a Box' and 'Rack Mounted Robots' were investigated and found to be quite appropriate for adaptation to the SBI program. There are other in-house NASA efforts that provide technology that may be appropriate for the SBI program. Important data is believed to exist in advanced medical labs throughout the U.S., Japan, and Europe. The information and data processing in medical analysis equipment is highly automated and future trends reveal continued progress in this area. However, automation of material handling and processing has progressed in a limited manner because the medical labs are not affected by the power and space constraints that Space Station medical equipment is faced with. Therefore, NASA's major emphasis in automation will require a lead effort in the automation of material handling to achieve optimal crew utilization.

  20. Misidentification of Yersinia pestis by automated systems, resulting in delayed diagnoses of human plague infections--Oregon and New Mexico, 2010-2011.

    PubMed

    Tourdjman, Mathieu; Ibraheem, Mam; Brett, Meghan; Debess, Emilio; Progulske, Barbara; Ettestad, Paul; McGivern, Teresa; Petersen, Jeannine; Mead, Paul

    2012-10-01

    One human plague case was reported in Oregon in September 2010 and another in New Mexico in May 2011. Misidentification of Yersinia pestis by automated identification systems contributed to delayed diagnoses for both cases.

  1. SU-G-206-01: A Fully Automated CT Tool to Facilitate Phantom Image QA for Quantitative Imaging in Clinical Trials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahi-Anwar, M; Lo, P; Kim, H

    Purpose: The use of Quantitative Imaging (QI) methods in Clinical Trials requires both verification of adherence to a specified protocol and an assessment of scanner performance under that protocol, which are currently accomplished manually. This work introduces automated phantom identification and image QA measure extraction towards a fully-automated CT phantom QA system to perform these functions and facilitate the use of Quantitative Imaging methods in clinical trials. Methods: This study used a retrospective cohort of CT phantom scans from existing clinical trial protocols - totaling 84 phantoms, across 3 phantom types using various scanners and protocols. The QA system identifies the input phantom scan through an ensemble of threshold-based classifiers. Each classifier - corresponding to a phantom type - contains a template slice, which is compared to the input scan on a slice-by-slice basis, resulting in slice-wise similarity metric values for each slice compared. Pre-trained thresholds (established from a training set of phantom images matching the template type) are used to filter the similarity distribution, and the slice with the most optimal local mean similarity, with local neighboring slices meeting the threshold requirement, is chosen as the classifier’s matched slice (if it exists). The classifier with the matched slice possessing the most optimal local mean similarity is then chosen as the ensemble’s best matching slice. If the best matching slice exists, the image QA algorithm and ROIs corresponding to the matching classifier extract the image QA measures. Results: Automated phantom identification performed with 84.5% accuracy and 88.8% sensitivity on 84 phantoms. Automated image quality measurements (following standard protocol) on identified water phantoms (n=35) matched user QA decisions with 100% accuracy. Conclusion: We provide a fully-automated CT phantom QA system consistent with manual QA performance. Further work will include
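
    The template-slice matching logic described in the Methods can be sketched as follows: each slice of the input volume is scored against the classifier's template slice, scores are filtered by a pre-trained threshold, and the slice with the best local mean similarity (with qualifying neighbours) is reported. Normalised cross-correlation, the window size, and the synthetic volume are assumptions for illustration.

      import numpy as np

      def ncc(a, b):
          """Normalised cross-correlation between two equally sized slices."""
          a = (a - a.mean()) / (a.std() + 1e-9)
          b = (b - b.mean()) / (b.std() + 1e-9)
          return float((a * b).mean())

      def best_matching_slice(volume, template, threshold=0.8, window=1):
          """Return (index, local mean score) of the best slice whose neighbours
          also pass the threshold, or None if no slice qualifies."""
          scores = np.array([ncc(s, template) for s in volume])
          best = None
          for i in range(window, len(scores) - window):
              local = scores[i - window:i + window + 1]
              if local.min() >= threshold and (best is None or local.mean() > best[1]):
                  best = (i, float(local.mean()))
          return best

      base = np.random.default_rng(3).random((32, 32))
      noise = [0.05 * np.random.default_rng(i).random((32, 32)) for i in range(20)]
      volume = np.stack([base + n for n in noise])
      print(best_matching_slice(volume, base))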

  2. Automated method for study of drug metabolism

    NASA Technical Reports Server (NTRS)

    Furner, R. L.; Feller, D. D.

    1973-01-01

    Commercially available equipment can be modified to provide automated system for assaying drug metabolism by continuous flow-through. System includes steps and devices for mixing drug with enzyme and cofactor in the presence of pure oxygen, dialyzing resulting metabolite against buffer, and determining amount of metabolite by colorimetric method.

  3. Target-decoy Based False Discovery Rate Estimation for Large-scale Metabolite Identification.

    PubMed

    Wang, Xusheng; Jones, Drew R; Shaw, Timothy I; Cho, Ji-Hoon; Wang, Yuanyuan; Tan, Haiyan; Xie, Boer; Zhou, Suiping; Li, Yuxin; Peng, Junmin

    2018-05-23

    Metabolite identification is a crucial step in mass spectrometry (MS)-based metabolomics. However, it is still challenging to assess the confidence of assigned metabolites. In this study, we report a novel method for estimating false discovery rate (FDR) of metabolite assignment with a target-decoy strategy, in which the decoys are generated through violating the octet rule of chemistry by adding small odd numbers of hydrogen atoms. The target-decoy strategy was integrated into JUMPm, an automated metabolite identification pipeline for large-scale MS analysis, and was also evaluated with two other metabolomics tools, mzMatch and mzMine 2. The reliability of FDR calculation was examined by false datasets, which were simulated by altering MS1 or MS2 spectra. Finally, we used the JUMPm pipeline coupled with the target-decoy strategy to process unlabeled and stable-isotope labeled metabolomic datasets. The results demonstrate that the target-decoy strategy is a simple and effective method for evaluating the confidence of high-throughput metabolite identification.
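
    The target-decoy FDR idea above reduces to counting decoy versus target hits above a score cutoff. A minimal Python sketch with hypothetical scores is shown below; generating the decoys themselves (adding small odd numbers of hydrogen atoms to violate the octet rule) happens upstream and is not shown.

      def estimate_fdr(scored_hits, cutoff):
          """scored_hits: list of (score, is_decoy) pairs.
          FDR is estimated as (# decoy hits) / (# target hits) above the cutoff."""
          targets = sum(1 for s, d in scored_hits if s >= cutoff and not d)
          decoys = sum(1 for s, d in scored_hits if s >= cutoff and d)
          return decoys / targets if targets else float("inf")

      hits = [(0.95, False), (0.90, False), (0.85, True), (0.80, False), (0.40, True)]
      print(f"FDR at cutoff 0.75: {estimate_fdr(hits, 0.75):.2f}")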

  4. Automation's influence on nuclear power plants: a look at three accidents and how automation played a role.

    PubMed

    Schmitt, Kara

    2012-01-01

    Nuclear power is one of the ways that we can design an efficient sustainable future. Automation is the primary system used to assist operators in the task of monitoring and controlling nuclear power plants (NPP). Automation performs tasks such as assessing the status of the plant's operations as well as making real-time, life-critical, situation-specific decisions. While the advantages and disadvantages of automation are well studied in a variety of domains, accidents remind us that there is still vulnerability to unknown variables. This paper will look at the effects of automation within three NPP accidents and incidents and will consider why automation failed in preventing these accidents from occurring. It will also review the accidents at the Three Mile Island, Chernobyl, and Fukushima Daiichi NPPs in order to determine where better use of automation could have resulted in a more desirable outcome.

  5. Physiological Self-Regulation and Adaptive Automation

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J.; Pope, Alan T.; Freeman, Frederick G.

    2007-01-01

    Adaptive automation has been proposed as a solution to current problems of human-automation interaction. Past research has shown the potential of this advanced form of automation to enhance pilot engagement and lower cognitive workload. However, there have been concerns voiced regarding issues, such as automation surprises, associated with the use of adaptive automation. This study examined the use of psychophysiological self-regulation training with adaptive automation that may help pilots deal with these problems through the enhancement of cognitive resource management skills. Eighteen participants were assigned to 3 groups (self-regulation training, false feedback, and control) and performed resource management, monitoring, and tracking tasks from the Multiple Attribute Task Battery. The tracking task was cycled between 3 levels of task difficulty (automatic, adaptive aiding, manual) on the basis of the electroencephalogram-derived engagement index. The other two tasks remained in automatic mode, which included a single automation failure. Those participants who had received self-regulation training performed significantly better and reported lower National Aeronautics and Space Administration Task Load Index scores than participants in the false feedback and control groups. The theoretical and practical implications of these results for adaptive automation are discussed.
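
    The adaptive logic in this line of research cycles task allocation on an EEG-derived engagement index; the index is commonly reported in this literature as the ratio of beta power to the sum of alpha and theta power, though the exact formulation, band powers, and switching thresholds in the sketch below are assumptions for illustration.

      def engagement_index(beta_power, alpha_power, theta_power):
          """EEG engagement index, commonly reported as beta / (alpha + theta)."""
          return beta_power / (alpha_power + theta_power)

      def allocate_task(index, low=0.35, high=0.65, current_mode="automatic"):
          """Low engagement: return the tracking task to the operator (manual);
          high engagement: automate it; otherwise keep the current mode."""
          if index < low:
              return "manual"
          if index > high:
              return "automatic"
          return current_mode

      print(allocate_task(engagement_index(beta_power=4.0, alpha_power=6.0, theta_power=6.0)))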

  6. Automated reconstruction of rainfall events responsible for shallow landslides

    NASA Astrophysics Data System (ADS)

    Vessia, G.; Parise, M.; Brunetti, M. T.; Peruccacci, S.; Rossi, M.; Vennari, C.; Guzzetti, F.

    2014-04-01

    Over the last 40 years, many contributions have been devoted to identifying the empirical rainfall thresholds (e.g. intensity vs. duration ID, cumulated rainfall vs. duration ED, cumulated rainfall vs. intensity EI) for the initiation of shallow landslides, based on local as well as worldwide inventories. Although different methods to trace the threshold curves have been proposed and discussed in the literature, a systematic study to develop an automated procedure to select the rainfall event responsible for the landslide occurrence has rarely been addressed. Nonetheless, objective criteria for estimating the rainfall responsible for the landslide occurrence (effective rainfall) play a prominent role in determining the threshold values. In this paper, two criteria for the identification of the effective rainfall events are presented: (1) the first is based on the analysis of the time series of rainfall mean intensity values over one month preceding the landslide occurrence, and (2) the second on the analysis of the trend in the time function of the cumulated mean intensity series calculated from the rainfall records measured through rain gauges. The two criteria have been implemented in an automated procedure written in R language. A sample of 100 shallow landslides collected in Italy by the CNR-IRPI research group from 2002 to 2012 has been used to calibrate the proposed procedure. The cumulated rainfall E and duration D of rainfall events that triggered the documented landslides are calculated through the new procedure and are fitted with a power law in the (D,E) diagram. The results are discussed by comparing the (D,E) pairs calculated by the automated procedure with those obtained by the expert method.
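
    The final fitting step above, a power law E = a * D^b through the (duration, cumulated rainfall) pairs of triggering events, can be sketched as a least-squares fit in log-log space. The procedure in the record is written in R; the sketch below uses Python for consistency with the other examples, and the sample pairs are hypothetical.

      import numpy as np

      def fit_power_law(durations_h, cumulated_mm):
          """Return (a, b) for E = a * D^b from a linear fit in log-log space."""
          log_d, log_e = np.log10(durations_h), np.log10(cumulated_mm)
          b, log_a = np.polyfit(log_d, log_e, deg=1)
          return 10 ** log_a, b

      durations = np.array([6.0, 12.0, 24.0, 48.0, 96.0])    # hours
      cumulated = np.array([20.0, 28.0, 41.0, 60.0, 85.0])   # mm
      a, b = fit_power_law(durations, cumulated)
      print(f"E = {a:.1f} * D^{b:.2f}")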

  7. Stages and levels of automation in support of space teleoperations.

    PubMed

    Li, Huiyang; Wickens, Christopher D; Sarter, Nadine; Sebok, Angelia

    2014-09-01

    This study examined the impact of stage of automation on the performance and perceived workload during simulated robotic arm control tasks in routine and off-nominal scenarios. Automation varies with respect to the stage of information processing it supports and its assigned level of automation. Making appropriate choices in terms of stages and levels of automation is critical to ensure robust joint system performance. To date, this issue has been empirically studied in domains such as aviation and medicine but not extensively in the context of space operations. A total of 36 participants played the role of a payload specialist and controlled a simulated robotic arm. Participants performed fly-to tasks with two types of automation (camera recommendation and trajectory control automation) of varying stage. Tasks were performed during routine scenarios and in scenarios in which either the trajectory control automation or a hazard avoidance automation failed. Increasing the stage of automation progressively improved performance and lowered workload when the automation was reliable, but incurred severe performance costs when the system failed. The results from this study support concerns about automation-induced complacency and automation bias when later stages of automation are introduced. The benefits of such automation are offset by the risk of catastrophic outcomes when system failures go unnoticed or become difficult to recover from. A medium stage of automation seems preferable as it provides sufficient support during routine operations and helps avoid potentially catastrophic outcomes in circumstances when the automation fails.

  8. Lessons learned from gene identification studies in Mendelian epilepsy disorders

    PubMed Central

    Hardies, Katia; Weckhuysen, Sarah; De Jonghe, Peter; Suls, Arvid

    2016-01-01

    Next-generation sequencing (NGS) technologies are now routinely used for gene identification in Mendelian disorders. Setting up cost-efficient NGS projects and managing the large number of variants remains, however, a challenging job. Here we provide insights into the decision-making processes before and after the use of NGS in gene identification studies. Genetic factors are thought to have a role in ~70% of all epilepsies, and a variety of inheritance patterns have been described for seizure-associated gene defects. We therefore chose epilepsy as a disease model and selected 35 NGS studies that focused on patients with a Mendelian epilepsy disorder. The strategies used for gene identification and their respective outcomes were reviewed. High-throughput NGS strategies have led to the identification of several new epilepsy-causing genes, enlarging our knowledge on both known and novel pathomechanisms. NGS findings have furthermore extended the awareness of phenotypical and genetic heterogeneity. By discussing recent studies we illustrate: (I) the power of NGS for gene identification in Mendelian disorders, (II) the accelerating pace in which this field evolves, and (III) the considerations that have to be made when performing NGS studies. Nonetheless, despite the enormous rise in gene discovery over the last decade, many patients and families included in gene identification studies still remain without a molecular diagnosis; hence, further genetic research is warranted. On the basis of successful NGS studies in epilepsy, we discuss general approaches to guide human geneticists and clinicians in setting up cost-efficient gene identification NGS studies. PMID:26603999

  9. Automation of ALK gene rearrangement testing with fluorescence in situ hybridization (FISH): a feasibility study.

    PubMed

    Zwaenepoel, Karen; Merkle, Dennis; Cabillic, Florian; Berg, Erica; Belaud-Rotureau, Marc-Antoine; Grazioli, Vittorio; Herelle, Olga; Hummel, Michael; Le Calve, Michele; Lenze, Dido; Mende, Stefanie; Pauwels, Patrick; Quilichini, Benoit; Repetti, Elena

    2015-02-01

    In the past several years we have observed a significant increase in our understanding of molecular mechanisms that drive lung cancer. Specifically in the non-small cell lung cancer sub-types, ALK gene rearrangements represent a sub-group of tumors that are targetable by the tyrosine kinase inhibitor Crizotinib, resulting in significant reductions in tumor burden. Phase II and III clinical trials were performed using an ALK break-apart FISH probe kit, making FISH the gold standard for identifying ALK rearrangements in patients. FISH is often considered a labor and cost intensive molecular technique, and in this study we aimed to demonstrate feasibility for automation of ALK FISH testing, to improve laboratory workflow and ease of testing. This involved automation of the pre-treatment steps of the ALK assay using various protocols on the VP 2000 instrument, and facilitating automated scanning of the fluorescent FISH specimens for simplified enumeration on various backend scanning and analysis systems. The results indicated that ALK FISH can be automated. Significantly, both the Ikoniscope and BioView system of automated FISH scanning and analysis systems provided a robust analysis algorithm to define ALK rearrangements. In addition, the BioView system facilitated consultation of difficult cases via the internet. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Diagnostic accuracy and effectiveness of automated electronic sepsis alert systems: A systematic review.

    PubMed

    Makam, Anil N; Nguyen, Oanh K; Auerbach, Andrew D

    2015-06-01

    Although timely treatment of sepsis improves outcomes, delays in administering evidence-based therapies are common. To determine whether automated real-time electronic sepsis alerts can: (1) accurately identify sepsis and (2) improve process measures and outcomes. We systematically searched MEDLINE, Embase, The Cochrane Library, and Cumulative Index to Nursing and Allied Health Literature from database inception through June 27, 2014. We included studies that empirically evaluated one or both of the prespecified objectives. Two independent reviewers extracted data and assessed the risk of bias. Diagnostic accuracy of sepsis identification was measured by sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and likelihood ratio (LR). Effectiveness was assessed by changes in sepsis care process measures and outcomes. Of 1293 citations, 8 studies met inclusion criteria, 5 for the identification of sepsis (n = 35,423) and 5 for the effectiveness of sepsis alerts (n = 6894). Though the definition of sepsis alert thresholds varied, most included systemic inflammatory response syndrome criteria ± evidence of shock. Diagnostic accuracy varied greatly, with PPV ranging from 20.5% to 53.8%, NPV 76.5% to 99.7%, LR+ 1.2 to 145.8, and LR- 0.06 to 0.86. There was modest evidence for improvement in process measures (ie, antibiotic escalation), but only among patients in non-critical care settings; there were no corresponding improvements in mortality or length of stay. Minimal data were reported on potential harms due to false positive alerts. Automated sepsis alerts derived from electronic health data may improve care processes but tend to have poor PPV and do not improve mortality or length of stay. © 2015 Society of Hospital Medicine.
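
    The diagnostic accuracy measures reported in this review (sensitivity, specificity, PPV, NPV, and likelihood ratios) all derive from the same 2x2 confusion matrix of alerts versus confirmed sepsis. The short Python sketch below illustrates the arithmetic with made-up counts, not data from the review.

    # Illustrative computation of the diagnostic accuracy measures named above
    # from a 2x2 confusion matrix. The counts passed in at the bottom are invented
    # for demonstration only.

    def alert_accuracy(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)                        # positive predictive value
        npv = tn / (tn + fn)                        # negative predictive value
        lr_pos = sensitivity / (1 - specificity)    # likelihood ratio of a positive alert
        lr_neg = (1 - sensitivity) / specificity    # likelihood ratio of a negative alert
        return dict(sensitivity=sensitivity, specificity=specificity,
                    ppv=ppv, npv=npv, lr_pos=lr_pos, lr_neg=lr_neg)

    print(alert_accuracy(tp=120, fp=300, fn=30, tn=5000))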

  11. Automated classification of neurological disorders of gait using spatio-temporal gait parameters.

    PubMed

    Pradhan, Cauchy; Wuehr, Max; Akrami, Farhoud; Neuhaeusser, Maximilian; Huth, Sabrina; Brandt, Thomas; Jahn, Klaus; Schniepp, Roman

    2015-04-01

    Automated pattern recognition systems have been used for accurate identification of neurological conditions as well as for the evaluation of treatment outcomes. This study aims to determine the accuracy of diagnoses of (oto-)neurological gait disorders using different types of automated pattern recognition techniques. Clinically confirmed cases of phobic postural vertigo (N = 30), cerebellar ataxia (N = 30), progressive supranuclear palsy (N = 30), bilateral vestibulopathy (N = 30), as well as healthy subjects (N = 30) were recruited for the study. Eight measurements with 136 variables were obtained from each subject using a GAITRite® sensor carpet. Subjects were randomly divided into two groups (training cases and validation cases). Sensitivity and specificity of k-nearest neighbor (KNN), naïve Bayes classifier (NB), artificial neural network (ANN), and support vector machine (SVM) in classifying the validation cases were calculated. ANN and SVM had the highest overall sensitivity with 90.6% and 92.0%, respectively, followed by NB (76.0%) and KNN (73.3%). SVM and ANN showed high false negative rates for bilateral vestibulopathy cases (20.0% and 26.0%), while KNN and NB had high false negative rates for progressive supranuclear palsy cases (76.7% and 40.0%). Automated pattern recognition systems are able to identify pathological gait patterns and establish clinical diagnoses with good accuracy. SVM and ANN in particular differentiate gait patterns of several distinct oto-neurological disorders of gait with high sensitivity and specificity compared to KNN and NB. Both SVM and ANN appear to be reliable diagnostic and management tools for disorders of gait. Copyright © 2015 Elsevier Ltd. All rights reserved.
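
    A minimal illustration of the kind of classifier comparison described above is sketched below using scikit-learn; random numbers stand in for the spatio-temporal gait variables and the five diagnostic groups, so the printed accuracies are meaningless except as a template.

    # Minimal sketch of a KNN / naive Bayes / neural network / SVM comparison on
    # gait-style feature vectors. Synthetic data stand in for the 136 GAITRite
    # variables per subject and the five diagnostic groups.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 136))          # 150 subjects x 136 gait variables
    y = np.repeat(np.arange(5), 30)          # 5 groups of 30 subjects each

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=0)

    classifiers = {
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "NB": GaussianNB(),
        "ANN": MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0),
        "SVM": SVC(kernel="rbf"),
    }
    for name, clf in classifiers.items():
        model = make_pipeline(StandardScaler(), clf)
        model.fit(X_train, y_train)
        print(name, "validation accuracy:", model.score(X_test, y_test))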

  12. Optical Coherence Tomography in the UK Biobank Study – Rapid Automated Analysis of Retinal Thickness for Large Population-Based Studies

    PubMed Central

    Grossi, Carlota M.; Foster, Paul J.; Yang, Qi; Reisman, Charles A.; Chan, Kinpui; Peto, Tunde; Thomas, Dhanes; Patel, Praveen J.

    2016-01-01

    Purpose To describe an approach to the use of optical coherence tomography (OCT) imaging in large, population-based studies, including methods for OCT image acquisition, storage, and the remote, rapid, automated analysis of retinal thickness. Methods In UK Biobank, OCT images were acquired between 2009 and 2010 using a commercially available “spectral domain” OCT device (3D OCT-1000, Topcon). Images were obtained using a raster scan protocol, 6 mm × 6 mm in area and consisting of 128 B-scans. OCT image sets were stored on UK Biobank servers in a central repository, adjacent to high performance computers. Rapid, automated analysis of retinal thickness was performed using custom image segmentation software developed by the Topcon Advanced Biomedical Imaging Laboratory (TABIL). This software employs dual-scale gradient information to allow for automated segmentation of nine intraretinal boundaries in a rapid fashion. Results 67,321 participants (134,642 eyes) in UK Biobank underwent OCT imaging of both eyes as part of the ocular module. 134,611 images were successfully processed, with 31 images failing segmentation analysis due to corrupted OCT files or withdrawal of subject consent for UKBB study participation. Average time taken to call up an image from the database and complete segmentation analysis was approximately 120 seconds per data set per login, and analysis of the entire dataset was completed in approximately 28 days. Conclusions We report an approach to the rapid, automated measurement of retinal thickness from nearly 140,000 OCT image sets from the UK Biobank. In the near future, these measurements will be publicly available for utilization by researchers around the world, and thus for correlation with the wealth of other data collected in UK Biobank. The automated analysis approaches we describe may be of utility for future large population-based epidemiological studies, clinical trials, and screening programs that employ OCT imaging. PMID:27716837

  13. SHARP: Spacecraft Health Automated Reasoning Prototype

    NASA Technical Reports Server (NTRS)

    Atkinson, David J.

    1991-01-01

    Planetary spacecraft mission operations as applied to SHARP are studied, and the knowledge systems involved in this study are detailed. The SHARP development task and Voyager telecom link analysis were examined. It was concluded that artificial intelligence has a proven capability to deliver useful functions in a real-time space flight operations environment. SHARP has precipitated a major change in the acceptance of automation at JPL. The potential payoff from automation using AI is substantial. SHARP and other AI technology are being transferred into systems in development, including mission operations automation, science data systems, and infrastructure applications.

  14. Automated synovium segmentation in doppler ultrasound images for rheumatoid arthritis assessment

    NASA Astrophysics Data System (ADS)

    Yeung, Pak-Hei; Tan, York-Kiat; Xu, Shuoyu

    2018-02-01

    Better clinical tools are needed to improve the monitoring of synovitis (synovial inflammation in the joints) in rheumatoid arthritis (RA) assessment. Because it is economical, safe, and fast, ultrasound (US), especially Doppler ultrasound, is frequently used. However, manual scoring of synovitis in US images is subjective and prone to observer variations. In this study, we propose a new and robust method for automated synovium segmentation in the commonly affected joints, i.e. the metacarpophalangeal (MCP) and metatarsophalangeal (MTP) joints, which would facilitate automation in quantitative RA assessment. The bone contour in the US image is first detected using a modified dynamic programming method, incorporating angular information to detect curved bone surfaces and using image fuzzification to identify missing bone structure. K-means clustering is then performed to initialize potential synovium areas by utilizing the identified bone contour as a boundary reference. After excluding invalid candidate regions, the final segmented synovium is identified by reconnecting the remaining candidate regions using level set evolution. Fifteen MCP and 15 MTP US images were analyzed in this study. For each image, segmentations by our proposed method as well as two sets of annotations performed by an experienced clinician at different time-points were acquired. Dice's coefficient is 0.77 ± 0.12 between the two sets of annotations. Similar Dice's coefficients are achieved between automated segmentation and either the first set of annotations (0.76 ± 0.12) or the second set of annotations (0.75 ± 0.11), with no significant difference (P = 0.77). These results verify that the accuracy of segmentation by our proposed method is comparable to that of the clinician. Therefore, reliable synovium identification can be made by our proposed method.
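
    The Dice similarity coefficient used above to compare automated and manual synovium outlines can be computed as in the following sketch, which assumes the two segmentations are available as binary masks of identical shape.

    # Dice similarity coefficient between two binary segmentation masks.
    # The masks are assumed to be boolean NumPy arrays of the same shape.
    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        mask_a = mask_a.astype(bool)
        mask_b = mask_b.astype(bool)
        intersection = np.logical_and(mask_a, mask_b).sum()
        total = mask_a.sum() + mask_b.sum()
        return 2.0 * intersection / total if total > 0 else 1.0

    # toy example: two overlapping rectangles
    a = np.zeros((100, 100), dtype=bool); a[20:60, 20:60] = True
    b = np.zeros((100, 100), dtype=bool); b[30:70, 30:70] = True
    print(round(dice_coefficient(a, b), 3))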

  15. The UAB Informatics Institute and 2016 CEGS N-GRID de-identification shared task challenge.

    PubMed

    Bui, Duy Duc An; Wyatt, Mathew; Cimino, James J

    2017-11-01

    Clinical narratives (the text notes found in patients' medical records) are important information sources for secondary use in research. However, in order to protect patient privacy, they must be de-identified prior to use. Manual de-identification is considered to be the gold standard approach but is tedious, expensive, slow, and impractical for use with large-scale clinical data. Automated or semi-automated de-identification using computer algorithms is a potentially promising alternative. The Informatics Institute of the University of Alabama at Birmingham is applying de-identification to clinical data drawn from the UAB hospital's electronic medical records system before releasing them for research. We participated in the de-identification regular track of a shared task challenge organized by the Centers of Excellence in Genomic Science (CEGS) Neuropsychiatric Genome-Scale and RDoC Individualized Domains (N-GRID) to gain experience developing our own automatic de-identification tool. We focused on the popular and successful methods from previous challenges: rule-based, dictionary-matching, and machine-learning approaches. We also explored new techniques such as disambiguation rules and term ambiguity measurement, and applied a multi-pass sieve framework at a micro level. For the challenge's primary measure (strict entity), our submissions achieved competitive results (f-measures: 87.3%, 87.1%, and 86.7%). For our preferred measure (binary token HIPAA), our submissions achieved superior results (f-measures: 93.7%, 93.6%, and 93%). With these encouraging results, we gained the confidence to improve the tool and use it for the real de-identification task at the UAB Informatics Institute. Copyright © 2017 Elsevier Inc. All rights reserved.
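
    As a loose illustration of the rule-based and dictionary-matching flavour of de-identification mentioned above (far simpler than the full challenge systems), a few regular expressions and a small name dictionary can already replace obvious identifiers with surrogate tags; the patterns and dictionary below are illustrative only.

    # Deliberately simple rule-based de-identification: regular expressions catch
    # dates and phone numbers, a small dictionary catches surnames. Real systems
    # combine many more rules, dictionaries, and machine-learning models.
    import re

    NAME_DICTIONARY = {"smith", "johnson", "garcia"}   # illustrative surname list

    RULES = [
        (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
        (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),
    ]

    def deidentify(text):
        for pattern, tag in RULES:
            text = pattern.sub(tag, text)
        tokens = [("[NAME]" if t.lower().strip(".,;") in NAME_DICTIONARY else t)
                  for t in text.split()]
        return " ".join(tokens)

    print(deidentify("Mr. Smith was seen on 03/14/2015; call 555-123-4567 to follow up."))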

  16. Frapid: achieving full automation of FRAP for chemical probe validation

    PubMed Central

    Yapp, Clarence; Rogers, Catherine; Savitsky, Pavel; Philpott, Martin; Müller, Susanne

    2016-01-01

    Fluorescence Recovery After Photobleaching (FRAP) is an established method for validating chemical probes against the chromatin-reading bromodomains, but has so far required constant human supervision. Here, we present Frapid, an automated open-source implementation of FRAP that fully handles cell identification through fuzzy logic analysis, drug dispensing with a custom-built fluid handler, image acquisition and analysis, and reporting. We successfully tested Frapid on three bromodomains as well as, for the first time, on spindlin1 (SPIN1), a methyl-lysine binder. PMID:26977352

  17. Automated Title Page Cataloging: A Feasibility Study.

    ERIC Educational Resources Information Center

    Weibel, Stuart; And Others

    1989-01-01

    Describes the design of a prototype rule-based system for the automation of descriptive cataloging from title pages. The discussion covers the results of tests of the prototype, major impediments to automatic cataloging from title pages, and prospects for further progress. The rules implemented in the prototype are appended. (16 references)…

  18. Library Automation at the University for Development Studies: Challenges and Prospects

    ERIC Educational Resources Information Center

    Thompson, Edwin S.; Pwadura, Joana

    2014-01-01

    The automation of a library that basically aims at improving the management of the library's resources and increasing access to these same resources by users has caught on so well in the western world that virtually all academic libraries in that part of the world have automated most of their services. In Africa, however, several challenges are…

  19. Intraoperative Cochlear Implant Device Testing Utilizing an Automated Remote System: A Prospective Pilot Study.

    PubMed

    Lohmann, Amanda R; Carlson, Matthew L; Sladen, Douglas P

    2018-03-01

    Intraoperative cochlear implant device testing provides valuable information regarding device integrity and electrode position, and may assist with determining initial stimulation settings. Manual intraoperative device testing during cochlear implantation requires the time and expertise of a trained audiologist. The purpose of the current study is to investigate the feasibility of using automated remote intraoperative cochlear implant reverse telemetry testing as an alternative to standard testing. This prospective pilot study evaluated intraoperative remote automated impedance and Automatic Neural Response Telemetry (AutoNRT) testing in 34 consecutive cochlear implant surgeries using the Intraoperative Remote Assistant (Cochlear Nucleus CR120). In all cases, remote intraoperative device testing was performed by trained operating room staff. A comparison was made to the "gold standard" of manual testing by an experienced cochlear implant audiologist. Electrode position and absence of tip fold-over were confirmed using plain film x-ray. Automated remote reverse telemetry testing was successfully completed in all patients. Intraoperative x-ray demonstrated normal electrode position without tip fold-over. Average impedance values were significantly higher using standard testing versus CR120 remote testing (standard mean 10.7 kΩ, SD 1.2 vs. CR120 mean 7.5 kΩ, SD 0.7, p < 0.001). There was strong agreement between standard manual testing and remote automated testing with regard to the presence of open or short circuits along the array. There were, however, two cases in which standard testing identified an open circuit, whereas CR120 testing showed the circuit to be closed. Neural responses were successfully obtained in all patients using both systems. There was no difference in basal electrode responses (standard mean 195.0 μV, SD 14.10 vs. CR120 194.5 μV, SD 14.23; p = 0.7814); however, more favorable (lower μV amplitude) results were obtained with the remote

  20. Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.

    PubMed

    Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A

    2016-04-01

    Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population-based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. A brief review of machine vision in the context of automated wood identification systems

    Treesearch

    John C. Hermanson; Alex C. Wiedenhoeft

    2011-01-01

    The need for accurate and rapid field identification of wood to combat illegal logging around the world is outpacing the ability to train personnel to perform this task. Despite increased interest in non-anatomical (DNA, spectroscopic, chemical) methods for wood identification, anatomical characteristics are the least labile data that can be extracted from solid wood...

  2. Use of recombinant salmonella flagellar hook protein (flgk) for detection of anti-salmonella antibodies in chickens by automated capillary immunoassay

    USDA-ARS?s Scientific Manuscript database

    Background: Conventional immunoblot assays are a very useful tool for specific protein identification, but are tedious, labor-intensive and time-consuming. An automated capillary electrophoresis-based immunoblot assay called "Simple Western" has recently been developed that enables the protein sepa...

  3. An Automated Solar Synoptic Analysis Software System

    NASA Astrophysics Data System (ADS)

    Hong, S.; Lee, S.; Oh, S.; Kim, J.; Lee, J.; Kim, Y.; Lee, J.; Moon, Y.; Lee, D.

    2012-12-01

    We have developed an automated software system for identifying solar active regions, filament channels, and coronal holes, the three major solar sources of space weather. Space weather forecasters at the NOAA Space Weather Prediction Center produce solar synoptic drawings on a daily basis to predict solar activities, i.e., solar flares, filament eruptions, high-speed solar wind streams, and co-rotating interaction regions, as well as their possible effects on the Earth. In an attempt to emulate this process in a fully automated and consistent way, we developed a software system named ASSA (Automated Solar Synoptic Analysis). When identifying solar active regions, ASSA uses high-resolution SDO HMI intensitygrams and magnetograms as inputs and provides the McIntosh classification and Mt. Wilson magnetic classification of each active region by applying appropriate image processing techniques such as thresholding, morphology extraction, and region growing. At the same time, it also extracts morphological and physical properties of active regions in a quantitative way for the short-term prediction of flares and CMEs. When identifying filament channels and coronal holes, images from the global H-alpha network and SDO AIA 193 are used for morphological identification, with SDO HMI magnetograms for quantitative verification. The output results of ASSA are routinely checked and validated against NOAA's daily SRS (Solar Region Summary) and UCOHO (URSIgram code for coronal hole information). A couple of preliminary scientific results are presented using available output results. ASSA will be deployed at the Korean Space Weather Center and serve its customers operationally by the end of 2012.
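
    The generic thresholding, morphology, and region-extraction chain mentioned above can be sketched as follows; a synthetic array stands in for an SDO/HMI magnetogram, and the code illustrates the technique rather than the ASSA implementation itself.

    # Rough sketch of a thresholding / morphological cleanup / connected-region
    # chain on a synthetic 2-D array standing in for a magnetogram (values in Gauss).
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)
    magnetogram = rng.normal(0, 20, size=(256, 256))        # quiet-Sun noise
    magnetogram[100:120, 80:110] += 400                      # synthetic active region

    strong_field = np.abs(magnetogram) > 100                              # thresholding
    strong_field = ndimage.binary_opening(strong_field, iterations=2)     # morphology
    labels, n_regions = ndimage.label(strong_field)                       # connected regions

    for region_id in range(1, n_regions + 1):
        area_pixels = int((labels == region_id).sum())
        peak_field = float(np.abs(magnetogram[labels == region_id]).max())
        print(f"region {region_id}: area={area_pixels} px, peak |B|={peak_field:.0f} G")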

  4. Preface to the special section on human factors and automation in vehicles: designing highly automated vehicles with the driver in mind.

    PubMed

    Merat, Natasha; Lee, John D

    2012-10-01

    This special section brings together diverse research regarding driver interaction with advanced automotive technology to guide design of increasingly automated vehicles. Rapidly evolving vehicle automation will likely change cars and trucks more in the next 5 years than the preceding 50, radically redefining what it means to drive. This special section includes 10 articles from European and North American researchers reporting simulator and naturalistic driving studies. Little research has considered the consequences of fully automated driving, with most focusing on lane-keeping and speed control systems individually. The studies reveal two underlying design philosophies: automate driving versus support driving. Results of several studies, consistent with previous research in other domains, suggest that the automate philosophy can delay driver responses to incidents in which the driver has to intervene and take control from the automation. Understanding how to orchestrate the transfer or sharing of control between the system and the driver, particularly in critical incidents, emerges as a central challenge. Designers should not assume that automation can substitute seamlessly for a human driver, nor can they assume that the driver can safely accommodate the limitations of automation. Designers, policy makers, and researchers must give careful consideration to what role the person should have in highly automated vehicles and how to support the driver if the driver is to be responsible for vehicle control. As in other domains, driving safety increasingly depends on the combined performance of the human and automation, and successful designs will depend on recognizing and supporting the new roles of the driver.

  5. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies.

    PubMed

    Atkinson, Jonathan A; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E; Griffiths, Marcus; Wells, Darren M

    2017-10-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. © The Authors 2017. Published by Oxford University Press.
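
    The train-on-a-subset strategy described above can be sketched with scikit-learn as follows; synthetic numbers stand in for the image descriptors and for the architectural traits measured on the annotated subset.

    # Minimal sketch: fit a Random Forest on a manually annotated subset, then
    # predict an architectural trait for the remaining images from their
    # automatically extracted descriptors. All data here are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(42)
    descriptors = rng.normal(size=(1000, 40))    # 40 image descriptors per image
    traits = descriptors[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=1000)

    annotated = slice(0, 200)       # subset with semi-automated measurements
    remaining = slice(200, None)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(descriptors[annotated], traits[annotated])
    predicted_traits = model.predict(descriptors[remaining])
    print("predicted traits for", predicted_traits.shape[0], "unannotated images")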

  6. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies

    PubMed Central

    Atkinson, Jonathan A.; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E.; Griffiths, Marcus

    2017-01-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. PMID:29020748

  7. Automated feature detection and identification in digital point-ordered signals

    DOEpatents

    Oppenlander, Jane E.; Loomis, Kent C.; Brudnoy, David M.; Levy, Arthur J.

    1998-01-01

    A computer-based automated method to detect and identify features in digital point-ordered signals. The method is used for processing of non-destructive test signals, such as eddy current signals obtained from calibration standards. The signals are first automatically processed to remove noise and to determine a baseline. Next, features are detected in the signals using mathematical morphology filters. Finally, verification of the features is made using an expert system of pattern recognition methods and geometric criteria. The method has the advantage that standard features can be located without prior knowledge of the number or sequence of the features. Further advantages are that standard features can be differentiated from irrelevant signal features such as noise, and detected features are automatically verified by parameters extracted from the signals. The method proceeds fully automatically without initial operator set-up and without subjective operator feature judgement.
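
    In the same spirit as the patented method (though not its actual implementation), grey-scale morphology can be used to estimate a baseline in a point-ordered signal so that residual peaks can be flagged as candidate features, as in this sketch with synthetic data and arbitrary thresholds.

    # Sketch of feature detection in a 1-D point-ordered signal: a morphological
    # opening estimates the drifting baseline, and residual peaks above a threshold
    # are flagged as candidate features.
    import numpy as np
    from scipy import ndimage, signal

    x = np.linspace(0, 10, 1000)
    trace = 0.2 * x + 0.05 * np.random.default_rng(0).normal(size=x.size)  # baseline + noise
    for center in (2.0, 5.5, 8.0):                                          # three calibration features
        trace += np.exp(-((x - center) ** 2) / 0.01)

    baseline = ndimage.grey_opening(trace, size=80)                  # morphological baseline estimate
    residual = trace - baseline
    peaks, _ = signal.find_peaks(residual, height=0.5, distance=50)  # candidate features
    print("candidate feature positions:", np.round(x[peaks], 2))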

  8. Quantifying Vocal Mimicry in the Greater Racket-Tailed Drongo: A Comparison of Automated Methods and Human Assessment

    PubMed Central

    Agnihotri, Samira; Sundeep, P. V. D. S.; Seelamantula, Chandra Sekhar; Balakrishnan, Rohini

    2014-01-01

    Objective identification and description of mimicked calls are a primary component of any study on avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis in combination with various distance metrics to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods also misclassified different subsets of calls, and we achieved a maximum accuracy of ninety-five per cent only when we combined the results of both methods. This study is the first to use Mel-frequency Cepstral Coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential. PMID:24603717
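
    A compact sketch of the spectral-feature comparison described above is given below: MFCCs are extracted from two recordings and compared with a cosine distance between their time-averaged feature vectors. The file names are placeholders, librosa is assumed to be available, and the published study additionally used RASTA-filtered LPC features and several other distance metrics.

    # MFCC extraction and a simple distance between two recordings. The .wav file
    # names are placeholders for illustration only.
    import numpy as np
    import librosa
    from scipy.spatial.distance import cosine

    def mean_mfcc(path, n_mfcc=13):
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return mfcc.mean(axis=1)                 # average over time frames

    drongo_call = mean_mfcc("drongo_mimicked_call.wav")     # placeholder file name
    model_call = mean_mfcc("putative_model_species.wav")    # placeholder file name
    print("cosine distance between calls:", cosine(drongo_call, model_call))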

  9. Psychosocial factors associated with intended use of automated vehicles: A simulated driving study.

    PubMed

    Buckley, Lisa; Kaye, Sherrie-Anne; Pradhan, Anuj K

    2018-06-01

    This study applied the Theory of Planned Behavior (TPB) and the Technology Acceptance Model (TAM) to assess drivers' intended use of automated vehicles (AVs) after undertaking a simulated driving task. In addition, this study explored the potential for trust to account for additional variance beyond the psychosocial factors in TPB and TAM. Seventy-four participants (51% female) aged between 25 and 64 years (M = 42.8, SD = 12.9) undertook a 20 min simulated experimental drive in which participants experienced periods of automated driving and manual control. A survey task followed. A hierarchical regression analysis revealed that the TPB constructs (attitude toward the behavior, subjective norms, and perceived behavioral control) were significant predictors of intentions to use AVs. In addition, there was partial support for the test of TAM, with ease of use (but not usefulness) predicting intended use of AVs (SAE Level 3). Trust contributed variance to both models beyond TPB or TAM constructs. The findings provide an important insight into factors that might reflect intended use of vehicles that are primarily automated (longitudinal, lateral, and manoeuvre controls) but require and allow drivers to have periods of manual control. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Clinical Chemistry Laboratory Automation in the 21st Century - Amat Victoria curam (Victory loves careful preparation)

    PubMed Central

    Armbruster, David A; Overcash, David R; Reyes, Jaime

    2014-01-01

    The era of automation arrived with the introduction of the AutoAnalyzer using continuous flow analysis and the Robot Chemist that automated the traditional manual analytical steps. Successive generations of stand-alone analysers increased analytical speed, offered the ability to test high volumes of patient specimens, and provided large assay menus. A dichotomy developed, with a group of analysers devoted to performing routine clinical chemistry tests and another group dedicated to performing immunoassays using a variety of methodologies. Development of integrated systems greatly improved the analytical phase of clinical laboratory testing and further automation was developed for pre-analytical procedures, such as sample identification, sorting, and centrifugation, and post-analytical procedures, such as specimen storage and archiving. All phases of testing were ultimately combined in total laboratory automation (TLA) through which all modules involved are physically linked by some kind of track system, moving samples through the process from beginning to end. A newer and very powerful analytical methodology is liquid chromatography-mass spectrometry/mass spectrometry (LC-MS/MS). LC-MS/MS has been automated but a future automation challenge will be to incorporate LC-MS/MS into TLA configurations. Another important facet of automation is informatics, including middleware, which interfaces the analyser software to a laboratory information system (LIS) and/or hospital information system (HIS). This software includes control of the overall operation of a TLA configuration and combines analytical results with patient demographic information to provide additional clinically useful information. This review describes automation relevant to clinical chemistry, but it must be recognised that automation applies to other specialties in the laboratory, e.g. haematology, urinalysis, microbiology. It is a given that automation will continue to evolve in the clinical laboratory

  11. Clinical Chemistry Laboratory Automation in the 21st Century - Amat Victoria curam (Victory loves careful preparation).

    PubMed

    Armbruster, David A; Overcash, David R; Reyes, Jaime

    2014-08-01

    The era of automation arrived with the introduction of the AutoAnalyzer using continuous flow analysis and the Robot Chemist that automated the traditional manual analytical steps. Successive generations of stand-alone analysers increased analytical speed, offered the ability to test high volumes of patient specimens, and provided large assay menus. A dichotomy developed, with a group of analysers devoted to performing routine clinical chemistry tests and another group dedicated to performing immunoassays using a variety of methodologies. Development of integrated systems greatly improved the analytical phase of clinical laboratory testing and further automation was developed for pre-analytical procedures, such as sample identification, sorting, and centrifugation, and post-analytical procedures, such as specimen storage and archiving. All phases of testing were ultimately combined in total laboratory automation (TLA) through which all modules involved are physically linked by some kind of track system, moving samples through the process from beginning to end. A newer and very powerful analytical methodology is liquid chromatography-mass spectrometry/mass spectrometry (LC-MS/MS). LC-MS/MS has been automated but a future automation challenge will be to incorporate LC-MS/MS into TLA configurations. Another important facet of automation is informatics, including middleware, which interfaces the analyser software to a laboratory information system (LIS) and/or hospital information system (HIS). This software includes control of the overall operation of a TLA configuration and combines analytical results with patient demographic information to provide additional clinically useful information. This review describes automation relevant to clinical chemistry, but it must be recognised that automation applies to other specialties in the laboratory, e.g. haematology, urinalysis, microbiology. It is a given that automation will continue to evolve in the clinical laboratory

  12. Automation and decision support in interactive consumer products.

    PubMed

    Sauer, J; Rüttinger, B

    2007-06-01

    This article presents two empirical studies (n = 30, n = 48) that are concerned with different forms of automation in interactive consumer products. The goal of the studies was to evaluate the effectiveness of two types of automation: perceptual augmentation (i.e. supporting users' information acquisition and analysis); and control integration (i.e. supporting users' action selection and implementation). Furthermore, the effectiveness of on-product information (i.e. labels attached to product) in supporting automation design was evaluated. The findings suggested greater benefits for automation in control integration than in perceptual augmentation alone, which may be partly due to the specific requirements of consumer product usage. If employed appropriately, on-product information can be a helpful means of information conveyance. The article discusses the implications of automation design in interactive consumer products while drawing on automation models from the work environment.

  13. Identification of species based on DNA barcode using k-mer feature vector and Random forest classifier.

    PubMed

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R

    2016-11-05

    DNA barcoding is a molecular diagnostic method that allows automated and accurate identification of species based on a short and standardized fragment of DNA. To this end, an attempt has been made in this study to develop a computational approach for identifying a species by comparing its barcode with the barcode sequences of known species present in a reference library. Each barcode sequence was first mapped onto a numeric feature vector based on k-mer frequencies, and Random forest methodology was then employed on the transformed dataset for species identification. The proposed approach outperformed similarity-based, tree-based, and diagnostic-based approaches and was found to be comparable with existing supervised learning-based approaches in terms of species identification success rate when compared using real and simulated datasets. Based on the proposed approach, an online web interface, SPIDBAR, has also been developed and made freely available at http://cabgrid.res.in:8080/spidbar/ for species identification by taxonomists. Copyright © 2016 Elsevier B.V. All rights reserved.
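
    The k-mer feature mapping followed by Random forest classification described above can be sketched as follows; the toy sequences and species labels are invented and far shorter than real barcodes.

    # Map each barcode to a k-mer frequency vector, then classify with a Random Forest.
    from itertools import product
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    K = 3
    KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
    KMER_INDEX = {kmer: i for i, kmer in enumerate(KMERS)}

    def kmer_vector(seq):
        counts = np.zeros(len(KMERS))
        for i in range(len(seq) - K + 1):
            kmer = seq[i:i + K]
            if kmer in KMER_INDEX:
                counts[KMER_INDEX[kmer]] += 1
        return counts / max(counts.sum(), 1)     # relative k-mer frequencies

    reference = {"ACGTACGTACGTAAA": "species_A", "TTTTGGGGCCCCAAA": "species_B",
                 "ACGTACGAACGTAAA": "species_A", "TTTTGGGTCCCCAAA": "species_B"}
    X = np.array([kmer_vector(s) for s in reference])
    y = list(reference.values())

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    query = "ACGTACGTACGTAAC"
    print("predicted species:", clf.predict([kmer_vector(query)])[0])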

  14. The Orion GN and C Data-Driven Flight Software Architecture for Automated Sequencing and Fault Recovery

    NASA Technical Reports Server (NTRS)

    King, Ellis; Hart, Jeremy; Odegard, Ryan

    2010-01-01

    The Orion Crew Exploration Vehicle (CEV) is being designed to include significantly more automation capability than either the Space Shuttle or the International Space Station (ISS). In particular, the vehicle flight software has requirements to accommodate increasingly automated missions throughout all phases of flight. A data-driven flight software architecture will provide an evolvable automation capability to sequence through Guidance, Navigation & Control (GN&C) flight software modes and configurations while maintaining the required flexibility and human control over the automation. This flexibility is a key aspect needed to address the maturation of operational concepts, to permit ground and crew operators to gain trust in the system and mitigate unpredictability in human spaceflight. To allow for mission flexibility and reconfigurability, a data-driven approach is being taken to load the mission event plan as well as the flight software artifacts associated with the GN&C subsystem. A database of GN&C-level sequencing data is presented which manages and tracks the mission-specific and algorithm parameters to provide a capability to schedule GN&C events within mission segments. The flight software data schema for performing automated mission sequencing is presented with a concept of operations for interactions with ground and onboard crew members. A prototype architecture for fault identification, isolation and recovery interactions with the automation software is presented and discussed as a forward work item.

  15. Fully automated contour detection of the ascending aorta in cardiac 2D phase-contrast MRI.

    PubMed

    Codari, Marina; Scarabello, Marco; Secchi, Francesco; Sforza, Chiarella; Baselli, Giuseppe; Sardanelli, Francesco

    2018-04-01

    In this study we proposed a fully automated method for localizing and segmenting the ascending aortic lumen with phase-contrast magnetic resonance imaging (PC-MRI). Twenty-five phase-contrast series were randomly selected out of a large population dataset of patients whose cardiac MRI examination, performed from September 2008 to October 2013, was unremarkable. The local Ethical Committee approved this retrospective study. The ascending aorta was automatically identified on each phase of the cardiac cycle using a priori knowledge of aortic geometry. The frame that maximized the area, eccentricity, and solidity parameters was chosen for unsupervised initialization. Aortic segmentation was performed on each frame using the active contours without edges technique. The entire algorithm was developed using Matlab R2016b. To validate the proposed method, the manual segmentation performed by a highly experienced operator was used. Dice similarity coefficient, Bland-Altman analysis, and Pearson's correlation coefficient were used as performance metrics. Comparing automated and manual segmentation of the aortic lumen on 714 images, Bland-Altman analysis showed a bias of -6.68 mm², a coefficient of repeatability of 91.22 mm², a mean area measurement of 581.40 mm², and a reproducibility of 85%. Automated and manual segmentation were highly correlated (R=0.98). The Dice similarity coefficient versus the manual reference standard was 94.6 ± 2.1% (mean ± standard deviation). A fully automated and robust method for the identification and segmentation of the ascending aorta on PC-MRI was developed. Its application to patients with a variety of pathologic conditions is advisable. Copyright © 2017 Elsevier Inc. All rights reserved.
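
    The "active contours without edges" (Chan-Vese) segmentation step named above is available in scikit-image; the sketch below applies it with default settings to a synthetic bright disc standing in for the aortic lumen, whereas the published method adds prior knowledge of aortic geometry for localization and initialization.

    # Chan-Vese ("active contours without edges") segmentation of a synthetic
    # bright disc in a noisy magnitude-style image, using scikit-image defaults.
    import numpy as np
    from skimage.segmentation import chan_vese

    yy, xx = np.mgrid[0:128, 0:128]
    image = np.where((yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2, 1.0, 0.2)
    image += np.random.default_rng(0).normal(scale=0.05, size=image.shape)

    lumen_mask = chan_vese(image)                      # boolean segmentation mask
    if image[lumen_mask].mean() < image[~lumen_mask].mean():
        lumen_mask = ~lumen_mask                       # keep the brighter phase as the lumen

    print("segmented lumen area:", int(lumen_mask.sum()), "pixels")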

  16. Three-dimensional electron diffraction as a complementary technique to powder X-ray diffraction for phase identification and structure solution of powders.

    PubMed

    Yun, Yifeng; Zou, Xiaodong; Hovmöller, Sven; Wan, Wei

    2015-03-01

    Phase identification and structure determination are important and widely used techniques in chemistry, physics and materials science. Recently, two methods for automated three-dimensional electron diffraction (ED) data collection, namely automated diffraction tomography (ADT) and rotation electron diffraction (RED), have been developed. Compared with X-ray diffraction (XRD) and two-dimensional zonal ED, three-dimensional ED methods have many advantages in identifying phases and determining unknown structures. Almost complete three-dimensional ED data can be collected using the ADT and RED methods. Since each ED pattern is usually measured off the zone axes by three-dimensional ED methods, dynamic effects are much reduced compared with zonal ED patterns. Data collection is easy and fast, and can start at any arbitrary orientation of the crystal, which facilitates automation. Three-dimensional ED is a powerful technique for structure identification and structure solution from individual nano- or micron-sized particles, while powder X-ray diffraction (PXRD) provides information from all phases present in a sample. ED suffers from dynamic scattering, while PXRD data are kinematic. Three-dimensional ED methods and PXRD are complementary and their combinations are promising for studying multiphase samples and complicated crystal structures. Here, two three-dimensional ED methods, ADT and RED, are described. Examples are given of combinations of three-dimensional ED methods and PXRD for phase identification and structure determination over a large number of different materials, from Ni-Se-O-Cl crystals, zeolites, germanates, metal-organic frameworks and organic compounds to intermetallics with modulated structures. It is shown that three-dimensional ED is now as feasible as X-ray diffraction for phase identification and structure solution, but still needs further development in order to be as accurate as X-ray diffraction. It is expected that three-dimensional ED methods

  17. Automated lattice data generation

    NASA Astrophysics Data System (ADS)

    Ayyar, Venkitesh; Hackett, Daniel C.; Jay, William I.; Neil, Ethan T.

    2018-03-01

    The process of generating ensembles of gauge configurations (and measuring various observables over them) can be tedious and error-prone when done "by hand". In practice, most of this procedure can be automated with the use of a workflow manager. We discuss how this automation can be accomplished using Taxi, a minimal Python-based workflow manager built for generating lattice data. We present a case study demonstrating this technology.

  18. Rural automated highway systems case study : greater Yellowstone rural ITS corridor : final report

    DOT National Transportation Integrated Search

    1998-01-01

    In cooperation with the National Automated Highway System Consortium (NAHSC), case studies are being conducted on existing transportation corridors to determine the feasibility of AHS. Initial activities by the NAHSC have focused on urbanized areas. ...

  19. 78 FR 66039 - Modification of National Customs Automation Program Test Concerning Automated Commercial...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-04

    ... Customs Automation Program Test Concerning Automated Commercial Environment (ACE) Cargo Release (Formerly... Simplified Entry functionality in the Automated Commercial Environment (ACE). Originally, the test was known...) test concerning Automated Commercial Environment (ACE) Simplified Entry (SE test) functionality is...

  20. Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy

    NASA Astrophysics Data System (ADS)

    Bucht, Curry; Söderberg, Per; Manneberg, Göran

    2010-02-01

    The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may threaten the endothelial cell density to such an extent that the optical property of the cornea and thus clear eyesight is threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development and use of fully automated analysis of a very large range of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images, normalizing lighting and contrast. The digitally enhanced images of the corneal endothelium were Fourier transformed, using the fast Fourier transform (FFT), and stored as new images. Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image was used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Results in the form of estimated cell density of the corneal endothelium were obtained, using fully automated analysis software on 292 images captured by CSM. The cell density obtained by the
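
    The Fourier idea described above can be sketched as follows: the quasi-regular endothelial mosaic produces a ring of energy in the 2-D power spectrum whose radius corresponds to the dominant cell spacing. A synthetic pattern with a known 16-pixel period stands in for a specular-microscope image; converting the recovered spacing to cells/mm² would additionally require the physical pixel size, which is not assumed here.

    # Estimate the dominant spatial period of a quasi-periodic image from the radius
    # of the ring of energy in its 2-D power spectrum.
    import numpy as np

    size, period = 256, 16
    yy, xx = np.mgrid[0:size, 0:size]
    image = np.cos(2 * np.pi * xx / period) + np.cos(2 * np.pi * yy / period)
    image += np.random.default_rng(0).normal(scale=0.3, size=image.shape)

    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean()))) ** 2
    cy, cx = size // 2, size // 2
    radius = np.hypot(yy - cy, xx - cx).astype(int)

    radial_power = np.bincount(radius.ravel(), weights=spectrum.ravel())
    radial_power[:2] = 0                            # suppress the DC region
    dominant_freq = np.argmax(radial_power)         # cycles per image width
    estimated_spacing = size / dominant_freq        # pixels per cell row
    print("estimated cell spacing:", estimated_spacing, "pixels (true period: 16)")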

  1. Migration monitoring with automated technology

    Treesearch

    Rhonda L. Millikin

    2005-01-01

    Automated technology can supplement ground-based methods of migration monitoring by providing: (1) unbiased and automated sampling; (2) independent validation of current methods; (3) a larger sample area for landscape-level analysis of habitat selection for stopover, and (4) an opportunity to study flight behavior. In particular, radar-acoustic sensor fusion can...

  2. Automated knowledge generation

    NASA Technical Reports Server (NTRS)

    Myler, Harley R.; Gonzalez, Avelino J.

    1988-01-01

    The general objective of the NASA/UCF Automated Knowledge Generation Project was the development of an intelligent software system that could access CAD design data bases, interpret them, and generate a diagnostic knowledge base in the form of a system model. The initial area of concentration is in the diagnosis of the process control system using the Knowledge-based Autonomous Test Engineer (KATE) diagnostic system. A secondary objective was the study of general problems of automated knowledge generation. A prototype was developed, based on an object-oriented language (Flavors).

  3. Benefits of smart pumps for automated changeovers of vasoactive drug infusion pumps: a quasi-experimental study.

    PubMed

    Cour, M; Hernu, R; Bénet, T; Robert, J M; Regad, D; Chabert, B; Malatray, A; Conrozier, S; Serra, P; Lassaigne, M; Vanhems, P; Argaud, L

    2013-11-01

    Manual changeover of vasoactive drug infusion pumps (CVIP) frequently leads to haemodynamic instability. Some of the newest smart pumps allow automated CVIP. The aim of this study was to compare automated CVIP with manual 'Quick Change' relays. We performed a prospective, quasi-experimental study in a university-affiliated intensive care unit (ICU). All adult patients receiving continuous i.v. infusion of vasoactive drugs were included. CVIP were successively performed manually (Phase 1) and automatically (Phase 2) during two 6-month periods. The primary endpoint was the frequency of haemodynamic incidents related to the relays, which were defined as variations of mean arterial pressure >15 mm Hg or heart rate >15 bpm. The secondary endpoints were the nursing time dedicated to relays and the number of interruptions in care because of CVIP. A multivariate mixed-effects logistic regression was fitted for the analysis. We studied 1329 relays (Phase 1: 681, Phase 2: 648) from 133 patients (Phase 1: 63, Phase 2: 70). Incidents related to CVIP decreased from 137 (20%) in Phase 1 to 73 (11%) in Phase 2 (P<0.001). Automated relays were independently associated with a 49% risk reduction of CVIP-induced incidents (adjusted OR=0.51, 95% confidence interval 0.34-0.77, P=0.001). Time dedicated to the relays and the number of interruptions in care to manage CVIP were also significantly reduced with automated relays vs manual relays (P=0.001). These results demonstrate the benefits of automated CVIP using smart pumps in limiting the frequency of haemodynamic incidents related to relays and in reducing the nursing workload.

  4. System reliability, performance and trust in adaptable automation.

    PubMed

    Chavaillaz, Alain; Wastell, David; Sauer, Jürgen

    2016-01-01

    The present study examined the effects of reduced system reliability on operator performance and automation management in an adaptable automation environment. 39 operators were randomly assigned to one of three experimental groups: low (60%), medium (80%), and high (100%) reliability of automation support. The support system provided five incremental levels of automation which operators could freely select according to their needs. After 3 h of training on a simulated process control task (AutoCAMS) in which the automation worked infallibly, operator performance and automation management were measured during a 2.5-h testing session. Trust and workload were also assessed through questionnaires. Results showed that although reduced system reliability resulted in lower levels of trust towards automation, there were no corresponding differences in the operators' reliance on automation. While operators showed overall a noteworthy ability to cope with automation failure, there were, however, decrements in diagnostic speed and prospective memory with lower reliability. Copyright © 2015. Published by Elsevier Ltd.

  5. An Experimental Study of the Effects of Automation on Pilot Situational Awareness in the Datalink ATC Environment

    NASA Technical Reports Server (NTRS)

    Hahn, Edward C.; Hansman, R. John, Jr.

    1992-01-01

    An experiment to study how automation, when used in conjunction with datalink for the delivery of air traffic control (ATC) clearance amendments, affects the situational awareness of aircrews was conducted. The study was focused on the relationship of situational awareness to automated Flight Management System (FMS) programming and the readback of ATC clearances. Situational awareness was tested by issuing nominally unacceptable ATC clearances and measuring whether the error was detected by the subject pilots. The experiment also varied the mode of clearance delivery: Verbal, Textual, and Graphical. The error detection performance and pilot preference results indicate that the automated programming of the FMS may be superior to manual programming. It is believed that automated FMS programming may relieve some of the cognitive load, allowing pilots to concentrate on the strategic implications of a clearance amendment. Also, readback appears to have value, but the small sample size precludes a definite conclusion. Furthermore, because textual and graphical modes of delivery offer different but complementary advantages for cognitive processing, a combination of these modes of delivery may be advantageous in a datalink presentation.

  6. Comparison of methods for the identification of microorganisms isolated from blood cultures.

    PubMed

    Monteiro, Aydir Cecília Marinho; Fortaleza, Carlos Magno Castelo Branco; Ferreira, Adriano Martison; Cavalcante, Ricardo de Souza; Mondelli, Alessandro Lia; Bagagli, Eduardo; da Cunha, Maria de Lourdes Ribeiro de Souza

    2016-08-05

    Bloodstream infections are responsible for thousands of deaths each year. The rapid identification of the microorganisms causing these infections permits correct therapeutic management that will improve the prognosis of the patient. In an attempt to reduce the time spent on this step, microorganism identification devices have been developed, including the VITEK® 2 system, which is currently used in routine clinical microbiology laboratories. This study evaluated the accuracy of the VITEK® 2 system in the identification of 400 microorganisms isolated from blood cultures and compared the results to those obtained with conventional phenotypic and genotypic methods. In parallel to the phenotypic identification methods, the DNA of these microorganisms was extracted directly from the blood culture bottles for genotypic identification by the polymerase chain reaction (PCR) and DNA sequencing. The automated VITEK® 2 system correctly identified 94.7% (379/400) of the isolates. The YST and GN cards resulted in 100% correct identifications of yeasts (15/15) and Gram-negative bacilli (165/165), respectively. The GP card correctly identified 92.6% (199/215) of Gram-positive cocci, while the ANC card was unable to correctly identify any Gram-positive bacilli (0/5). The performance of the VITEK® 2 system was considered acceptable and statistical analysis showed that the system is a suitable option for routine clinical microbiology laboratories to identify different microorganisms.

  7. Comparison of traditional gas chromatography (GC), headspace GC, and the microbial identification library GC system for the identification of Clostridium difficile.

    PubMed Central

    Cundy, K V; Willard, K E; Valeri, L J; Shanholtzer, C J; Singh, J; Peterson, L R

    1991-01-01

    Three gas chromatography (GC) methods were compared for the identification of 52 clinical Clostridium difficile isolates, as well as 17 non-C. difficile Clostridium isolates. Headspace GC and Microbial Identification System (MIS) GC, an automated system which utilizes a software library developed at the Virginia Polytechnic Institute to identify organisms based on the fatty acids extracted from the bacterial cell wall, were compared against the reference method of traditional GC. Headspace GC and MIS were of approximately equivalent accuracy in identifying the 52 C. difficile isolates (52 of 52 versus 51 of 52, respectively). However, 7 of 52 organisms required repeated sample preparation before an identification was achieved by the MIS method. Both systems effectively differentiated C. difficile from non-C. difficile clostridia, although the MIS method correctly identified only 9 of 17. We conclude that the headspace GC system is an accurate method of C. difficile identification, which requires only one-fifth of the sample preparation time of MIS GC and one-half of the sample preparation time of traditional GC. PMID:2007632

  8. Role of home automation in distribution automation and automated meter reading. Topical report, December 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, K.W.

    1994-12-01

    This is one of a series of topical reports dealing with the strategic, technical, and market development of home automation. Particular emphasis is placed upon identifying those aspects of home automation that will impact the gas industry and gas products. Communication standards, market drivers, key organizations, technical implementation, product opportunities, and market growth projections will all be addressed in this or subsequent reports. These reports will also discuss how the gas industry and gas-fired equipment can use home automation technology to benefit the consumer.

  9. Automation in Distance Learning: An Empirical Study of Unlearning and Academic Identity Change Linked to Automation of Student Messaging within Distance Learning

    ERIC Educational Resources Information Center

    Collins, Hilary; Glover, Hayley; Myers, Fran; Watson, Mor

    2016-01-01

    This paper explores the unlearning and learning undertaken by adjuncts (Associate Lecturers) during the introduction of automated messaging by the university as part replacement of adjunct pastoral support for students. Automated messages were introduced by the University to standardize the student experience in terms of qualification…

  10. Identifying Requirements for Effective Human-Automation Teamwork

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey C. Joe; John O'Hara; Heather D. Medema

    Previous studies have shown that poorly designed human-automation collaboration, such as poorly designed communication protocols, often leads to problems for the human operators, such as lack of vigilance, complacency, and loss of skills. These problems often lead to suboptimal system performance. To address this situation, a considerable amount of research has been conducted to improve human-automation collaboration and to make automation function better as a “team player.” Much of this research is based on an understanding of what it means to be a good team player from the perspective of a human team. However, the research is often based on a simplified view of human teams and teamwork. In this study, we sought to better understand the capabilities and limitations of automation from the standpoint of human teams. We first examined human teams to identify the principles for effective teamwork. We next reviewed the research on integrating automation agents and human agents into mixed agent teams to identify the limitations of automation agents to conform to teamwork principles. This research resulted in insights that can lead to more effective human-automation collaboration by enabling a more realistic set of requirements to be developed based on the strengths and limitations of all agents.

  11. Advanced automation for in-space vehicle processing

    NASA Technical Reports Server (NTRS)

    Sklar, Michael; Wegerif, D.

    1990-01-01

    The primary objective of this 3-year planned study is to assure that the fully evolved Space Station Freedom (SSF) can support automated processing of exploratory mission vehicles. Current study assessments show that the extravehicular activity (EVA) and, to some extent, intravehicular activity (IVA) manpower required for processing tasks far exceeds the available manpower. Furthermore, many processing tasks are either hazardous operations or exceed EVA capability. Thus, automation is essential for SSF transportation node functionality. Here, advanced automation represents the replacement of human-performed tasks beyond the planned baseline automated tasks. Both physical tasks such as manipulation, assembly and actuation, and cognitive tasks such as visual inspection, monitoring and diagnosis, and task planning are considered. During this first year of activity both the Phobos/Gateway Mars Expedition and Lunar Evolution missions proposed by the Office of Exploration have been evaluated. A methodology for choosing optimal tasks to be automated has been developed. Processing tasks for both missions have been ranked on the basis of automation potential. The underlying concept in evaluating and describing processing tasks has been the use of a common set of 'Primitive' task descriptions. Primitive or standard tasks have been developed both for manual or crew processing and for automated machine processing.

  12. Automated choroidal neovascularization detection algorithm for optical coherence tomography angiography.

    PubMed

    Liu, Li; Gao, Simon S; Bailey, Steven T; Huang, David; Li, Dengwang; Jia, Yali

    2015-09-01

    Optical coherence tomography angiography has recently been used to visualize choroidal neovascularization (CNV) in participants with age-related macular degeneration. Identification and quantification of CNV area is important clinically for disease assessment. An automated algorithm for CNV area detection is presented in this article. It relies on denoising and a saliency detection model to overcome issues such as projection artifacts and the heterogeneity of CNV. Qualitative and quantitative evaluations were performed on scans of 7 participants. Results from the algorithm agreed well with manual delineation of CNV area.

  13. Identification and antimicrobial susceptibility testing of Staphylococcus vitulinus by the BD phoenix automated microbiology system.

    PubMed

    Cirković, Ivana; Hauschild, Tomasz; Jezek, Petr; Dimitrijević, Vladimir; Vuković, Dragana; Stepanović, Srdjan

    2008-08-01

    This study evaluated the performance of the BD Phoenix system for the identification (ID) and antimicrobial susceptibility testing (AST) of Staphylococcus vitulinus. Of the 10 S. vitulinus isolates included in the study, 2 were obtained from the Czech Collection of Microorganisms, 5 from the environment, 2 from human clinical samples, and 1 from an animal source. The results of conventional biochemical and molecular tests were used for the reference method for ID, while antimicrobial susceptibility testing performed in accordance with Clinical and Laboratory Standards Institute recommendations and PCR for the mecA gene were the reference for AST. Three isolates were incorrectly identified by the BD Phoenix system; one of these was incorrectly identified to the genus level, and two to the species level. The results of AST by the BD Phoenix system were in agreement with those by the reference method used. While the results of susceptibility testing compared favorably, the 70% accuracy of the Phoenix system for identification of this unusual staphylococcal species was not fully satisfactory.

  14. Public Library Automation Report: 1984.

    ERIC Educational Resources Information Center

    Gotanda, Masae

    Data processing was introduced to public libraries in Hawaii in 1973 with a feasibility study which outlined the candidate areas for automation. Since then, the Office of Library Services has automated the order procedures for one of the largest book processing centers for public libraries in the country; created one of the first COM…

  15. Social aspects of automation: Some critical insights

    NASA Astrophysics Data System (ADS)

    Nouzil, Ibrahim; Raza, Ali; Pervaiz, Salman

    2017-09-01

    Sustainable development has been recognized globally as one of the major driving forces behind current technological innovations. To achieve sustainable development and attain its associated goals, it is very important to properly address its concerns in different aspects of technological innovation. Several industrial sectors have enjoyed productivity and economic gains due to the advent of automation technology, so it is important to characterize sustainability for that technology. Sustainability is a key factor that will determine the future of our neighbours in time, and it must be tightly wrapped around the double-edged sword of technology. In this study, different impacts of automation have been addressed using the ‘Circles of Sustainability’ approach as a framework, covering economic, political, cultural and ecological aspects and their implications. A systematic literature review of automation technology from its inception is outlined and plotted against its many outcomes covering a broad spectrum. The study is focused primarily on the social aspects of automation technology. It also reviews literature to analyse employment deficiency as one end of the social impact spectrum; on the other end of the spectrum, benefits to society through technological advancements, such as the Internet of Things (IoT) coupled with automation, are presented.

  16. Automated ultrasound edge-tracking software comparable to established semi-automated reference software for carotid intima-media thickness analysis.

    PubMed

    Shenouda, Ninette; Proudfoot, Nicole A; Currie, Katharine D; Timmons, Brian W; MacDonald, Maureen J

    2018-05-01

    Many commercial ultrasound systems are now including automated analysis packages for the determination of carotid intima-media thickness (cIMT); however, details regarding their algorithms and methodology are not published. Few studies have compared their accuracy and reliability with previously established automated software, and those that have were in asymptomatic adults. Therefore, this study compared cIMT measures from a fully automated ultrasound edge-tracking software (EchoPAC PC, Version 110.0.2; GE Medical Systems, Horten, Norway) to an established semi-automated reference software (Artery Measurement System (AMS) II, Version 1.141; Gothenburg, Sweden) in 30 healthy preschool children (ages 3-5 years) and 27 adults with coronary artery disease (CAD; ages 48-81 years). For both groups, Bland-Altman plots revealed good agreement with a negligible mean cIMT difference of -0.03 mm. Software differences were statistically, but not clinically, significant for preschool images (P = 0.001) and were not significant for CAD images (P = 0.09). Intra- and interoperator repeatability was high and comparable between software for preschool images (ICC, 0.90-0.96; CV, 1.3-2.5%), but slightly higher with the automated ultrasound than the semi-automated reference software for CAD images (ICC, 0.98-0.99; CV, 1.4-2.0% versus ICC, 0.84-0.89; CV, 5.6-6.8%). These findings suggest that the automated ultrasound software produces valid cIMT values in healthy preschool children and adults with CAD. Automated ultrasound software may be useful for ensuring consistency among multisite research initiatives or large cohort studies involving repeated cIMT measures, particularly in adults with documented CAD. © 2017 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
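
    The agreement statistic reported above (a Bland-Altman mean difference, or bias, with limits of agreement) can be computed directly from paired measurements. The Python sketch below is illustrative only: the cIMT values and variable names are made up, not data from the study.

        # Minimal Bland-Altman sketch for comparing two cIMT measurement methods.
        # The data below are illustrative placeholders, not values from the study.
        import numpy as np

        def bland_altman(method_a, method_b):
            """Return mean difference (bias) and 95% limits of agreement."""
            a = np.asarray(method_a, dtype=float)
            b = np.asarray(method_b, dtype=float)
            diff = a - b                      # per-subject difference between methods
            bias = diff.mean()                # systematic offset between methods
            loa = 1.96 * diff.std(ddof=1)     # half-width of the 95% limits of agreement
            return bias, (bias - loa, bias + loa)

        # Hypothetical cIMT values (mm) from an automated and a semi-automated reading
        automated      = [0.42, 0.45, 0.39, 0.48, 0.44, 0.41]
        semi_automated = [0.44, 0.47, 0.41, 0.50, 0.46, 0.44]

        bias, (lower, upper) = bland_altman(automated, semi_automated)
        print(f"bias = {bias:.3f} mm, 95% LoA = [{lower:.3f}, {upper:.3f}] mm")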

  17. Development of automated optical verification technologies for control systems

    NASA Astrophysics Data System (ADS)

    Volegov, Peter L.; Podgornov, Vladimir A.

    1999-08-01

    This report considers optical techniques for automated verification of an object's identity, designed for control systems at nuclear facilities. It presents experimental results and the development of pattern recognition techniques carried out under ISTC project number 772, aimed at identifying unique features of a controlled object's surface structure and the effects of its random treatment. Possibilities for industrial introduction of the developed technologies within the framework of lab-to-lab cooperation between USA and Russia laboratories, including the development of up-to-date systems for nuclear material control and accounting, are examined.

  18. Electronic health record-based cardiac risk assessment and identification of unmet preventive needs.

    PubMed

    Persell, Stephen D; Dunne, Alexis P; Lloyd-Jones, Donald M; Baker, David W

    2009-04-01

    Cardiac risk assessment may not be routinely performed. Electronic health records (EHRs) offer the potential to automate risk estimation. We compared EHR-based assessment with manual chart review to determine the accuracy of automated cardiac risk estimation and determination of candidates for antiplatelet or lipid-lowering interventions. We performed an observational retrospective study of 23,111 adults aged 20 to 79 years, seen in a large urban primary care group practice. Automated assessments classified patients into 4 cardiac risk groups or as unclassifiable and determined candidates for antiplatelet or lipid-lowering interventions based on current guidelines. A blinded physician manually reviewed 100 patients from each risk group and the unclassifiable group. We determined the agreement between full review and automated assessments for cardiac risk estimation and identification of which patients were candidates for interventions. By automated methods, 9.2% of the population were candidates for lipid-lowering interventions, and 8.0% were candidates for antiplatelet medication. Agreement between automated risk classification and manual review was high (kappa = 0.91; 95% confidence interval [CI], 0.88-0.93). Automated methods accurately identified candidates for antiplatelet therapy [sensitivity, 0.81 (95% CI, 0.73-0.89); specificity, 0.98 (95% CI, 0.96-0.99); positive predictive value, 0.86 (95% CI, 0.78-0.94); and negative predictive value, 0.98 (95% CI, 0.97-0.99)] and lipid lowering [sensitivity, 0.92 (95% CI, 0.87-0.96); specificity, 0.98 (95% CI, 0.97-0.99); positive predictive value, 0.94 (95% CI, 0.89-0.99); and negative predictive value, 0.99 (95% CI, 0.98 to ≥0.99)]. EHR data can be used to automatically perform cardiovascular risk stratification and identify patients in need of risk-lowering interventions. This could improve detection of high-risk patients of whom physicians would otherwise be unaware.
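
    The agreement and screening statistics quoted above (kappa, sensitivity, specificity, and predictive values) follow standard definitions. The Python sketch below shows one way to compute them from a 2x2 confusion matrix and an n-by-n agreement table; all counts are hypothetical, not the study's data.

        # Sketch of the metrics reported above: sensitivity, specificity and
        # predictive values from a 2x2 confusion matrix, plus Cohen's kappa for
        # agreement between automated and manual risk classification.
        # All counts are hypothetical, not the study's data.
        import numpy as np

        def binary_metrics(tp, fp, fn, tn):
            sens = tp / (tp + fn)     # sensitivity (recall)
            spec = tn / (tn + fp)     # specificity
            ppv  = tp / (tp + fp)     # positive predictive value
            npv  = tn / (tn + fn)     # negative predictive value
            return sens, spec, ppv, npv

        def cohens_kappa(confusion):
            """Cohen's kappa for an n-by-n agreement table (rows: rater A, cols: rater B)."""
            m = np.asarray(confusion, dtype=float)
            total = m.sum()
            p_obs = np.trace(m) / total                                  # observed agreement
            p_exp = (m.sum(axis=0) * m.sum(axis=1)).sum() / total ** 2   # chance agreement
            return (p_obs - p_exp) / (1 - p_exp)

        print(binary_metrics(tp=81, fp=13, fn=19, tn=887))
        print(cohens_kappa([[90, 5, 0],
                            [4, 85, 6],
                            [1, 7, 92]]))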

  19. Analysis of {10-12} twinning via automated atomistic post-processing methods

    NASA Astrophysics Data System (ADS)

    Barrett, Christopher D.

    2017-05-01

    {10-12} twinning is the most prominent and most studied twin mode in hexagonal close-packed materials. Many works have been devoted to describing its nucleation, growth and interactions with other defects. Despite this, gaps and disagreements remain in the literature regarding some fundamental aspects of the twinning process. A rigorous understanding of the twinning process is imperative because, without it, higher-scale models of plasticity cannot accurately capture deformation in important materials such as Mg, Ti, Zr and Zn. Motivated by this necessity, we have studied {10-12} twinning using molecular dynamics, focusing on automated processing techniques which can extract mechanistic information generalisable to continuum scale deformation. This demonstrates for the first time the automatic identification of twinning dislocation lines and Burgers vectors, and the elasto-plastic decomposition of the deformation gradient inside and around a twin embryo. These results confirm predictions of most authors regarding the dislocation-based twin growth process, while contradicting others who have argued that {10-12} twin growth stems from a shuffling process with no dislocation line.

  20. Danger! Automation at Work; Report of the State of Illinois Commission on Automation and Technological Progress.

    ERIC Educational Resources Information Center

    Karp, William

    The 74th Illinois General Assembly created the Illinois Commission on Automation and Technological Progress to study and analyze the economic and social effects of automation and other technological changes on industry, commerce, agriculture, education, manpower, and society in Illinois. Commission members visited industrial plants and business…

  1. Automated night/day standoff detection, tracking, and identification of personnel for installation protection

    NASA Astrophysics Data System (ADS)

    Lemoff, Brian E.; Martin, Robert B.; Sluch, Mikhail; Kafka, Kristopher M.; McCormick, William; Ice, Robert

    2013-06-01

    The capability to positively and covertly identify people at a safe distance, 24-hours per day, could provide a valuable advantage in protecting installations, both domestically and in an asymmetric warfare environment. This capability would enable installation security officers to identify known bad actors from a safe distance, even if they are approaching under cover of darkness. We will describe an active-SWIR imaging system being developed to automatically detect, track, and identify people at long range using computer face recognition. The system illuminates the target with an eye-safe and invisible SWIR laser beam, to provide consistent high-resolution imagery night and day. SWIR facial imagery produced by the system is matched against a watch-list of mug shots using computer face recognition algorithms. The current system relies on an operator to point the camera and to review and interpret the face recognition results. Automation software is being developed that will allow the system to be cued to a location by an external system, automatically detect a person, track the person as they move, zoom in on the face, select good facial images, and process the face recognition results, producing alarms and sharing data with other systems when people are detected and identified. Progress on the automation of this system will be presented along with experimental night-time face recognition results at distance.

  2. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    PubMed

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task and requires an analyst with a professional medical background; a method is therefore needed that extracts medical error factors while reducing the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, extraction of the error factors, and identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in both Japan and China, 19 error-related items and their levels were extracted; these were then related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Additionally, compared to BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy, being able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology, which could automatically identify medical error factors.
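
    As a rough illustration of the GA-BPNN idea, the sketch below uses a small genetic algorithm to select back-propagation network settings, scoring each candidate by cross-validated accuracy. It is a much-simplified stand-in that assumes scikit-learn is available and uses synthetic data; it is not the authors' model or their error-factor dataset.

        # Much-simplified sketch of the GA-BPNN idea: a small genetic algorithm
        # searches back-propagation network settings, with cross-validated accuracy
        # as the fitness. Illustrative only -- synthetic data, not the study's model.
        import random
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPClassifier

        # Synthetic stand-in for "error-related items" (features) and "error factors" (labels)
        X, y = make_classification(n_samples=300, n_features=19, n_informative=8,
                                   n_classes=4, n_clusters_per_class=1, random_state=0)

        HIDDEN = [8, 16, 32, 64]            # candidate hidden-layer sizes
        LRATES = [1e-3, 3e-3, 1e-2, 3e-2]   # candidate learning rates

        def fitness(genome):
            hidden, lr = genome
            net = MLPClassifier(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                                max_iter=500, random_state=0)
            return cross_val_score(net, X, y, cv=3).mean()

        def evolve(pop_size=6, generations=3):
            pop = [(random.choice(HIDDEN), random.choice(LRATES)) for _ in range(pop_size)]
            for _ in range(generations):
                parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    child = (a[0], b[1])                   # one-point crossover
                    if random.random() < 0.3:              # mutation
                        child = (random.choice(HIDDEN), child[1])
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)

        best = evolve()
        print("best genome (hidden units, learning rate):", best)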

  3. A controlled trial of automated classification of negation from clinical notes

    PubMed Central

    Elkin, Peter L; Brown, Steven H; Bauer, Brent A; Husser, Casey S; Carruth, William; Bergstrom, Larry R; Wahner-Roedler, Dietlind L

    2005-01-01

    Background Identification of negation in electronic health records is essential if we are to understand the computable meaning of the records. Our objective is to compare the accuracy of an automated mechanism for assignment of Negation to clinical concepts within a compositional expression with Human Assigned Negation, and to perform a failure analysis to identify the causes of poorly identified negation (i.e. Missed Conceptual Representation, Inaccurate Conceptual Representation, Missed Negation, Inaccurate identification of Negation). Methods 41 Clinical Documents (Medical Evaluations; sometimes outside of Mayo these are referred to as History and Physical Examinations) were parsed using the Mayo Vocabulary Server Parsing Engine. SNOMED-CT™ was used to provide concept coverage for the clinical concepts in the record. These records resulted in identification of Concepts and textual clues to Negation. These records were reviewed by an independent medical terminologist, and the results were tallied in a spreadsheet. Where questions arose during the review, Internal Medicine Faculty were employed to make a final determination. Results SNOMED-CT was used to provide concept coverage of the 14,792 Concepts in 41 Health Records from Johns Hopkins University. Of these, 1,823 Concepts were identified as negative by Human review. The sensitivity (Recall) of the assignment of negation was 97.2% (p < 0.001, Pearson Chi-Square test, when compared to a coin flip). The specificity of assignment of negation was 98.8%. The positive likelihood ratio of the negation was 81. The positive predictive value (Precision) was 91.2%. Conclusion Automated assignment of negation to concepts identified in health records based on review of the text is feasible and practical. Lexical assignment of negation is a good test of true Negativity as judged by the high sensitivity, specificity and positive likelihood ratio of the test. SNOMED-CT had overall coverage of 88.7% of the concepts being negated.
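
    Lexical assignment of negation, as evaluated above, can be illustrated with a toy NegEx-style cue matcher: a concept found within a short token window after a negation phrase is marked as negated. The Python sketch below is only a simplified illustration; it is not the Mayo Vocabulary Server Parsing Engine, and the cue list and window size are arbitrary choices.

        # Toy illustration of lexical negation assignment: a NegEx-style cue matcher
        # that flags clinical concepts appearing within a short token window after a
        # negation phrase. Not the parsing engine described above; cues and window
        # size are arbitrary, and sentence boundaries are ignored for simplicity.
        import re

        NEGATION_CUES = [r"\bno\b", r"\bdenies\b", r"\bwithout\b",
                         r"\bnegative for\b", r"\bno evidence of\b"]
        WINDOW = 4   # tokens after a cue in which a concept is considered negated

        def negated_concepts(text, concepts):
            tokens = text.lower().split()
            flags = {c: False for c in concepts}
            for i in range(len(tokens)):
                tail = " ".join(tokens[i:])
                if any(re.match(cue, tail) for cue in NEGATION_CUES):
                    scope = " ".join(tokens[i:i + WINDOW])
                    for c in concepts:
                        if c.lower() in scope:
                            flags[c] = True
            return flags

        note = "Patient denies chest pain. No evidence of pneumonia. Cough is present."
        print(negated_concepts(note, ["chest pain", "pneumonia", "cough"]))
        # -> chest pain: True, pneumonia: True, cough: False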

  4. Semi-automated based ground-truthing GUI for airborne imagery

    NASA Astrophysics Data System (ADS)

    Phan, Chung; Lydic, Rich; Moore, Tim; Trang, Anh; Agarwal, Sanjeev; Tiwari, Spandan

    2005-06-01

    Over the past several years, an enormous amount of airborne imagery in various formats has been collected, and collection will continue into the future to support airborne mine/minefield detection processes, improve algorithm development, and aid in imaging sensor development. The ground-truthing of imagery is an essential part of the algorithm development process, helping to validate the detection performance of the sensor and to improve algorithm techniques. The GUI (Graphical User Interface) called SemiTruth was developed using Matlab software, incorporating the signal processing, image processing, and statistics toolboxes, to aid in ground-truthing imagery. The semi-automated ground-truthing GUI is made possible by the current data collection method, which includes UTM/GPS (Universal Transverse Mercator/Global Positioning System) coordinate measurements for the mine target and fiducial locations on the given minefield layout, to support identification of the targets on the raw imagery. This semi-automated ground-truthing effort has been developed by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division, Airborne Application Branch, with support from the University of Missouri-Rolla.

  5. CERES: A Set of Automated Routines for Echelle Spectra

    NASA Astrophysics Data System (ADS)

    Brahm, Rafael; Jordán, Andrés; Espinoza, Néstor

    2017-03-01

    We present the Collection of Elemental Routines for Echelle Spectra (CERES). These routines were developed for the construction of automated pipelines for the reduction, extraction, and analysis of spectra acquired with different instruments, allowing homogeneous and standardized results to be obtained. This modular code includes tools for handling the different steps of the processing: CCD image reductions; identification and tracing of the echelle orders; optimal and rectangular extraction; computation of the wavelength solution; estimation of radial velocities; and rough and fast estimation of the atmospheric parameters. Currently, CERES has been used to develop automated pipelines for 13 different spectrographs, namely CORALIE, FEROS, HARPS, ESPaDOnS, FIES, PUCHEROS, FIDEOS, CAFE, DuPont/Echelle, Magellan/Mike, Keck/HIRES, Magellan/PFS, and APO/ARCES, but the routines can be easily used to deal with data coming from other spectrographs. We show the high precision in radial velocity that CERES achieves for some of these instruments, and we briefly summarize some results that have already been obtained using the CERES pipelines.
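
    One of the pipeline steps mentioned above, radial velocity estimation, is commonly done by cross-correlating the observed spectrum against a template on a logarithmic wavelength grid, where a Doppler shift becomes a simple translation. The Python sketch below illustrates that principle on a synthetic single-line spectrum; it is not CERES code, and the grid, line parameters, and lag range are arbitrary choices.

        # Hedged sketch of radial velocity estimation by cross-correlation on a
        # log-wavelength grid. Synthetic single-line spectra, not real echelle data.
        import numpy as np

        C_KMS = 299792.458                                            # speed of light, km/s
        loglam = np.linspace(np.log(5000.0), np.log(5100.0), 4000)    # log-wavelength grid
        step = loglam[1] - loglam[0]

        def gaussian_absorption(loglam, center_lam, depth=0.6, sigma=2e-5):
            return 1.0 - depth * np.exp(-0.5 * ((loglam - np.log(center_lam)) / sigma) ** 2)

        template = gaussian_absorption(loglam, 5050.0)
        true_rv = 12.0                                                # km/s, injected shift
        observed = gaussian_absorption(loglam, 5050.0 * (1 + true_rv / C_KMS))

        # Cross-correlate the continuum-subtracted spectra over a range of pixel lags
        lags = np.arange(-200, 201)
        ccf = [np.sum((template - 1) * np.roll(observed - 1, -lag)) for lag in lags]
        best_lag = lags[int(np.argmax(ccf))]

        rv_estimate = best_lag * step * C_KMS      # lag in log-lambda -> velocity
        print(f"injected RV = {true_rv} km/s, recovered RV ~ {rv_estimate:.2f} km/s")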

  6. The use of machine learning for the identification of peripheral artery disease and future mortality risk.

    PubMed

    Ross, Elsie Gyang; Shah, Nigam H; Dalman, Ronald L; Nead, Kevin T; Cooke, John P; Leeper, Nicholas J

    2016-11-01

    A key aspect of the precision medicine effort is the development of informatics tools that can analyze and interpret "big data" sets in an automated and adaptive fashion while providing accurate and actionable clinical information. The aims of this study were to develop machine learning algorithms for the identification of disease and the prognostication of mortality risk and to determine whether such models perform better than classical statistical analyses. Focusing on peripheral artery disease (PAD), patient data were derived from a prospective, observational study of 1755 patients who presented for elective coronary angiography. We employed multiple supervised machine learning algorithms and used diverse clinical, demographic, imaging, and genomic information in a hypothesis-free manner to build models that could identify patients with PAD and predict future mortality. Comparison was made to standard stepwise logistic regression models. Our machine-learned models outperformed stepwise logistic regression models both for the identification of patients with PAD (area under the curve, 0.87 vs 0.76, respectively; P = .03) and for the prediction of future mortality (area under the curve, 0.76 vs 0.65, respectively; P = .10). Both machine-learned models were markedly better calibrated than the stepwise logistic regression models, thus providing more accurate disease and mortality risk estimates. Machine learning approaches can produce more accurate disease classification and prediction models. These tools may prove clinically useful for the automated identification of patients with highly morbid diseases for which aggressive risk factor management can improve outcomes. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
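
    The model comparison described above can be mimicked in outline with off-the-shelf tools: fit a regularized logistic regression and a tree-ensemble classifier, then compare cross-validated AUC. The sketch below uses synthetic data as a stand-in for the clinical, imaging, and genomic features, and a plain (not stepwise) logistic regression, so it illustrates the workflow rather than reproducing the study.

        # Illustrative comparison in the spirit of the study above: logistic
        # regression versus a tree-ensemble classifier, scored by cross-validated AUC.
        # Synthetic data stand in for the clinical/imaging/genomic features.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=1755, n_features=50, n_informative=10,
                                   weights=[0.8, 0.2], random_state=0)

        logit = LogisticRegression(max_iter=2000)
        boost = GradientBoostingClassifier(random_state=0)

        auc_logit = cross_val_score(logit, X, y, cv=5, scoring="roc_auc").mean()
        auc_boost = cross_val_score(boost, X, y, cv=5, scoring="roc_auc").mean()
        print(f"logistic regression AUC: {auc_logit:.2f}")
        print(f"gradient boosting AUC:   {auc_boost:.2f}")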

  7. A unified framework for evaluating the risk of re-identification of text de-identification tools.

    PubMed

    Scaiano, Martin; Middleton, Grant; Arbuckle, Luk; Kolhatkar, Varada; Peyton, Liam; Dowling, Moira; Gipson, Debbie S; El Emam, Khaled

    2016-10-01

    It has become regular practice to de-identify unstructured medical text for use in research using automatic methods, the goal of which is to remove patient identifying information to minimize re-identification risk. The metrics commonly used to determine if these systems are performing well do not accurately reflect the risk of a patient being re-identified. We therefore developed a framework for measuring the risk of re-identification associated with textual data releases. We apply the proposed evaluation framework to a data set from the University of Michigan Medical School. Our risk assessment results are then compared with those that would be obtained using a typical contemporary micro-average evaluation of recall in order to illustrate the difference between the proposed evaluation framework and the current baseline method. We demonstrate how this framework compares against common measures of the re-identification risk associated with an automated text de-identification process. For the probability of re-identification using our evaluation framework we obtained a mean value for direct identifiers of 0.0074 and a mean value for quasi-identifiers of 0.0022. The 95% confidence interval for these estimates were below the relevant thresholds. The threshold for direct identifier risk was based on previously used approaches in the literature. The threshold for quasi-identifiers was determined based on the context of the data release following commonly used de-identification criteria for structured data. Our framework attempts to correct for poorly distributed evaluation corpora, accounts for the data release context, and avoids the often optimistic assumptions that are made using the more traditional evaluation approach. It therefore provides a more realistic estimate of the true probability of re-identification. This framework should be used as a basis for computing re-identification risk in order to more realistically evaluate future text de-identification tools

  8. Validation of an automated tractography method for the optic radiations as a biomarker of visual acuity in neurofibromatosis-associated optic pathway glioma.

    PubMed

    de Blank, Peter; Fisher, Michael J; Gittleman, Haley; Barnholtz-Sloan, Jill S; Badve, Chaitra; Berman, Jeffrey I

    2018-01-01

    Fractional anisotropy (FA) of the optic radiations has been associated with vision deficit in multiple intrinsic brain pathologies including NF1-associated optic pathway glioma, but hand-drawn regions of interest used in previous tractography methods limit consistency of this potential biomarker. We created an automated method to identify white matter tracts in the optic radiations and compared this method to previously reported hand-drawn tractography. Automated tractography of the optic radiation using probabilistic streamline fiber tracking between the lateral geniculate nucleus of the thalamus and the occipital cortex was compared to the hand-drawn method between regions of interest posterior to Meyer's loop and anterior to tract branching near the calcarine cortex. Reliability was assessed by two independent raters in a sample of 20 healthy child controls. Among 50 children with NF1-associated optic pathway glioma, the association of FA and visual acuity deficit was compared for both tractography methods. Hand-drawn tractography methods required 2.6 ± 0.9 min/participant; automated methods were performed in <1 min of operator time for all participants. Cronbach's alpha was 0.83 between two independent raters for FA in hand-drawn tractography, but repeated automated tractography resulted in identical FA values (Cronbach's alpha = 1). On univariate and multivariate analyses, FA was similarly associated with visual acuity loss using both methods. Receiver operator characteristic curves of both multivariate models demonstrated that both automated and hand-drawn tractography methods were equally able to distinguish normal from abnormal visual acuity. Automated tractography of the optic radiations offers a fast, reliable and consistent method of tract identification that is not reliant on operator time or expertise. This method of tract identification may be useful as DTI is developed as a potential biomarker for visual acuity. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. I trust it, but I don't know why: effects of implicit attitudes toward automation on trust in an automated system.

    PubMed

    Merritt, Stephanie M; Heimbaugh, Heather; LaChapell, Jennifer; Lee, Deborah

    2013-06-01

    This study is the first to examine the influence of implicit attitudes toward automation on users' trust in automation. Past empirical work has examined explicit (conscious) influences on user level of trust in automation but has not yet measured implicit influences. We examine concurrent effects of explicit propensity to trust machines and implicit attitudes toward automation on trust in an automated system. We examine differential impacts of each under varying automation performance conditions (clearly good, ambiguous, clearly poor). Participants completed both a self-report measure of propensity to trust and an Implicit Association Test measuring implicit attitude toward automation, then performed an X-ray screening task. Automation performance was manipulated within-subjects by varying the number and obviousness of errors. Explicit propensity to trust and implicit attitude toward automation did not significantly correlate. When the automation's performance was ambiguous, implicit attitude significantly affected automation trust, and its relationship with propensity to trust was additive: Increments in either were related to increases in trust. When errors were obvious, a significant interaction between the implicit and explicit measures was found, with those high in both having higher trust. Implicit attitudes have important implications for automation trust. Users may not be able to accurately report why they experience a given level of trust. To understand why users trust or fail to trust automation, measurements of implicit and explicit predictors may be necessary. Furthermore, implicit attitude toward automation might be used as a lever to effectively calibrate trust.

  10. Automated Measurement of Facial Expression in Infant-Mother Interaction: A Pilot Study

    ERIC Educational Resources Information Center

    Messinger, Daniel S.; Mahoor, Mohammad H.; Chow, Sy-Miin; Cohn, Jeffrey F.

    2009-01-01

    Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two 6-month-old infant-mother dyads who each engaged in a face-to-face…

  11. Rapid Automated Quantification of Cerebral Leukoaraiosis on CT Images: A Multicenter Validation Study.

    PubMed

    Chen, Liang; Carlton Jones, Anoma Lalani; Mair, Grant; Patel, Rajiv; Gontsarova, Anastasia; Ganesalingam, Jeban; Math, Nikhil; Dawson, Angela; Aweid, Basaam; Cohen, David; Mehta, Amrish; Wardlaw, Joanna; Rueckert, Daniel; Bentley, Paul

    2018-05-15

    Purpose To validate a random forest method for segmenting cerebral white matter lesions (WMLs) on computed tomographic (CT) images in a multicenter cohort of patients with acute ischemic stroke, by comparison with fluid-attenuated inversion recovery (FLAIR) magnetic resonance (MR) images and expert consensus. Materials and Methods A retrospective sample of 1082 acute ischemic stroke cases was obtained that was composed of unselected patients who were treated with thrombolysis or who were undergoing contemporaneous MR imaging and CT, and a subset of International Stroke Thrombolysis-3 trial participants. Automated delineations of WML on images were validated relative to experts' manual tracings on CT images and co-registered FLAIR MR imaging, and ratings were performed by using two conventional ordinal scales. Analyses included correlations between CT and MR imaging volumes, and agreements between automated and expert ratings. Results Automated WML volumes correlated strongly with expert-delineated WML volumes at MR imaging and CT (r2 = 0.85 and 0.71, respectively; P < .001). Spatial similarity of automated maps, relative to WML MR imaging, was not significantly different from that of expert WML tracings on CT images. Individual expert WML volumes at CT correlated well with each other (r2 = 0.85), but varied widely (range, 91% of mean estimate; median estimate, 11 mL; range of estimated ranges, 0.2-68 mL). Agreements (κ) between automated ratings and consensus ratings were 0.60 (Wahlund system) and 0.64 (van Swieten system) compared with agreements between individual pairs of experts of 0.51 and 0.67, respectively, for the two rating systems (P < .01 for Wahlund system comparison of agreements). Accuracy was unaffected by established infarction, acute ischemic changes, or atrophy (P > .05). Automated preprocessing failure rate was 4%; rating errors occurred in a further 4%. Total automated processing time averaged 109 seconds (range, 79-140 seconds). Conclusion An automated

  12. Application of advanced technology to space automation

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Polhemus, J. T.; Lowrie, J. W.; Hughes, C. A.; Stephens, J. R.; Chang, C. Y.

    1979-01-01

    Automated operations in space provide the key to optimized mission design and data acquisition at minimum cost for the future. The results of this study strongly accentuate this statement and should provide further incentive for immediate development of specific automation technology as defined herein. Essential automation technology requirements were identified for future programs. The study was undertaken to address the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining these benefits.

  13. Designing and evaluating an automated system for real-time medication administration error detection in a neonatal intensive care unit.

    PubMed

    Ni, Yizhao; Lingren, Todd; Hall, Eric S; Leonard, Matthew; Melton, Kristin; Kirkendall, Eric S

    2018-05-01

    Timely identification of medication administration errors (MAEs) promises great benefits for mitigating medication errors and associated harm. Despite previous efforts utilizing computerized methods to monitor medication errors, sustaining effective and accurate detection of MAEs remains challenging. In this study, we developed a real-time MAE detection system and evaluated its performance prior to system integration into institutional workflows. Our prospective observational study included automated MAE detection of 10 high-risk medications and fluids for patients admitted to the neonatal intensive care unit at Cincinnati Children's Hospital Medical Center during a 4-month period. The automated system extracted real-time medication use information from the institutional electronic health records and identified MAEs using logic-based rules and natural language processing techniques. The MAE summary was delivered via a real-time messaging platform to promote reduction of patient exposure to potential harm. System performance was validated using a physician-generated gold standard of MAE events, and results were compared with those of current practice (incident reporting and trigger tools). Physicians identified 116 MAEs from 10 104 medication administrations during the study period. Compared to current practice, the sensitivity with automated MAE detection was improved significantly from 4.3% to 85.3% (P = .009), with a positive predictive value of 78.0%. Furthermore, the system showed potential to reduce patient exposure to harm, from 256 min to 35 min (P < .001). The automated system demonstrated improved capacity for identifying MAEs while guarding against alert fatigue. It also showed promise for reducing patient exposure to potential harm following MAE events.
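
    A minimal sketch of the kind of logic-based rule such a system might apply is shown below: an administration is flagged when no matching active order exists, or when the recorded dose deviates from the ordered weight-based dose by more than a tolerance. The field names, drug, and 10% threshold are hypothetical illustrations, not the institution's actual rules.

        # Hypothetical sketch of a logic-based medication administration check.
        # Field names, drug, and tolerance are illustrative assumptions only.
        from dataclasses import dataclass

        @dataclass
        class Order:
            med: str
            dose_mg_per_kg: float

        @dataclass
        class Administration:
            med: str
            dose_mg: float
            weight_kg: float

        DOSE_TOLERANCE = 0.10   # 10% deviation from the ordered weight-based dose

        def check_administration(admin, active_orders):
            """Return a list of potential medication administration errors (MAEs)."""
            alerts = []
            order = next((o for o in active_orders if o.med == admin.med), None)
            if order is None:
                alerts.append(f"{admin.med}: administered without an active order")
                return alerts
            expected = order.dose_mg_per_kg * admin.weight_kg
            if abs(admin.dose_mg - expected) > DOSE_TOLERANCE * expected:
                alerts.append(f"{admin.med}: dose {admin.dose_mg} mg vs expected {expected:.1f} mg")
            return alerts

        orders = [Order("gentamicin", 4.0)]
        print(check_administration(Administration("gentamicin", 14.0, 2.8), orders))
        # expected 11.2 mg -> flagged as a potential MAE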

  14. [Study on the automatic parameters identification of water pipe network model].

    PubMed

    Jia, Hai-Feng; Zhao, Qi-Feng

    2010-01-01

    Based on an analysis of problems in the development and application of water pipe network models, automatic identification of model parameters is regarded as a key bottleneck for applying such models in water supply enterprises. A methodology for the automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The kernel algorithms of the automatic parameter identification are then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte-Carlo Sampling) is used for automatic identification of parameters; a detailed technical route based on RSA and MCS is presented. A module for automatic identification of water pipe network model parameters was developed. Finally, a typical water pipe network was selected as a case, a case study on automatic identification of water pipe network model parameters was conducted, and satisfactory results were achieved.
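
    The RSA/MCS combination described above can be sketched as follows: sample the uncertain parameters by Monte Carlo, split the simulations into behavioural and non-behavioural sets against observed data, and use a Kolmogorov-Smirnov statistic per parameter as the sensitivity measure. The Python sketch below uses a toy stand-in for the hydraulic model; the parameter ranges, acceptance threshold, and "observed" pressure are illustrative assumptions.

        # Hedged sketch of Regionalized Sensitivity Analysis with Monte Carlo
        # Sampling. The toy_model below stands in for a real hydraulic solver.
        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(0)
        N = 5000

        # Two uncertain roughness parameters sampled uniformly (Monte Carlo sampling)
        c1 = rng.uniform(80, 150, N)
        c2 = rng.uniform(80, 150, N)

        def toy_model(c1, c2):
            """Stand-in for a hydraulic solver: predicted pressure at a monitoring node."""
            return 30 + 0.10 * c1 + 0.02 * c2 + rng.normal(0, 0.5, np.shape(c1))

        observed = 45.0
        pred = toy_model(c1, c2)
        behavioural = np.abs(pred - observed) < 1.0      # acceptable simulations

        # RSA: compare parameter distributions in the behavioural vs non-behavioural
        # sets; a large KS statistic marks a sensitive parameter.
        for name, p in (("c1", c1), ("c2", c2)):
            ks = ks_2samp(p[behavioural], p[~behavioural]).statistic
            print(f"{name}: KS = {ks:.2f}")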

  15. Identification of Acinetobacter seifertii isolated from Bolivian hospitals.

    PubMed

    Cerezales, Mónica; Xanthopoulou, Kyriaki; Ertel, Julia; Nemec, Alexandr; Bustamante, Zulema; Seifert, Harald; Gallego, Lucia; Higgins, Paul G

    2018-06-01

    Acinetobacter seifertii is a recently described species that belongs to the Acinetobacter calcoaceticus-Acinetobacter baumannii complex. It has been recovered from clinical samples and is sometimes associated with antimicrobial resistance determinants. We present here the case of three A. seifertii clinical isolates that were initially identified as Acinetobacter sp. by phenotypic methods but for which no identification at the species level was achieved using semi-automated identification methods. The isolates were further analysed by whole genome sequencing and identified as A. seifertii. Due to the fact that A. seifertii has been isolated from serious infections such as respiratory tract and bloodstream infections, we emphasize the importance of correctly identifying isolates of the genus Acinetobacter at the species level to gain a deeper knowledge of their prevalence and clinical impact.

  16. Automated identification of reference genes based on RNA-seq data.

    PubMed

    Carmona, Rosario; Arroyo, Macarena; Jiménez-Quesada, María José; Seoane, Pedro; Zafra, Adoración; Larrosa, Rafael; Alché, Juan de Dios; Claros, M Gonzalo

    2017-08-18

    Gene expression analyses demand appropriate reference genes (RGs) for normalization, in order to obtain reliable assessments. Ideally, RG expression levels should remain constant in all cells, tissues or experimental conditions under study. Housekeeping genes traditionally fulfilled this requirement, but they have been reported to be less invariant than expected; therefore, RGs should be tested and validated for every particular situation. Microarray data have been used to propose new RGs, but only a limited set of model species and conditions are available; on the contrary, RNA-seq experiments are more and more frequent and constitute a new source of candidate RGs. An automated workflow based on mapped NGS reads has been constructed to obtain highly and invariantly expressed RGs based on a normalized expression in reads per mapped million and the coefficient of variation. This workflow has been tested with Roche/454 reads from reproductive tissues of olive tree (Olea europaea L.), as well as with Illumina paired-end reads from two different accessions of Arabidopsis thaliana and three different human cancers (prostate, small-cell lung cancer and lung adenocarcinoma). Candidate RGs have been proposed for each species and many of them have been previously reported as RGs in the literature. Experimental validation of significant RGs in olive tree is provided to support the algorithm. Regardless of the sequencing technology, number of replicates, and library sizes, when RNA-seq experiments are designed and performed, the same datasets can be analyzed with our workflow to extract suitable RGs for subsequent PCR validation. Moreover, different subsets of experimental conditions can provide different suitable RGs.
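
    The selection criterion described above (high, invariant expression after normalization to reads per mapped million) reduces to a per-gene mean and coefficient of variation. The Python sketch below shows the calculation on a made-up count matrix; the thresholds and gene names are illustrative, not values from the paper.

        # Minimal sketch of ranking candidate reference genes by normalized
        # expression and coefficient of variation. Made-up counts, not study data.
        import numpy as np

        counts = np.array([              # rows: genes, columns: samples/conditions
            [900, 1080, 820, 1000],      # high, stable     -> good RG candidate
            [900, 150, 1400, 300],       # high, variable   -> rejected by CV
            [12, 14, 11, 13],            # low expression   -> rejected by expression filter
        ])
        gene_ids = ["gene_A", "gene_B", "gene_C"]
        library_sizes = np.array([1.0e6, 1.2e6, 0.9e6, 1.1e6])   # total mapped reads per sample

        rpm = counts / library_sizes * 1e6           # reads per mapped million
        mean_expr = rpm.mean(axis=1)
        cv = rpm.std(axis=1, ddof=1) / mean_expr     # coefficient of variation per gene

        MIN_EXPR, MAX_CV = 500.0, 0.25               # illustrative thresholds
        for g, m, c in zip(gene_ids, mean_expr, cv):
            candidate = m >= MIN_EXPR and c <= MAX_CV
            print(f"{g}: mean RPM = {m:8.1f}, CV = {c:.2f}, candidate RG = {candidate}")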

  17. Multi-scale curvature for automated identification of glaciated mountain landscapes

    NASA Astrophysics Data System (ADS)

    Prasicek, Günther; Otto, Jan-Christoph; Montgomery, David; Schrott, Lothar

    2014-05-01

    Automated morphometric interpretation of digital terrain data based on impartial rule sets holds substantial promise for large dataset processing and objective landscape classification. However, the geomorphological realm presents tremendous complexity in the translation of qualitative descriptions into geomorphometric semantics. Here, the simple, conventional distinction of V-shaped fluvial and U-shaped glacial valleys is analyzed quantitatively using the relation of multi-scale curvature and drainage area. Glacial and fluvial erosion shapes mountain landscapes in a long-recognized and characteristic way. Valleys incised by fluvial processes typically have V-shaped cross-sections with uniform and moderately steep slopes, whereas glacial valleys tend to have U-shaped profiles and topographic gradients steepening with distance from valley floor. On a DEM, thalweg cells are determined by a drainage area cutoff and multiple moving window sizes are used to derive per-cell curvature over a variety of scales ranging from the vicinity of the flow path at the valley bottom to catchment sections fully including valley sides. The relation of the curvatures calculated for the user-defined minimum scale and the automatically detected maximum scale is presented as a novel morphometric variable termed Difference of Minimum Curvature (DMC). DMC thresholds determined from typical glacial and fluvial sample catchments are employed to identify quadrats of glaciated and non-glaciated mountain landscapes and the distinctions are validated by field-based geological and geomorphological maps. A first test of the novel algorithm at three study sites in the western United States and a subsequent application to Europe and western Asia demonstrate the transferability of the approach.
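
    The core of the approach can be illustrated in one dimension: estimate curvature at the thalweg over a small window and a large window, and use the difference between the two scales to separate V-shaped from U-shaped cross-sections. The Python sketch below is a simplified stand-in for the paper's DMC variable; the synthetic profiles, window sizes, and quadratic-fit curvature estimator are illustrative choices, not the published algorithm.

        # Simplified 1-D sketch of the multi-scale curvature idea: curvature at the
        # valley centre is estimated by a quadratic fit over small and large windows,
        # and the difference between scales separates V- from U-shaped profiles.
        import numpy as np

        x = np.linspace(-500, 500, 1001)               # distance from thalweg (m), 1 m cells
        v_valley = 0.4 * np.abs(x)                     # uniform side slopes (fluvial)
        u_valley = 0.4 * 500 * (x / 500) ** 4          # flat floor, steep walls (glacial)

        def curvature_at_center(profile, half_window):
            """Quadratic-fit curvature (2*a of a*x^2 + b*x + c) over +/- half_window cells."""
            c = len(profile) // 2
            idx = np.arange(-half_window, half_window + 1).astype(float)
            coeffs = np.polyfit(idx, profile[c - half_window:c + half_window + 1], 2)
            return 2.0 * coeffs[0]

        for name, prof in (("V-shaped", v_valley), ("U-shaped", u_valley)):
            small = curvature_at_center(prof, 10)      # near the valley floor
            large = curvature_at_center(prof, 400)     # including the valley sides
            print(f"{name}: small-scale {small:.4f}, large-scale {large:.4f}, "
                  f"difference {small - large:.4f}")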

  18. Automatic Ammunition Identification Technology Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weil, B.

    1993-01-01

    The Automatic Ammunition Identification Technology (AAIT) Project is an activity of the Robotics Process Systems Division at the Oak Ridge National Laboratory (ORNL) for the US Army's Project Manager-Ammunition Logistics (PM-AMMOLOG) at the Picatinny Arsenal in Picatinny, New Jersey. The project objective is to evaluate new two-dimensional bar code symbologies for potential use in ammunition logistics systems and automated reloading equipment. These new symbologies are a significant improvement over typical linear bar codes since machine-readable alphanumeric messages up to 2000 characters long are achievable. These compressed data symbologies are expected to significantly improve logistics and inventory management tasks and permit automated feeding and handling of ammunition to weapon systems. The results will be increased throughput capability, better inventory control, reduction of human error, lower operation and support costs, and a more timely re-supply of various weapon systems. This paper will describe the capabilities of existing compressed data symbologies and the symbol testing activities being conducted at ORNL for the AAIT Project.

  19. Automation literature: A brief review and analysis

    NASA Technical Reports Server (NTRS)

    Smith, D.; Dieterly, D. L.

    1980-01-01

    Current thought and research positions which may allow for an improved capability to understand the impact of introducing automation to an existing system are established. The orientation was toward the type of studies which may provide some general insight into automation; specifically, the impact of automation on human performance and the resulting system performance. While an extensive number of articles were reviewed, only those that addressed the issue of automation and human performance were selected for discussion. The literature is organized along two dimensions: time (pre-1970, post-1970) and type of approach (engineering or behavioral science). The conclusions reached are not definitive, but they do provide the initial stepping stones in an attempt to begin to bridge the concept of automation in a systematic progression.

  20. Human Papillomavirus Genotyping Using an Automated Film-Based Chip Array

    PubMed Central

    Erali, Maria; Pattison, David C.; Wittwer, Carl T.; Petti, Cathy A.

    2009-01-01

    The INFINITI HPV-QUAD assay is a commercially available genotyping platform for human papillomavirus (HPV) that uses multiplex PCR, followed by automated processing for primer extension, hybridization, and detection. The analytical performance of the HPV-QUAD assay was evaluated using liquid cervical cytology specimens, and the results were compared with those obtained using the digene High-Risk HPV hc2 Test (HC2). The specimen types included SurePath and PreservCyt transport media, as well as residual SurePath and HC2 transport media from the HC2 assay. The overall concordance of positive and negative results following the resolution of indeterminate and intermediate results was 83% among the 197 specimens tested. HC2 positive (+) and HPV-QUAD negative (−) results were noted in 24 specimens that were shown by real-time PCR and sequence analysis to contain no HPV, HPV types that were cross-reactive in the HC2 assay, or low virus levels. Conversely, HC2 (−) and HPV-QUAD (+) results were noted in four specimens and were subsequently attributed to cross-contamination. The most common HPV types to be identified in this study were HPV16, HPV18, HPV52/58, and HPV39/56. We show that the HPV-QUAD assay is a user-friendly, automated system for the identification of distinct HPV genotypes. Based on its analytical performance, future studies with this platform are warranted to assess its clinical utility for HPV detection and genotyping. PMID:19644025