Science.gov

Sample records for automated relation extraction

  1. Large Scale Application of Neural Network Based Semantic Role Labeling for Automated Relation Extraction from Biomedical Texts

    PubMed Central

    Barnickel, Thorsten; Weston, Jason; Collobert, Ronan; Mewes, Hans-Werner; Stümpflen, Volker

    2009-01-01

    To reduce the increasing amount of time spent on literature search in the life sciences, several methods for automated knowledge extraction have been developed. Co-occurrence based approaches can deal with large text corpora like MEDLINE in an acceptable time but are not able to extract any specific type of semantic relation. Semantic relation extraction methods based on syntax trees, on the other hand, are computationally expensive and the interpretation of the generated trees is difficult. Several natural language processing (NLP) approaches for the biomedical domain exist focusing specifically on the detection of a limited set of relation types. For systems biology, generic approaches for the detection of a multitude of relation types which in addition are able to process large text corpora are needed but the number of systems meeting both requirements is very limited. We introduce the use of SENNA (“Semantic Extraction using a Neural Network Architecture”), a fast and accurate neural network based Semantic Role Labeling (SRL) program, for the large scale extraction of semantic relations from the biomedical literature. A comparison of processing times of SENNA and other SRL systems or syntactical parsers used in the biomedical domain revealed that SENNA is the fastest Proposition Bank (PropBank) conforming SRL program currently available. 89 million biomedical sentences were tagged with SENNA on a 100 node cluster within three days. The accuracy of the presented relation extraction approach was evaluated on two test sets of annotated sentences resulting in precision/recall values of 0.71/0.43. We show that the accuracy as well as processing speed of the proposed semantic relation extraction approach is sufficient for its large scale application on biomedical text. The proposed approach is highly generalizable regarding the supported relation types and appears to be especially suited for general-purpose, broad-scale text mining systems. The presented approach
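
    The abstract reports that SENNA produces PropBank-style predicate-argument structures from which relations are then read off. As a hedged illustration of that general idea (not the authors' exact pipeline), the sketch below turns pre-computed SRL frames into (agent, predicate, patient) triples, keeping only frames whose arguments match a list of biomedical entities; the frame format and the entity list are assumptions.

```python
# Illustrative sketch only: the paper uses SENNA's PropBank-style SRL output, but the exact
# post-processing is not described here; the frame dictionaries and entity set are assumptions.

def frames_to_relations(frames, entities):
    """Turn SRL frames into (agent, predicate, patient) triples.

    frames   -- list of dicts like {"predicate": "inhibits", "A0": "TNF-alpha", "A1": "apoptosis"}
    entities -- set of biomedical entity strings of interest (e.g. gene/protein names)
    """
    relations = []
    for frame in frames:
        agent, patient = frame.get("A0"), frame.get("A1")
        # keep only relations whose arguments are known entities
        if agent in entities and patient in entities:
            relations.append((agent, frame["predicate"], patient))
    return relations

# toy usage
frames = [{"predicate": "activates", "A0": "IL-6", "A1": "STAT3"},
          {"predicate": "reviews", "A0": "the committee", "A1": "the report"}]
print(frames_to_relations(frames, {"IL-6", "STAT3"}))   # [('IL-6', 'activates', 'STAT3')]
```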

  2. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2004-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation, including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is the detection of features such as shocks, recirculation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  3. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2005-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation, including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is the detection of features such as shocks, re-circulation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  4. Automated DNA extraction from pollen in honey.

    PubMed

    Guertler, Patrick; Eicheldinger, Adelina; Muschler, Paul; Goerlich, Ottmar; Busch, Ulrich

    2014-04-15

    In recent years, honey has become a subject of DNA analysis due to potential risks evoked by microorganisms, allergens or genetically modified organisms. However, so far only a few DNA extraction procedures are available, and most of them are time-consuming and laborious. Therefore, we developed an automated method for extracting DNA from pollen in honey, based on a CTAB buffer-based DNA extraction using the Maxwell 16 instrument and the Maxwell 16 FFS Nucleic Acid Extraction System, Custom-Kit. We altered several components and extraction parameters and compared the optimised method with a manual CTAB buffer-based DNA isolation method. The automated DNA extraction was faster and resulted in higher DNA yield and sufficient DNA purity. Real-time PCR results obtained after automated DNA extraction are comparable to results after manual DNA extraction. No PCR inhibition was observed. The applicability of this method was further confirmed by the successful analysis of different routine honey samples. PMID:24295710

  5. Multiple automated headspace in-tube extraction for the accurate analysis of relevant wine aroma compounds and for the estimation of their relative liquid-gas transfer rates.

    PubMed

    Zapata, Julián; Lopez, Ricardo; Herrero, Paula; Ferreira, Vicente

    2012-11-30

    An automated headspace in-tube extraction (ITEX) method combined with multiple headspace extraction (MHE) has been developed to provide, simultaneously, accurate measurements of the wine content of 20 relevant aroma compounds and information about their relative transfer rates to the headspace, and hence about the relative strength of their interactions with the matrix. In the method, 5 μL (for alcohols, acetates and carbonyl alcohols) or 200 μL (for ethyl esters) of wine sample were introduced into a 2 mL vial, heated at 35°C and extracted with 32 (for alcohols, acetates and carbonyl alcohols) or 16 (for ethyl esters) 0.5 mL pumping strokes in four consecutive extraction and analysis cycles. The application of the classical theory of multiple extractions makes it possible to obtain a highly reliable estimate of the total amount of volatile compound present in the sample, together with a second parameter, β, which is simply the proportion of the volatile not transferred to the trap in one extraction cycle but which appears to be a reliable indicator of the actual volatility of the compound in that particular wine. A study with 20 wines of different types and 1 synthetic sample revealed significant differences in the relative volatility of 15 out of 20 odorants. Differences are particularly intense for acetaldehyde and other carbonyls, but are also notable for alcohols and long-chain fatty acid ethyl esters. These differences, likely linked to sulphur dioxide and some unknown compositional aspects of the wine matrix, may be responsible for relevant sensory changes, and may even explain why the same aroma composition can produce different aroma perceptions in two different wines. PMID:23102525
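
    The β parameter and the total amount follow from the classical multiple headspace extraction model, in which successive extraction peak areas decay geometrically. The sketch below is a minimal illustration assuming that standard model (A_i = A_1·β^(i-1), so the total equals A_1/(1-β)), not the authors' exact calculation; the data are invented.

```python
# Sketch of the classical MHE calculation: successive peak areas decay geometrically,
# A_i = A_1 * beta**(i-1), so the exhaustive-extraction total is A_1 / (1 - beta).
import numpy as np

def mhe_fit(areas):
    """Estimate beta and the total analyte signal from successive MHE peak areas."""
    areas = np.asarray(areas, dtype=float)
    cycles = np.arange(len(areas))                      # 0, 1, 2, ... extraction cycles
    slope, intercept = np.polyfit(cycles, np.log(areas), 1)
    beta = np.exp(slope)                                 # fraction NOT transferred per cycle
    a1 = np.exp(intercept)                               # first-extraction area
    total = a1 / (1.0 - beta)                            # geometric-series sum
    return beta, total

print(mhe_fit([1000.0, 600.0, 360.0, 216.0]))            # beta ~ 0.6, total ~ 2500
```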

  6. Automated Extraction of Secondary Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne M.; Haimes, Robert

    2005-01-01

    The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and development of the major components used for air and space propulsion. To aid in the post-processing and analysis phase of CFD, many researchers now use automated feature extraction utilities. These tools can be used to detect the existence of such features as shocks, vortex cores, and separation and re-attachment lines. The existence of secondary flow is another feature of significant importance to CFD engineers. Although the concept of secondary flow is relatively well understood, there is no commonly accepted mathematical definition for it. This paper presents a definition for secondary flow and one approach for automatically detecting and visualizing it.

  7. Automated building extraction using dense elevation matrices

    NASA Astrophysics Data System (ADS)

    Bendett, A. A.; Rauhala, Urho A.; Pearson, James J.

    1997-02-01

    The identification and measurement of buildings in imagery are important to a number of applications, including cartography, modeling and simulation, and weapon targeting. Extracting large numbers of buildings manually can be time-consuming and expensive, so automation of the process is highly desirable. This paper describes and demonstrates such an automated process for extracting rectilinear buildings from stereo imagery. The first step is the generation of a dense elevation matrix registered to the imagery. In the examples shown, this was accomplished using global minimum residual matching (GMRM). GMRM automatically removes y-parallax from the stereo imagery and produces a dense matrix of x-parallax values which are proportional to the local elevation and, of course, registered to the imagery. The second step is to form a joint probability distribution of the image gray levels and the corresponding height values from the elevation matrix. Based on the peaks of that distribution, the area of interest is segmented into feature and non-feature areas. The feature areas are further refined using length, width and height constraints to yield promising building hypotheses with their corresponding vertices. The gray shade image is used in the third step to verify the hypotheses and to determine precise edge locations corresponding to the approximate vertices and satisfying appropriate orthogonality constraints. Examples of successful application of this process to imagery are presented, and extensions involving the use of dense elevation matrices from other sources are possible.

  8. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Lovely, David

    1999-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snap-shot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: (1) Shocks, (2) Vortex cores, (3) Regions of recirculation, (4) Boundary layers, (5) Wakes. Three papers and an initial specification for the FX (Fluid eXtraction tool kit) Programmer's Guide were included. The papers, submitted to the AIAA Computational Fluid Dynamics Conference, are entitled: (1) Using Residence Time for the Extraction of Recirculation Regions, (2) Shock Detection from Computational Fluid Dynamics Results, and (3) On the Velocity Gradient Tensor and Fluid Feature Extraction.
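
    One of the listed papers concerns the velocity gradient tensor. As a hedged illustration of how a vortex indicator can be built from that tensor, the sketch below computes the widely used Q-criterion on a structured grid; this is a common choice, not necessarily the criterion developed in the cited paper, and the uniform-grid assumption is mine.

```python
# Hedged illustration: one common vortex indicator derived from the velocity gradient tensor
# is the Q-criterion. Assumes a uniform structured grid with velocity components u, v, w
# given as 3-D numpy arrays.
import numpy as np

def q_criterion(u, v, w, dx=1.0, dy=1.0, dz=1.0):
    """Return Q = 0.5*(||Omega||^2 - ||S||^2); Q > 0 marks rotation-dominated (vortical) regions."""
    # velocity gradient tensor J[..., i, j] = d(u_i)/d(x_j)
    grads = [np.gradient(c, dx, dy, dz) for c in (u, v, w)]
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)
    S = 0.5 * (J + np.swapaxes(J, -1, -2))        # strain-rate tensor (symmetric part)
    Omega = 0.5 * (J - np.swapaxes(J, -1, -2))    # rotation tensor (antisymmetric part)
    return 0.5 * (np.sum(Omega**2, axis=(-1, -2)) - np.sum(S**2, axis=(-1, -2)))
```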

  9. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2000-01-01

    In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense.

  10. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: Shocks; Vortex cores; Regions of Recirculation; Boundary Layers; Wakes.

  11. Automated feature extraction and classification from image sources

    USGS Publications Warehouse

    U.S. Geological Survey

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  12. Automating Relational Database Design for Microcomputer Users.

    ERIC Educational Resources Information Center

    Pu, Hao-Che

    1991-01-01

    Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…

  13. Automated extraction of free-text from pathology reports.

    PubMed

    Currie, Anne-Marie; Fricke, Travis; Gawne, Agnes; Johnston, Ric; Liu, John; Stein, Barbara

    2006-01-01

    Manually populating a cancer registry from free-text pathology reports is labor intensive and costly. This poster describes a method of automated text extraction to improve the efficiency of this process and reduce cost. FineTooth, a software company, provides an automated service to the Fred Hutchinson Cancer Research Center (FHCRC) to help populate their breast and prostate cancer clinical research database by electronically abstracting over 80 data fields from pathology text reports. PMID:17238518

  14. Automated knowledge extraction from MEDLINE citations.

    PubMed

    Mendonça, E A; Cimino, J J

    2000-01-01

    As part of preliminary studies for the development of a digital library, we have studied the possibility of using the co-occurrence of MeSH terms in MEDLINE citations associated with the search strategies optimal for evidence-based medicine to automate construction of a knowledge base. We use UMLS semantic types to analyze search results and determine which semantic types are most relevant for different types of questions (etiology, diagnosis, therapy, and prognosis). The automated process generated a large amount of information. Seven to eight percent of the semantic pairs generated in each clinical task group co-occur significantly more often than can be accounted for by chance. A pilot study showed good specificity and sensitivity for the intended purposes of this project in all groups. PMID:11079949
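
    The abstract reports that some MeSH term pairs "co-occur significantly more often than can be accounted for by chance." The exact statistic is not given here, so the sketch below shows one conventional way to test this, a chi-square test on a 2x2 contingency table of citation counts; the choice of test and the significance threshold are assumptions.

```python
# Illustrative significance test for MeSH term co-occurrence; the paper's actual statistic
# is not specified here, so the chi-square test and alpha level are assumptions.
import numpy as np
from scipy.stats import chi2_contingency

def cooccurs_more_than_chance(n_both, n_a_only, n_b_only, n_neither, alpha=0.01):
    """Rows: citation contains term A (yes/no); columns: citation contains term B (yes/no)."""
    table = np.array([[n_both, n_a_only],
                      [n_b_only, n_neither]])
    chi2, p, dof, expected = chi2_contingency(table)
    # require both statistical significance and an excess over the chance expectation
    return p < alpha and n_both > expected[0, 0]

# e.g. two terms appearing together in 120 of 10,000 sampled citations
print(cooccurs_more_than_chance(120, 380, 600, 8900))    # True
```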

  15. Text Mining approaches for automated literature knowledge extraction and representation.

    PubMed

    Nuzzo, Angelo; Mulas, Francesca; Gabetta, Matteo; Arbustini, Eloisa; Zupan, Blaz; Larizza, Cristiana; Bellazzi, Riccardo

    2010-01-01

    Due to the overwhelming volume of published scientific papers, information tools for automated literature analysis are essential to support current biomedical research. We have developed a knowledge extraction tool to help researchers discover useful information that can support their reasoning process. The tool is composed of a search engine based on Text Mining and Natural Language Processing techniques, and an analysis module which processes the search results in order to build annotation similarity networks. We tested our approach on the available knowledge about the genetic mechanism of cardiac diseases, where the target is to find both known and hypothetical relations between specific candidate genes and the trait of interest. We show that the system i) is able to effectively retrieve medical concepts and genes and ii) plays a relevant role in assisting researchers in the formulation and evaluation of novel literature-based hypotheses. PMID:20841825

  16. Automated sea floor extraction from underwater video

    NASA Astrophysics Data System (ADS)

    Kelly, Lauren; Rahmes, Mark; Stiver, James; McCluskey, Mike

    2016-05-01

    Ocean floor mapping using video is a method to simply and cost-effectively record large areas of the seafloor. Obtaining visual and elevation models has noteworthy applications in search and recovery missions. Hazards to navigation are abundant and pose a significant threat to the safety, effectiveness, and speed of naval operations and commercial vessels. This project's objective was to develop a workflow to automatically extract metadata from marine video and create image optical and elevation surface mosaics. Three developments made this possible. First, optical character recognition (OCR) by means of two-dimensional correlation, using a known character set, allowed for the capture of metadata from image files. Second, exploiting the image metadata (i.e., latitude, longitude, heading, camera angle, and depth readings) allowed for the determination of location and orientation of the image frame in mosaic. Image registration improved the accuracy of mosaicking. Finally, overlapping data allowed us to determine height information. A disparity map was created using the parallax from overlapping viewpoints of a given area and the relative height data was utilized to create a three-dimensional, textured elevation map.
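
    The metadata capture step uses optical character recognition "by means of two-dimensional correlation, using a known character set." A minimal sketch of that idea follows: each character cell is matched against a dictionary of glyph templates by normalized cross-correlation and the best-scoring glyph wins. The template set and the segmentation of the overlay into character cells are assumed to be available.

```python
# Minimal sketch of character recognition by 2-D correlation against a known glyph set,
# as described in the abstract; the overlay font and cell segmentation are assumptions.
import numpy as np

def normalized_corr(a, b):
    """Normalized cross-correlation of two equally sized 2-D arrays (1.0 = identical shape)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def recognize_char(cell, templates):
    """cell: 2-D array of one character; templates: dict mapping char -> 2-D array of same shape."""
    return max(templates, key=lambda ch: normalized_corr(cell, templates[ch]))
```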

  17. Automated Extraction of Family History Information from Clinical Notes

    PubMed Central

    Bill, Robert; Pakhomov, Serguei; Chen, Elizabeth S.; Winden, Tamara J.; Carter, Elizabeth W.; Melton, Genevieve B.

    2014-01-01

    Despite increased functionality for obtaining family history in a structured format within electronic health record systems, clinical notes often still contain this information. We developed and evaluated an Unstructured Information Management Application (UIMA)-based natural language processing (NLP) module for automated extraction of family history information with functionality for identifying statements, observations (e.g., disease or procedure), relative or side of family with attributes (i.e., vital status, age of diagnosis, certainty, and negation), and predication (“indicator phrases”), the latter of which was used to establish relationships between observations and family member. The family history NLP system demonstrated F-scores of 66.9, 92.4, 82.9, 57.3, 97.7, and 61.9 for detection of family history statements, family member identification, observation identification, negation identification, vital status, and overall extraction of the predications between family members and observations, respectively. While the system performed well for detection of family history statements and predication constituents, further work is needed to improve extraction of certainty and temporal modifications. PMID:25954443

  18. Docking automation related technology, Phase 2 report

    SciTech Connect

    Jatko, W.B.; Goddard, J.S.; Gleason, S.S.; Ferrell, R.K.

    1995-04-01

    This report summarizes the progress for Phase II of the Docking Automated Related Technologies task component within the Modular Artillery Ammunition Delivery System (MAADS) technology demonstrator of the Future Armored Resupply Vehicle (FARV) project. It also covers development activity at Oak Ridge National Laboratory (ORNL) during the period from January to July 1994.

  19. Automated vasculature extraction from placenta images

    NASA Astrophysics Data System (ADS)

    Almoussa, Nizar; Dutra, Brittany; Lampe, Bryce; Getreuer, Pascal; Wittman, Todd; Salafia, Carolyn; Vese, Luminita

    2011-03-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.

  20. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.
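
    The landmark-chip extraction step applies a spectral dissimilarity metric "such as the spectral information divergence." The sketch below gives the standard form of that divergence for two spectra treated as probability distributions; the epsilon regularization is an implementation detail assumed here.

```python
# Sketch of the spectral information divergence (SID) between two pixel spectra; the spectra
# are normalized to probability distributions, and the epsilon guard is an assumption.
import numpy as np

def spectral_information_divergence(x, y, eps=1e-12):
    p = np.asarray(x, float) + eps
    p /= p.sum()
    q = np.asarray(y, float) + eps
    q /= q.sum()
    # symmetric sum of the two relative entropies
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

print(spectral_information_divergence([0.2, 0.5, 0.3], [0.25, 0.45, 0.3]))  # small value = similar spectra
```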

  1. Automated Boundary-Extraction and Region-Growing Techniques Applied to Solar Magnetograms

    NASA Technical Reports Server (NTRS)

    McAteer, R. T. James; Gallagher, Peter; Ireland, Jack; Young, C Alex

    2005-01-01

    We present an automated approach to active region extraction from full-disc MDI longitudinal magnetograms. This uses a region-growing technique in conjunction with boundary extraction to define a number of enclosed contours as belonging to separate regions of magnetic significance on the solar disc. This provides an objective definition of active regions and areas of plage on the Sun. A number of parameters relating to the flare potential of each region are discussed.
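
    As a hedged sketch of the thresholding-plus-region-growing idea described above (not the authors' exact algorithm or parameter values), the code below keeps magnetogram pixels whose absolute flux density exceeds a threshold, groups them into connected regions, and discards small regions.

```python
# Hedged sketch of region extraction from a longitudinal magnetogram; the flux threshold and
# minimum region size are illustrative values, not those used in the paper.
import numpy as np
from scipy import ndimage

def extract_magnetic_regions(magnetogram, threshold_gauss=50.0, min_pixels=200):
    mask = np.abs(magnetogram) > threshold_gauss              # seed pixels of significant flux
    labels, n = ndimage.label(mask)                           # group into connected regions
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))  # pixel count per region
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return np.where(np.isin(labels, keep), labels, 0)         # labelled map of significant regions
```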

  2. Automated extraction of radiation dose information for CT examinations.

    PubMed

    Cook, Tessa S; Zimmerman, Stefan; Maidment, Andrew D A; Kim, Woojin; Boonn, William W

    2010-11-01

    Exposure to radiation as a result of medical imaging is currently in the spotlight, receiving attention from Congress as well as the lay press. Although scanner manufacturers are moving toward including effective dose information in the Digital Imaging and Communications in Medicine headers of imaging studies, there is a vast repository of retrospective CT data at every imaging center that stores dose information in an image-based dose sheet. As such, it is difficult for imaging centers to participate in the ACR's Dose Index Registry. The authors have designed an automated extraction system to query their PACS archive and parse CT examinations to extract the dose information stored in each dose sheet. First, an open-source optical character recognition program processes each dose sheet and converts the information to American Standard Code for Information Interchange (ASCII) text. Each text file is parsed, and radiation dose information is extracted and stored in a database which can be queried using an existing pathology and radiology enterprise search tool. Using this automated extraction pipeline, it is possible to perform dose analysis on the >800,000 CT examinations in the PACS archive and generate dose reports for all of these patients. It is also possible to more effectively educate technologists, radiologists, and referring physicians about exposure to radiation from CT by generating report cards for interpreted and performed studies. The automated extraction pipeline enables compliance with the ACR's reporting guidelines and greater awareness of radiation dose to patients, thus resulting in improved patient care and management. PMID:21040869
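
    The pipeline OCRs each image-based dose sheet to ASCII text and then parses the text for dose values. The sketch below illustrates the parsing step only; the row layout, field names (series number, CTDIvol, DLP), and regular expression are assumptions about a typical dose sheet, not the authors' actual rules.

```python
# Illustrative parser for OCR'd CT dose-sheet text; the regex and column layout are assumptions.
import re

DOSE_ROW = re.compile(r"^\s*(?P<series>\d+)\s+.*?(?P<ctdivol>\d+\.\d+)\s+(?P<dlp>\d+\.\d+)", re.M)

def parse_dose_sheet(ocr_text):
    """Return a list of dicts with per-series CTDIvol (mGy) and DLP (mGy*cm) values."""
    return [dict(series=int(m["series"]),
                 ctdivol=float(m["ctdivol"]),
                 dlp=float(m["dlp"]))
            for m in DOSE_ROW.finditer(ocr_text)]

sample = "1  Helical Abdomen   12.34   456.78\n2  Helical Pelvis     10.11   321.00"
print(parse_dose_sheet(sample))
```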

  3. Improved Automated Seismic Event Extraction Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Mackey, L.; Kleiner, A.; Jordan, M. I.

    2009-12-01

    Like many organizations engaged in seismic monitoring, the Preparatory Commission for the Comprehensive Test Ban Treaty Organization collects and processes seismic data from a large network of sensors. This data is continuously transmitted to a central data center, and bulletins of seismic events are automatically extracted. However, as for many such automated systems at present, the inaccuracy of this extraction necessitates substantial human analyst review effort. A significant opportunity for improvement thus lies in the fact that these systems currently fail to fully utilize the valuable repository of historical data provided by prior analyst reviews. In this work, we present the results of the application of machine learning approaches to several fundamental sub-tasks in seismic event extraction. These methods share as a common theme the use of historical analyst-reviewed bulletins as ground truth from which they extract relevant patterns to accomplish the desired goals. For instance, we demonstrate the effectiveness of classification and ranking methods for the identification of false events -- that is, those which will be invalidated and discarded by analysts -- in automated bulletins. We also show gains in the accuracy of seismic phase identification via the use of classification techniques to automatically assign seismic phase labels to station detections. Furthermore, we examine the potential of historical association data to inform the direct association of new signal detections with their corresponding seismic events. Empirical results are based upon parametric historical seismic detection and event data received from the Preparatory Commission for the Comprehensive Test Ban Treaty Organization.

  4. Automated RNA Extraction and Purification for Multiplexed Pathogen Detection

    SciTech Connect

    Bruzek, Amy K.; Bruckner-Lea, Cindy J.

    2005-01-01

    Pathogen detection has become an extremely important part of our nation's defense in this post-9/11 world, where the threat of bioterrorist attacks is a grim reality. When a biological attack takes place, response time is critical. The faster the biothreat is assessed, the faster countermeasures can be put in place to protect the health of the general public. Today some of the most widely used methods for detecting pathogens are either time consuming or not reliable [1]. Therefore, a method for detecting multiple pathogens that is inherently reliable, rapid, automated and field portable is needed. To that end, we are developing automated fluidics systems for the recovery, cleanup, and direct labeling of community RNA from suspect environmental samples. The advantage of using RNA for detection is that there are multiple copies of mRNA in a cell, whereas there are normally only one or two copies of DNA [2]. Because there are multiple copies of mRNA in a cell for highly expressed genes, no amplification of the genetic material may be necessary, and thus rapid and direct detection of only a few cells may be possible [3]. This report outlines the development of both manual and automated methods for the extraction and purification of mRNA. The methods were evaluated using cell lysates from Escherichia coli 25922 (nonpathogenic), Salmonella typhimurium (pathogenic), and Shigella spp. (pathogenic). Automated RNA purification was achieved using a custom sequential injection fluidics system consisting of a syringe pump, a multi-port valve and a magnetic capture cell. mRNA was captured using silica-coated superparamagnetic beads that were trapped in the tubing by a rare earth magnet. RNA was detected by gel electrophoresis and/or by hybridization of the RNA to microarrays. The versatility of the fluidics systems and the ability to automate these systems allow for quick and easy processing of samples and eliminate the need for an experienced operator.

  5. Arduino-based automation of a DNA extraction system.

    PubMed

    Kim, Kyung-Won; Lee, Mi-So; Ryu, Mun-Ho; Kim, Jong-Won

    2015-01-01

    There have been many studies to detect infectious diseases with the molecular genetic method. This study presents an automation process for a DNA extraction system based on microfluidics and magnetic bead, which is part of a portable molecular genetic test system. This DNA extraction system consists of a cartridge with chambers, syringes, four linear stepper actuators, and a rotary stepper actuator. The actuators provide a sequence of steps in the DNA extraction process, such as transporting, mixing, and washing for the gene specimen, magnetic bead, and reagent solutions. The proposed automation system consists of a PC-based host application and an Arduino-based controller. The host application compiles a G code sequence file and interfaces with the controller to execute the compiled sequence. The controller executes stepper motor axis motion, time delay, and input-output manipulation. It drives the stepper motor with an open library, which provides a smooth linear acceleration profile. The controller also provides a homing sequence to establish the motor's reference position, and hard limit checking to prevent any over-travelling. The proposed system was implemented and its functionality was investigated, especially regarding positioning accuracy and velocity profile. PMID:26409535
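
    The abstract describes a PC-based host that compiles a G-code sequence file and hands it to the Arduino controller for execution. The sketch below shows one plausible host-side loop using pyserial; the serial port, baud rate, comment syntax, and "ok" acknowledgment protocol are assumptions, not details taken from the paper.

```python
# Hedged sketch of the host-side sender for a compiled G-code-like sequence; the port name,
# baud rate, and the "ok" handshake are hypothetical, not taken from the paper.
import serial  # pyserial

def run_sequence(path, port="/dev/ttyACM0", baud=115200):
    with serial.Serial(port, baud, timeout=10) as link, open(path) as f:
        for line in f:
            cmd = line.split(";")[0].strip()       # drop comments and blank lines
            if not cmd:
                continue
            link.write((cmd + "\n").encode())      # e.g. "G1 X12.5 F300" or "G4 P2000"
            ack = link.readline().decode().strip()
            if ack != "ok":                        # hypothetical acknowledgment from the controller
                raise RuntimeError(f"controller rejected {cmd!r}: {ack!r}")
```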

  6. Automated tools for phenotype extraction from medical records.

    PubMed

    Yetisgen-Yildiz, Meliha; Bejan, Cosmin A; Vanderwende, Lucy; Xia, Fei; Evans, Heather L; Wurfel, Mark M

    2013-01-01

    Clinical research studying critical illness phenotypes relies on the identification of clinical syndromes defined by consensus definitions. Historically, identifying phenotypes has required manual chart review, a time and resource intensive process. The overall research goal of the Critical Illness PHenotype ExtRaction (deCIPHER) project is to develop automated approaches based on natural language processing and machine learning that accurately identify phenotypes from the EMR. We chose pneumonia as our first critical illness phenotype and conducted preliminary experiments to explore the problem space. In this abstract, we outline the tools we built for processing clinical records, present our preliminary findings for pneumonia extraction, and describe future steps. PMID:24303281

  7. Automated labeling of bibliographic data extracted from biomedical online journals

    NASA Astrophysics Data System (ADS)

    Kim, Jongwoo; Le, Daniel X.; Thoma, George R.

    2003-01-01

    A prototype system has been designed to automate the extraction of bibliographic data (e.g., article title, authors, abstract, affiliation and others) from online biomedical journals to populate the National Library of Medicine's MEDLINE database. This paper describes a key module in this system: the labeling module that employs statistics and fuzzy rule-based algorithms to identify segmented zones in an article's HTML pages as specific bibliographic data. Results from experiments conducted with 1,149 medical articles from forty-seven journal issues are presented.

  8. Feature extraction from Doppler ultrasound signals for automated diagnostic systems.

    PubMed

    Ubeyli, Elif Derya; Güler, Inan

    2005-11-01

    This paper presented an assessment of the feature extraction methods used in automated diagnosis of arterial diseases. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Different feature extraction methods were used to obtain feature vectors from ophthalmic and internal carotid arterial Doppler signals. In addition, the problem of selecting relevant features, among those available, for the classification of Doppler signals was addressed. Multilayer perceptron neural networks (MLPNNs) with different inputs (feature vectors) were used for diagnosis of ophthalmic and internal carotid arterial diseases. The assessment of the feature extraction methods was performed by taking into consideration the performances of the MLPNNs. The performances of the MLPNNs were evaluated by the convergence rates (number of training epochs) and the total classification accuracies. Finally, some conclusions were drawn concerning the efficiency of the discrete wavelet transform as a feature extraction method for the diagnosis of ophthalmic and internal carotid arterial diseases. PMID:16278106
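
    As a hedged illustration of discrete wavelet transform feature extraction of the kind assessed above, the sketch below decomposes a Doppler signal with PyWavelets and uses simple subband statistics as the feature vector for a classifier; the wavelet family, decomposition level, and chosen statistics are assumptions, not the paper's settings.

```python
# Hedged sketch of DWT feature extraction for a 1-D Doppler signal; 'db4' and level=4 are
# illustrative choices, not the settings used in the paper.
import numpy as np
import pywt

def dwt_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA_n, cD_n, ..., cD_1]
    feats = []
    for band in coeffs:
        # simple per-subband statistics as features for an MLP classifier
        feats.extend([np.mean(np.abs(band)), np.std(band), np.max(np.abs(band))])
    return np.array(feats)
```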

  9. An automated approach for extracting Barrier Island morphology from digital elevation models

    NASA Astrophysics Data System (ADS)

    Wernette, Phillipe; Houser, Chris; Bishop, Michael P.

    2016-06-01

    The response and recovery of a barrier island to extreme storms depends on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we can first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest and dune heel), and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach to extracting dune toe, dune crest, and dune heel based upon relative relief is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and the identification of features is based on the calculated RR values. The RR approach out-performed contemporary approaches and represents a fast objective means to define important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest, or heel are spatially continuous, which is important because dune morphology is likely naturally variable alongshore.
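
    A minimal sketch of the relative relief idea follows, assuming RR is computed as (z - local minimum) / (local maximum - local minimum) within a moving window and averaged over several window sizes; the specific window sizes are illustrative, not those used in the paper.

```python
# Sketch of multi-scale relative relief (RR) on a DEM, assuming the windowed
# (z - min)/(max - min) form; window sizes are illustrative.
import numpy as np
from scipy import ndimage

def mean_relative_relief(dem, window_sizes=(5, 11, 21), eps=1e-9):
    """RR = (z - local_min) / (local_max - local_min), averaged across window sizes."""
    rr_layers = []
    for w in window_sizes:
        zmin = ndimage.minimum_filter(dem, size=w)
        zmax = ndimage.maximum_filter(dem, size=w)
        rr_layers.append((dem - zmin) / (zmax - zmin + eps))
    return np.mean(rr_layers, axis=0)
```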

  10. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    NASA Astrophysics Data System (ADS)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20km of coastline near Duck, North Carolina which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
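
    The abstract notes that surface curvature along each cross-shore transect is used to locate the dune toe. The sketch below illustrates one simple version of that idea, picking the point of strongest concave-up curvature seaward of the crest; the sign convention, grid spacing, and crest definition are assumptions rather than the authors' exact procedure.

```python
# Illustrative dune-toe picker on a single cross-shore transect of gridded elevations;
# the curvature convention and crest definition are assumptions.
import numpy as np

def dune_toe_index(z, dx=0.5):
    """z: elevations along one transect (seaward to landward); dx: grid spacing in metres."""
    dz = np.gradient(z, dx)
    curvature = np.gradient(dz, dx)              # second derivative of elevation
    crest = int(np.argmax(z))                    # highest point taken as the dune crest
    if crest == 0:
        return 0
    return int(np.argmax(curvature[:crest]))     # strongest concave-up bend seaward of the crest
```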

  11. Automated extraction of chemical structure information from digital raster images

    PubMed Central

    Park, Jungkap; Rosania, Gus R; Shedden, Kerby A; Nguyen, Mandee; Lyu, Naesung; Saitou, Kazuhiro

    2009-01-01

    Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed. But their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews for these systems and also report our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams in research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface (and the algorithm parameters can be readily changed) to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy in extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links to scientific research

  12. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  13. Automated Extraction of Substance Use Information from Clinical Texts

    PubMed Central

    Wang, Yan; Chen, Elizabeth S.; Pakhomov, Serguei; Arsoniadis, Elliot; Carter, Elizabeth W.; Lindemann, Elizabeth; Sarkar, Indra Neil; Melton, Genevieve B.

    2015-01-01

    Within clinical discourse, social history (SH) includes important information about substance use (alcohol, drug, and nicotine use) as key risk factors for disease, disability, and mortality. In this study, we developed and evaluated a natural language processing (NLP) system for automated detection of substance use statements and extraction of substance use attributes (e.g., temporal and status) based on Stanford Typed Dependencies. The developed NLP system leveraged linguistic resources and domain knowledge from a multi-site social history study, Propbank and the MiPACQ corpus. The system attained F-scores of 89.8, 84.6 and 89.4 respectively for alcohol, drug, and nicotine use statement detection, as well as average F-scores of 82.1, 90.3, 80.8, 88.7, 96.6, and 74.5 respectively for extraction of attributes. Our results suggest that NLP systems can achieve good performance when augmented with linguistic resources and domain knowledge when applied to a wide breadth of substance use free text clinical notes. PMID:26958312

  14. Automated DSM Extraction from UAV Images and Performance Analysis

    NASA Astrophysics Data System (ADS)

    Rhee, S.; Kim, T.

    2015-08-01

    As technology evolves, unmanned aerial vehicle (UAV) imagery is being used for applications ranging from simple image acquisition to complicated tasks such as 3D spatial information extraction. Spatial information is usually provided in the form of a DSM or point cloud. It is important to generate very dense tie points automatically from stereo images. In this paper, we tried to apply a stereo image-based matching technique developed for satellite/aerial images to UAV images, to propose processing steps for automated DSM generation, and to analyse the possibility of DSM generation. For DSM generation from UAV images, firstly, exterior orientation parameters (EOPs) for each dataset were adjusted. Secondly, optimum matching pairs were determined. Thirdly, stereo image matching was performed for each pair. The developed matching algorithm is based on grey-level correlation on pixels applied along epipolar lines. Finally, the extracted matching results were merged into one result and the final DSM was made. The generated DSM was compared with a reference DSM from Lidar. Overall accuracy was 1.5 m in NMAD. However, several problems have to be solved in the future, including obtaining precise EOPs and handling occlusion and image blurring. A more effective interpolation technique also needs to be developed.

  15. Automated Tract Extraction via Atlas Based Adaptive Clustering

    PubMed Central

    Tunç, Birkan; Parker, William A.; Ingalhalikar, Madhura; Verma, Ragini

    2014-01-01

    Advancements in imaging protocols such as the high angular resolution diffusion-weighted imaging (HARDI) and in tractography techniques are expected to cause an increase in the tract-based analyses. Statistical analyses over white matter tracts can contribute greatly towards understanding structural mechanisms of the brain since tracts are representative of the connectivity pathways. The main challenge with tract-based studies is the extraction of the tracts of interest in a consistent and comparable manner over a large group of individuals without drawing the inclusion and exclusion regions of interest. In this work, we design a framework for automated extraction of white matter tracts. The framework introduces three main components, namely a connectivity based fiber representation, a fiber clustering atlas, and a clustering approach called Adaptive Clustering. The fiber representation relies on the connectivity signatures of fibers to establish an easy correspondence between different subjects. A group-wise clustering of these fibers that are represented by the connectivity signatures is then used to generate a fiber bundle atlas. Finally, Adaptive Clustering incorporates the previously generated clustering atlas as a prior, to cluster the fibers of a new subject automatically. Experiments on the HARDI scans of healthy individuals acquired repeatedly, demonstrate the applicability, the reliability and the repeatability of our approach in extracting white matter tracts. By alleviating the seed region selection or the inclusion/exclusion ROI drawing requirements that are usually handled by trained radiologists, the proposed framework expands the range of possible clinical applications and establishes the ability to perform tract-based analyses with large samples. PMID:25134977

  16. Brain MAPS: an automated, accurate and robust brain extraction technique using a template library

    PubMed Central

    Leung, Kelvin K.; Barnes, Josephine; Modat, Marc; Ridgway, Gerard R.; Bartlett, Jonathan W.; Fox, Nick C.; Ourselin, Sébastien

    2011-01-01

    Whole brain extraction is an important pre-processing step in neuro-image analysis. Manual or semi-automated brain delineations are labour-intensive and thus not desirable in large studies, meaning that automated techniques are preferable. The accuracy and robustness of automated methods are crucial because human expertise may be required to correct any sub-optimal results, which can be very time consuming. We compared the accuracy of four automated brain extraction methods: Brain Extraction Tool (BET), Brain Surface Extractor (BSE), Hybrid Watershed Algorithm (HWA) and a Multi-Atlas Propagation and Segmentation (MAPS) technique we have previously developed for hippocampal segmentation. The four methods were applied to extract whole brains from 682 1.5T and 157 3T T1-weighted MR baseline images from the Alzheimer’s Disease Neuroimaging Initiative database. Semi-automated brain segmentations with manual editing and checking were used as the gold-standard to compare with the results. The median Jaccard index of MAPS was higher than those of HWA, BET and BSE in 1.5T and 3T scans (p < 0.05, all tests), and the 1st-99th centile range of the Jaccard index of MAPS was smaller than those of HWA, BET and BSE in 1.5T and 3T scans (p < 0.05, all tests). HWA and MAPS were found to be best at including all brain tissues (median false negative rate ≤ 0.010% for 1.5T scans and ≤ 0.019% for 3T scans, both methods). The median Jaccard index of MAPS was similar in 1.5T and 3T scans, whereas those of BET, BSE and HWA were higher in 1.5T scans than 3T scans (p < 0.05, all tests). We found that the diagnostic group had a small effect on the median Jaccard index of all four methods. In conclusion, MAPS had relatively high accuracy and low variability compared to HWA, BET and BSE in MR scans with and without atrophy. PMID:21195780
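
    For reference, the Jaccard index used throughout the comparison above is the ratio of the intersection to the union of two segmentations; a minimal sketch for boolean voxel masks follows.

```python
# Minimal sketch of the Jaccard index between an automated brain mask and the gold-standard
# mask, assuming both are boolean voxel arrays of equal shape.
import numpy as np

def jaccard_index(mask_a, mask_b):
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union if union else 1.0
```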

  17. ACIS Extract: A Chandra/ACIS Tool for Automated Point Source Extraction and Spectral Fitting

    NASA Astrophysics Data System (ADS)

    Townsley, L.; Broos, P.; Bauer, F.; Getman, K.

    2003-03-01

    ACIS Extract (AE) is an IDL program that assists the observer in performing the many tasks involved in analyzing the spectra of large numbers of point sources observed with the ACIS instrument on Chandra. Notably, all tasks are performed in a context that may include multiple observations of the field. Features of AE and its several accessory tools include refining the accuracy of source positions, defining extraction regions based on the PSF of each source in each observation, generating single-observation and composite ARFs and RMFs, applying energy-dependent aperture corrections to the ARFs, computing light curves and K-S tests for source variability, automated broad-band photometry, automated spectral fitting and review of fitting results, and compilation of results into LaTeX tables. A variety of interactive plots are produced showing various source properties across the catalog. This poster details the capabilities of the package and shows example output. The code and a detailed users' manual are available to the community at http://www.astro.psu.edu/xray/docs/TARA/ae_users_guide.html. Support for this effort was provided by NASA contract NAS8-38252 to Gordon Garmire, the ACIS Principal Investigator.

  18. A COMPARISON OF AUTOMATED AND TRADITIONAL METHODS FOR THE EXTRACTION OF ARSENICALS FROM FISH

    EPA Science Inventory

    An automated extractor employing accelerated solvent extraction (ASE) has been compared with a traditional sonication method for the extraction of arsenicals from fish tissue. Four different species of fish and a standard reference material, DORM-2, were subjected t...

  19. AUTOMATED SOLID PHASE EXTRACTION GC/MS FOR ANALYSIS OF SEMIVOLATILES IN WATER AND SEDIMENTS

    EPA Science Inventory

    Data is presented on the development of a new automated system combining solid phase extraction (SPE) with GC/MS spectrometry for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line sampl...

  20. The Overview of Entity Relation Extraction Methods

    NASA Astrophysics Data System (ADS)

    Cheng, Xian-Yi; Chen, Xiao-Hong; Hua, Jin

    Information extraction can be defined as the task of extracting information about specified events or facts and storing it in a database for users to query. Only if the relationships between the various entities are correct can the database be populated correctly. Entity relation extraction has therefore become a key technology in information extraction systems. In this paper, we analyze the current state of entity relation extraction methods and identify several open problems in this field.

  1. Towards automated support for extraction of reusable components

    NASA Technical Reports Server (NTRS)

    Abd-El-hafiz, S. K.; Basili, Victor R.; Caldiera, Gianluigi

    1992-01-01

    A cost effective introduction of software reuse techniques requires the reuse of existing software developed in many cases without aiming at reusability. This paper discusses the problems related to the analysis and reengineering of existing software in order to reuse it. We introduce a process model for component extraction and focus on the problem of analyzing and qualifying software components which are candidates for reuse. A prototype tool for supporting the extraction of reusable components is presented. One of the components of this tool aids in understanding programs and is based on the functional model of correctness. It can assist software engineers in the process of finding correct formal specifications for programs. A detailed description of this component and an example to demonstrate a possible operational scenario are given.

  2. Towards automated support for extraction of reusable components

    NASA Technical Reports Server (NTRS)

    Abd-El-hafiz, S. K.; Basili, V. R.; Caldier, G.

    1991-01-01

    A cost effective introduction of software reuse techniques requires the reuse of existing software developed in many cases without aiming at reusability. This paper discusses the problems related to the analysis and reengineering of existing software in order to reuse it. We introduce a process model for component extraction and focus on the problem of analyzing and qualifying software components which are candidates for reuse. A prototype tool for supporting the extraction of reusable components is presented. One of the components of this tool aids in understanding programs and is based on the functional model of correctness. It can assist software engineers in the process of finding correct formal specifications for programs. A detailed description of this component and an example to demonstrate a possible operational scenario are given.

  3. Application and evaluation of automated methods to extract neuroanatomical connectivity statements from free text

    PubMed Central

    Pavlidis, Paul

    2012-01-01

    Motivation: Automated annotation of neuroanatomical connectivity statements from the neuroscience literature would enable accessible and large-scale connectivity resources. Unfortunately, the connectivity findings are not formally encoded and occur as natural language text. This hinders aggregation, indexing, searching and integration of the reports. We annotated a set of 1377 abstracts for connectivity relations to facilitate automated extraction of connectivity relationships from neuroscience literature. We tested several baseline measures based on co-occurrence and lexical rules. We compare results from seven machine learning methods adapted from the protein interaction extraction domain that employ part-of-speech, dependency and syntax features. Results: Co-occurrence based methods provided high recall with weak precision. The shallow linguistic kernel recalled 70.1% of the sentence-level connectivity statements at 50.3% precision. Owing to its speed and simplicity, we applied the shallow linguistic kernel to a large set of new abstracts. To evaluate the results, we compared 2688 extracted connections with the Brain Architecture Management System (an existing database of rat connectivity). The extracted connections were connected in the Brain Architecture Management System at a rate of 63.5%, compared with 51.1% for co-occurring brain region pairs. We found that precision increases with the recency and frequency of the extracted relationships. Availability and implementation: The source code, evaluations, documentation and other supplementary materials are available at http://www.chibi.ubc.ca/WhiteText. Contact: paul@chibi.ubc.ca Supplementary information: Supplementary data are available at Bioinformatics Online. PMID:22954628
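
    As an illustration of the co-occurrence baseline mentioned above, the sketch below flags any sentence mentioning two known brain regions as a candidate connectivity statement. The region lexicon and function names are hypothetical stand-ins; this is not the WhiteText code.

      # Minimal co-occurrence baseline for connectivity extraction (illustrative only).
      # Any sentence mentioning two known brain regions is flagged as a candidate.
      from itertools import combinations

      # Hypothetical region lexicon; a real system would use a curated vocabulary.
      REGIONS = {"thalamus", "hippocampus", "amygdala", "prefrontal cortex"}

      def candidate_connections(sentence):
          """Return all pairs of known regions co-occurring in one sentence."""
          text = sentence.lower()
          found = [r for r in REGIONS if r in text]
          return list(combinations(sorted(found), 2))

      if __name__ == "__main__":
          s = "Projections from the thalamus terminate in the prefrontal cortex."
          print(candidate_connections(s))   # [('prefrontal cortex', 'thalamus')]

    As the reported numbers suggest, such a baseline recalls many true connections but also produces many false positives, which is what the kernel-based classifiers are meant to reduce.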

  4. Selecting a Relational Database Management System for Library Automation Systems.

    ERIC Educational Resources Information Center

    Shekhel, Alex; O'Brien, Mike

    1989-01-01

    Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)

  5. Model-based automated extraction of microtubules from electron tomography volume.

    PubMed

    Jiang, Ming; Ji, Qiang; McEwen, Bruce F

    2006-07-01

    We propose a model-based automated approach to extracting microtubules from noisy electron tomography volumes. Our approach consists of volume enhancement, microtubule localization, and boundary segmentation to exploit the unique geometric and photometric properties of microtubules. The enhancement starts with an anisotropic invariant wavelet transform to enhance the microtubules globally, followed by a three-dimensional (3-D) tube-enhancing filter based on the Weingarten matrix to further accentuate the tubular structures locally. The enhancement ends with a modified coherence-enhancing diffusion to complete the interruptions along the microtubules. The microtubules are then localized with a centerline extraction algorithm adapted for tubular objects. To perform segmentation, we modify and extend the active shape model method in a novel way. We first use 3-D local surface enhancement to characterize the microtubule boundary and improve shape searching by relating the boundary strength to the weight matrix of the searching error. We then integrate the active shape model with Kalman filtering to exploit the longitudinal smoothness along the microtubules. The segmentation improved in this way is robust against missing boundaries and outliers that are often present in the tomography volume. Experimental results demonstrate that our automated method produces results close to those of the manual process while using only a fraction of the time. PMID:16871731
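
    The tube-enhancing idea can be illustrated with a much simpler two-dimensional Hessian-based ridge measure. This is only a rough analogue, assuming bright tubes on a dark background; it is not the paper's 3-D Weingarten-matrix filter.

      # Rough 2-D analogue of a tube-enhancing filter (illustrative only; the paper
      # uses a 3-D filter built from the Weingarten matrix, not this measure).
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def tubularity(image, sigma=2.0):
          """Crude ridge measure: strong curvature across a bright tube, weak along it."""
          ixx = gaussian_filter(image, sigma, order=(0, 2))   # second derivative along columns
          iyy = gaussian_filter(image, sigma, order=(2, 0))   # second derivative along rows
          ixy = gaussian_filter(image, sigma, order=(1, 1))
          half_trace = (ixx + iyy) / 2.0
          root = np.sqrt(((ixx - iyy) / 2.0) ** 2 + ixy ** 2)
          lam_lo = half_trace - root    # most negative eigenvalue: across-tube curvature
          lam_hi = half_trace + root    # remaining eigenvalue: along-tube curvature
          # Respond where the across-tube curvature dominates (ridge, not blob).
          response = np.where(lam_lo < 0, np.abs(lam_lo) - np.abs(lam_hi), 0.0)
          return np.clip(response, 0.0, None)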

  6. Automating Nuclear-Safety-Related SQA Procedures with Custom Applications

    SciTech Connect

    Freels, James D.

    2016-01-01

    Nuclear safety-related procedures are rigorous for good reason. Small design mistakes can quickly turn into unwanted failures. Researchers at Oak Ridge National Laboratory worked with COMSOL to define a simulation app that automates the software quality assurance (SQA) verification process and provides results in less than 24 hours.

  7. Biomedical Relation Extraction: From Binary to Complex

    PubMed Central

    Zhong, Dayou

    2014-01-01

    Biomedical relation extraction aims to uncover high-quality relations from the life science literature with high accuracy and efficiency. Early biomedical relation extraction tasks focused on capturing binary relations, such as protein-protein interactions, which are crucial for virtually every process in a living cell. Information about these interactions provides the foundations for new therapeutic approaches. In recent years, interest has shifted to the extraction of complex relations such as biomolecular events. Complex relations go beyond binary relations and involve more than two arguments; they may also take another relation as an argument. In this paper, we conduct a thorough survey of the research in biomedical relation extraction. We first present a general framework for biomedical relation extraction and then discuss the approaches proposed for binary and complex relation extraction, with a focus on the latter since it is a much more difficult task than binary relation extraction. Finally, we discuss the challenges facing complex relation extraction and outline possible solutions and future directions. PMID:25214883

  8. Disposable and removable nucleic acid extraction and purification cartridges for automated flow-through systems

    DOEpatents

    Regan, John Frederick

    2014-09-09

    Removable cartridges are used on automated flow-through systems for the purpose of extracting and purifying genetic material from complex matrices. Different types of cartridges are paired with specific automated protocols to concentrate, extract, and purify pathogenic or human genetic material. Their flow-through nature allows large quantities of sample to be processed. Matrices may be filtered using size exclusion and/or affinity filters to concentrate the pathogen of interest. Lysed material is ultimately passed through a filter to remove the insoluble material before the soluble genetic material is delivered past a silica-like membrane that binds the genetic material, where it is washed, dried, and eluted. Cartridges are inserted into the housing areas of flow-through automated instruments, which are equipped with sensors to ensure proper placement and usage of the cartridges. Properly inserted cartridges create fluid- and air-tight seals with the flow lines of an automated instrument.

  9. Spatial resolution requirements for automated cartographic road extraction

    USGS Publications Warehouse

    Benjamin, S.; Gaydos, L.

    1990-01-01

    Ground resolution requirements for detection and extraction of road locations in a digitized large-scale photographic database were investigated. A color infrared photograph of Sunnyvale, California was scanned, registered to a map grid, and spatially degraded to 1- to 5-metre resolution pixels. Road locations in each data set were extracted using a combination of image processing and CAD programs. These locations were compared to a photointerpretation of road locations to determine a preferred pixel size for the extraction method. Based on road pixel omission error computations, a 3-metre pixel resolution appears to be the best choice for this extraction method. -Authors

  10. Automated serial extraction of DNA and RNA from biobanked tissue specimens

    PubMed Central

    2013-01-01

    Background With increasing biobanking of biological samples, methods for large scale extraction of nucleic acids are in demand. The lack of such techniques designed for extraction from tissues results in a bottleneck in downstream genetic analyses, particularly in the field of cancer research. We have developed an automated procedure for tissue homogenization and extraction of DNA and RNA into separate fractions from the same frozen tissue specimen. A purpose developed magnetic bead based technology to serially extract both DNA and RNA from tissues was automated on a Tecan Freedom Evo robotic workstation. Results 864 fresh-frozen human normal and tumor tissue samples from breast and colon were serially extracted in batches of 96 samples. Yields and quality of DNA and RNA were determined. The DNA was evaluated in several downstream analyses, and the stability of RNA was determined after 9 months of storage. The extracted DNA performed consistently well in processes including PCR-based STR analysis, HaloPlex selection and deep sequencing on an Illumina platform, and gene copy number analysis using microarrays. The RNA has performed well in RT-PCR analyses and maintains integrity upon storage. Conclusions The technology described here enables the processing of many tissue samples simultaneously with a high quality product and a time and cost reduction for the user. This reduces the sample preparation bottleneck in cancer research. The open automation format also enables integration with upstream and downstream devices for automated sample quantitation or storage. PMID:23957867

  11. Automated Algorithm for Extraction of Wetlands from IRS Resourcesat Liss III Data

    NASA Astrophysics Data System (ADS)

    Subramaniam, S.; Saxena, M.

    2011-09-01

    Wetlands play a significant role in maintaining the ecological balance of both biotic and abiotic life in coastal and inland environments. Understanding their occurrence and the spatial extent of change in the wetland environment is therefore very important, and these can be monitored using satellite remote sensing techniques. The extraction of wetland features using remote sensing has so far been carried out with visual or hybrid digital analysis techniques, which are time consuming. To monitor wetlands and their features at the national or state level, an automated technique for the extraction of wetland features is needed. A knowledge-based algorithm using a hierarchical decision tree approach has been developed for the automated extraction of wetland features such as surface water spread, wet area, turbidity and wet vegetation (including aquatic vegetation) for the pre- and post-monsoon periods. The results obtained for Chhattisgarh, India using the automated technique were found to be satisfactory when compared with the hybrid digital/visual analysis technique.
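
    A hierarchical decision-tree rule set of this kind can be sketched with common spectral indices. The NDWI/NDVI thresholds, class labels, and function name below are illustrative assumptions, not the published rules for Resourcesat LISS-III data.

      # Hedged sketch of a hierarchical decision-tree wetland classifier using NDWI/NDVI.
      import numpy as np

      def classify_wetland(green, red, nir):
          """Label each pixel: 0 other, 1 surface water, 2 wet/aquatic vegetation, 3 wet area."""
          eps = 1e-6
          ndwi = (green - nir) / (green + nir + eps)   # high over open water
          ndvi = (nir - red) / (nir + red + eps)       # high over vegetation
          label = np.zeros(green.shape, dtype=np.uint8)
          label[ndwi > 0.3] = 1                                    # surface water spread
          label[(ndvi > 0.4) & (ndwi > 0.0) & (label == 0)] = 2    # wet or aquatic vegetation
          label[(ndwi > 0.0) & (label == 0)] = 3                   # residual wet area
          return label

    In practice the thresholds would be tuned per season (pre- and post-monsoon) and per sensor calibration, which is what the knowledge base in the published algorithm encodes.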

  12. The evaluation of a concept for a Canadian-made automated multipurpose materials extraction facility

    NASA Astrophysics Data System (ADS)

    Kleinberg, H.

    Long-term habitation of space will eventually require use of off-Earth resources to reduce long-term program costs and risks to personnel and equipment due to launch from Earth. Extraction of oxygen from lunar soil is a prime example. Processes currently under study for such activities focus on the extraction of only one element / chemical from one type of soil on one world, and they produce large amounts of waste material. This paper presents the results of an examination by Spar Aerospace of a plasma separation concept as part of a materials extraction facility that might be used in space. Such a process has the far-reaching potential for extracting any or all of the elements available in soil samples, extraction of oxygen from lunar soil being the near-term application. Plasma separation has the potential for a 100 percent yield of extracted elements from input samples, and the versatility to be used on many non-terrestrial sites for the extraction of available elemental resources. The development of new materials extraction processes for each world would thus be eliminated. Such a facility could also reduce the generation of waste products by decomposing soil samples into pure, stable elements. Robotics, automation, and a plasma separation facility could be used to gather, prepare, process, separate, collect and ship the available chemical elements. The following topics are discussed: automated soil-gathering using robotics; automated soil pre-processing; plasma dissociation and separation of soil, and collection of sorted elements in an automated process; containment of gases, storage of pure elements, metals; and automated shipment of materials to a manned base, or pick-up site.

  13. Automated multisyringe stir bar sorptive extraction using robust montmorillonite/epoxy-coated stir bars.

    PubMed

    Ghani, Milad; Saraji, Mohammad; Maya, Fernando; Cerdà, Víctor

    2016-05-01

    Herein we present a simple, rapid and low cost strategy for the preparation of robust stir bar coatings based on the combination of montmorillonite with epoxy resin. The composite stir bar was implemented in a novel automated multisyringe stir bar sorptive extraction system (MS-SBSE), and applied to the extraction of four chlorophenols (4-chlorophenol, 2,4-dichlorophenol, 2,4,6-trichlorophenol and pentachlorophenol) as model compounds, followed by high performance liquid chromatography-diode array detection. The different experimental parameters of the MS-SBSE, such as sample volume, selection of the desorption solvent, desorption volume, desorption time, sample solution pH, salt effect and extraction time were studied. Under the optimum conditions, the detection limits were between 0.02 and 0.34μgL(-1). Relative standard deviations (RSD) of the method for the analytes at 10μgL(-1) concentration level ranged from 3.5% to 4.1% (as intra-day RSD) and from 3.9% to 4.3% (as inter-day RSD at 50μgL(-1) concentration level). Batch-to-batch reproducibility for three different stir bars was 4.6-5.1%. The enrichment factors were between 30 and 49. In order to investigate the capability of the developed technique for real sample analysis, well water, wastewater and leachates from a solid waste treatment plant were satisfactorily analyzed. PMID:27062720

  14. Comparison of an automated nucleic acid extraction system with the column-based procedure

    PubMed Central

    Hinz, Rebecca; Hagen, Ralf Matthias

    2015-01-01

    Here, we assessed the extraction efficiency of a deployable bench-top nucleic acid extractor EZ1 in comparison to the column-based approach with complex sample matrices. A total of 48 EDTA blood samples and 81 stool samples were extracted by EZ1 automated extraction and the column-based QIAamp DNA Mini Kit. Blood sample extractions were assessed by two real-time malaria PCRs, while stool samples were analyzed by six multiplex real-time PCR assays targeting bacterial, viral, and parasitic stool pathogens. Inhibition control PCR testing was performed as well. In total, 147 concordant and 13 discordant pathogen-specific PCR results were obtained. The latter comprised 11 positive results after column-based extraction only and two positive results after EZ1 extraction only. EZ1 extraction showed a higher frequency of inhibition. This phenomenon was, however, inconsistent for the different PCR schemes. In case of concordant PCR results, relevant differences of cycle threshold numbers for the compared extraction schemes were not observed. Switches from well-established column-based extraction to extraction with the automated EZ1 system do not lead to a relevantly reduced yield of target DNA when complex sample matrices are used. If sample inhibition is observed, column-based extraction from another sample aliquot may be considered. PMID:25883797

  15. Artificial intelligence issues related to automated computing operations

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1989-01-01

    Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.

  16. Automated microfluidic DNA/RNA extraction with both disposable and reusable components

    NASA Astrophysics Data System (ADS)

    Kim, Jungkyu; Johnson, Michael; Hill, Parker; Sonkul, Rahul S.; Kim, Jongwon; Gale, Bruce K.

    2012-01-01

    An automated microfluidic nucleic acid extraction system was fabricated with a multilayer polydimethylsiloxane (PDMS) structure that consists of sample wells, microvalves, a micropump and a disposable microfluidic silica cartridge. Both the microvalves and micropump structures were fabricated in a single layer and are operated pneumatically using a 100 µm PDMS membrane. To fabricate the disposable microfluidic silica cartridge, two-cavity structures were made in a PDMS replica to fit the stacked silica membranes. A handheld controller for the microvalves and pumps was developed to enable system automation. With purified ribonucleic acid (RNA), whole blood and E. coli samples, the automated microfluidic nucleic acid extraction system was validated with a guanidine-based solid phase extraction procedure. An extraction efficiency of ~90% for deoxyribonucleic acid (DNA) and ~54% for RNA was obtained in 12 min from whole blood and E. coli samples, respectively. In addition, the same quantity and quality of extracted DNA was confirmed by polymerase chain reaction (PCR) amplification. The PCR also showed the appropriate amplification and melting profiles. Automated, programmable fluid control and physical separation of the reusable components and the disposable components significantly decrease the assay time and manufacturing cost and increase the flexibility and compatibility of the system with downstream components.

  17. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
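
    The PCA-plus-classifier stages can be illustrated with off-the-shelf tools, as sketched below. The random "ROI chips", component count, and pipeline are placeholders for illustration only; this is not the JPL/MATLAB implementation.

      # Illustrative PCA feature extraction feeding an SVM classifier on ROI chips.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Hypothetical data: 200 ROI chips of 32x32 pixels, flattened, with binary labels.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 32 * 32))
      y = rng.integers(0, 2, size=200)

      model = make_pipeline(
          StandardScaler(),
          PCA(n_components=20),            # project chips onto the top principal components
          SVC(kernel="rbf"),               # final classification stage
      )
      model.fit(X[:150], y[:150])
      # On random data this hovers around chance; real ROI chips are needed for meaningful accuracy.
      print("held-out accuracy:", model.score(X[150:], y[150:]))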

  18. Chemical documents: machine understanding and automated information extraction.

    PubMed

    Townsend, Joe A; Adams, Sam E; Waudby, Christopher A; de Souza, Vanessa K; Goodman, Jonathan M; Murray-Rust, Peter

    2004-11-21

    Automatically extracting chemical information from documents is a challenging task, but an essential one for dealing with the vast quantity of data that is available. The task is least difficult for structured documents, such as chemistry department web pages or the output of computational chemistry programs, but requires increasingly sophisticated approaches for less structured documents, such as chemical papers. The identification of key units of information, such as chemical names, makes the extraction of useful information from unstructured documents possible. PMID:15534707

  19. Dynamic electromembrane extraction: Automated movement of donor and acceptor phases to improve extraction efficiency.

    PubMed

    Asl, Yousef Abdossalami; Yamini, Yadollah; Seidi, Shahram; Amanzadeh, Hatam

    2015-11-01

    In the present research, dynamic electromembrane extraction (DEME) was introduced for the first time for the extraction and determination of ionizable species from different biological matrices. The setup proposed for DEME provides an efficient, stable, and reproducible method to increase extraction efficiency. This setup consists of a piece of hollow fiber mounted inside a glass flow cell by means of two plastic connector tubes. In this dynamic system, an organic solvent is impregnated into the pores of the hollow fiber as a supported liquid membrane (SLM); an aqueous acceptor solution is repeatedly pumped into the lumen of the hollow fiber by a syringe pump, whereas a peristaltic pump is used to move the sample solution around the mounted hollow fiber in the flow cell. Two platinum electrodes connected to a power supply, located in the lumen of the hollow fiber and in the glass flow cell, respectively, are used during extractions. The method was applied to the extraction of amitriptyline (AMI) and nortriptyline (NOR) as model analytes from biological fluids. Parameters affecting the DEME of the model analytes were investigated and optimized. Under optimized conditions, the calibration curves were linear in the range of 2.0-100μgL(-1) with coefficients of determination (r(2)) greater than 0.9902 for both analytes. The relative standard deviations (RSD %) were less than 8.4% based on four replicate measurements. LODs of less than 1.0μgL(-1) were obtained for both AMI and NOR. Preconcentration factors higher than 83-fold were obtained for the extraction of AMI and NOR in various biological samples. PMID:26455283

  20. Visual Routines for Extracting Magnitude Relations

    ERIC Educational Resources Information Center

    Michal, Audrey L.; Uttal, David; Shah, Priti; Franconeri, Steven L.

    2016-01-01

    Linking relations described in text with relations in visualizations is often difficult. We used eye tracking to measure the optimal way to extract such relations from graphs, in college students and young children (6- and 8-year-olds). Participants compared relational statements ("Are there more blueberries than oranges?") with simple…

  1. Prescription Extraction from Clinical Notes: Towards Automating EMR Medication Reconciliation

    PubMed Central

    Wang, Yajuan; Steinhubl, Steven R.; Defilippi, Chrisopher; Ng, Kenney; Ebadollahi, Shahram; Stewart, Walter F.; Byrd, Roy J

    2015-01-01

    Medication information is one of the most important clinical data types in electronic medical records (EMR). This study developed an NLP application (PredMED) to extract full prescriptions and their relevant components from a large corpus of unstructured ambulatory office visit clinical notes and the corresponding structured medication reconciliation (MED_REC) data in the EMR. PredMED achieved an 84.4% F-score on office visit encounter notes and 95.0% on MED_REC data, outperforming two available medication extraction systems. To assess the potential for using automatically extracted prescriptions in the medication reconciliation task, we manually analyzed discrepancies between prescriptions found in clinical encounter notes and in matching MED_REC data for sample patient encounters. PMID:26306266

  2. Discovering Indicators of Successful Collaboration Using Tense: Automated Extraction of Patterns in Discourse

    ERIC Educational Resources Information Center

    Thompson, Kate; Kennedy-Clark, Shannon; Wheeler, Penny; Kelly, Nick

    2014-01-01

    This paper describes a technique for locating indicators of success within the data collected from complex learning environments, proposing an application of e-research to access learner processes and measure and track group progress. The technique combines automated extraction of tense and modality via parts-of-speech tagging with a visualisation…

  3. Automated concept-level information extraction to reduce the need for custom software and rules development

    PubMed Central

    Nguyen, Thien M; Goryachev, Sergey; Fiore, Louis D

    2011-01-01

    Objective Despite at least 40 years of promising empirical performance, very few clinical natural language processing (NLP) or information extraction systems currently contribute to medical science or care. The authors address this gap by reducing the need for custom software and rules development with a graphical user interface-driven, highly generalizable approach to concept-level retrieval. Materials and methods A ‘learn by example’ approach combines features derived from open-source NLP pipelines with open-source machine learning classifiers to automatically and iteratively evaluate top-performing configurations. The Fourth i2b2/VA Shared Task Challenge's concept extraction task provided the data sets and metrics used to evaluate performance. Results Top F-measure scores for each of the tasks were medical problems (0.83), treatments (0.82), and tests (0.83). Recall lagged precision in all experiments. Precision was near or above 0.90 in all tasks. Discussion With no customization for the tasks and less than 5 min of end-user time to configure and launch each experiment, the average F-measure was 0.83, one point behind the mean F-measure of the 22 entrants in the competition. Strong precision scores indicate the potential of applying the approach for more specific clinical information extraction tasks. There was not one best configuration, supporting an iterative approach to model creation. Conclusion Acceptable levels of performance can be achieved using fully automated and generalizable approaches to concept-level information extraction. The described implementation and related documentation is available for download. PMID:21697292
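
    The "learn by example" configuration search can be illustrated as follows. The snippets, labels, and the two candidate pipelines are made-up stand-ins for the open-source NLP features and classifiers evaluated in the paper; the point is only the automatic, iterative comparison of configurations by F-measure.

      # Sketch: try several off-the-shelf configurations and keep the best by F-measure.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import f1_score
      from sklearn.model_selection import train_test_split
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      # Hypothetical labelled snippets: 1 = mentions a medical problem, 0 = does not.
      texts = ["patient denies chest pain", "chest pain on exertion",
               "no acute distress", "worsening shortness of breath"] * 25
      labels = [1, 1, 0, 1] * 25

      X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.3, random_state=0)
      configs = {
          "naive_bayes": make_pipeline(CountVectorizer(), MultinomialNB()),
          "logreg": make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                                  LogisticRegression(max_iter=1000)),
      }
      scores = {}
      for name, pipe in configs.items():
          pipe.fit(X_tr, y_tr)
          scores[name] = f1_score(y_te, pipe.predict(X_te))
      print("best configuration:", max(scores, key=scores.get), scores)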

  4. Automated extraction of pleural effusion in three-dimensional thoracic CT images

    NASA Astrophysics Data System (ADS)

    Kido, Shoji; Tsunomori, Akinori

    2009-02-01

    For the diagnosis of pulmonary diseases, it is important to quantitatively measure the volume of accumulating pleural effusion in three-dimensional thoracic CT images. However, correct automated extraction of pleural effusion is difficult. Conventional extraction algorithms using a gray-level based threshold cannot correctly separate pleural effusion from the thoracic wall or mediastinum, because the density of pleural effusion in CT images is similar to that of the thoracic wall and mediastinum. We have therefore developed an automated method that extracts pleural effusion by first extracting the lung area containing the effusion. Our method uses a lung template obtained from a normal lung to segment lungs with pleural effusions. The registration process consists of two steps. The first step is a global matching between normal and abnormal lungs based on organs such as the bronchi, bones (ribs, sternum and vertebrae) and the upper surface of the liver, which are extracted using a region-growing algorithm. The second step is a local matching between the normal lung and the abnormal lung deformed by the parameters obtained from the global matching. Finally, we segment the lung with pleural effusion using the template deformed by the parameters obtained from the global and local matching steps. We compared our method with a conventional extraction method using a gray-level based threshold and with two published methods. The extraction rates of pleural effusion obtained with our method were much higher than those obtained with the other methods. This automated extraction method is promising for the diagnosis of pulmonary diseases because it provides a quantitative volume of the accumulating pleural effusion.

  5. Data Mining: The Art of Automated Knowledge Extraction

    NASA Astrophysics Data System (ADS)

    Karimabadi, H.; Sipes, T.

    2012-12-01

    Data mining algorithms are used routinely in a wide variety of fields and are gaining adoption in the sciences. The realities of real-world data analysis are that (a) data have flaws, and (b) the models and assumptions that we bring to the data are inevitably flawed, biased, and/or misspecified in some way. Data mining can improve data analysis by detecting anomalies in the data, checking the consistency of the user's model assumptions, and deciphering complex patterns and relationships that could not be found otherwise. The common form of data collected from in situ spacecraft measurements is the multi-variate time series, which represents one of the most challenging problems in data mining. We have successfully developed algorithms to deal with such data and have extended the algorithms to handle streaming data. In this talk, we illustrate the utility of our algorithms through several examples, including automated detection of reconnection exhausts in the solar wind and flux ropes in the magnetotail. We also show examples from successful applications of our technique to the analysis of 3D kinetic simulations. With an eye to the future, we provide an overview of our upcoming plans, which include collaborative data mining, expert outsourcing of data mining, and computer vision for image analysis, among others. Finally, we discuss the integration of data mining algorithms with web-based services such as VxOs and other Heliophysics data centers and the resulting capabilities this would enable.

  6. Multispectral Image Road Extraction Based Upon Automated Map Conflation

    NASA Astrophysics Data System (ADS)

    Chen, Bin

    Road network extraction from remotely sensed imagery enables many important and diverse applications such as vehicle tracking, drone navigation, and intelligent transportation studies. There are, however, a number of challenges to road detection from an image. Road pavement material, width, direction, and topology vary across a scene. Complete or partial occlusions caused by nearby buildings, trees, and the shadows cast by them, make maintaining road connectivity difficult. The problems posed by occlusions are exacerbated with the increasing use of oblique imagery from aerial and satellite platforms. Further, common objects such as rooftops and parking lots are made of materials similar or identical to road pavements. This problem of common materials is a classic case of a single land cover material existing for different land use scenarios. This work addresses these problems in road extraction from geo-referenced imagery by leveraging the OpenStreetMap digital road map to guide image-based road extraction. The crowd-sourced cartography has the advantages of worldwide coverage that is constantly updated. The derived road vectors follow only roads and so can serve to guide image-based road extraction with minimal confusion from occlusions and changes in road material. On the other hand, the vector road map has no information on road widths and misalignments between the vector map and the geo-referenced image are small but nonsystematic. Properly correcting misalignment between two geospatial datasets, also known as map conflation, is an essential step. A generic framework requiring minimal human intervention is described for multispectral image road extraction and automatic road map conflation. The approach relies on the road feature generation of a binary mask and a corresponding curvilinear image. A method for generating the binary road mask from the image by applying a spectral measure is presented. The spectral measure, called anisotropy-tunable distance (ATD

  7. Plasmid purification by phenol extraction from guanidinium thiocyanate solution: development of an automated protocol.

    PubMed

    Fisher, J A; Favreau, M B

    1991-05-01

    We have developed a novel plasmid isolation procedure and have adapted it for use on an automated nucleic acid extraction instrument. The protocol is based on the finding that phenol extraction of a 1 M guanidinium thiocyanate solution at pH 4.5 efficiently removes genomic DNA from the aqueous phase, while supercoiled plasmid DNA is retained in the aqueous phase. S1 nuclease digestion of the removed genomic DNA shows that it has been denatured, which presumably confers solubility in the organic phase. The complete automated protocol for plasmid isolation involves pretreatment of bacterial cells successively with lysozyme, RNase A, and proteinase K. Following these digestions, the solution is extracted twice with a phenol/chloroform/water mixture and once with chloroform. Purified plasmid is then collected by isopropanol precipitation. The purified plasmid is essentially free of genomic DNA, RNA, and protein and is a suitable substrate for DNA sequencing and other applications requiring highly pure supercoiled plasmid. PMID:1713749

  8. Fully Automated Electro Membrane Extraction Autosampler for LC-MS Systems Allowing Soft Extractions for High-Throughput Applications.

    PubMed

    Fuchs, David; Pedersen-Bjergaard, Stig; Jensen, Henrik; Rand, Kasper D; Honoré Hansen, Steen; Petersen, Nickolaj Jacob

    2016-07-01

    The current work describes the implementation of electro membrane extraction (EME) into an autosampler for high-throughput analysis of samples by EME-LC-MS. The extraction probe was built into a luer lock adapter connected to a HTC PAL autosampler syringe. As the autosampler drew sample solution, analytes were extracted into the lumen of the extraction probe and transferred to a LC-MS system for further analysis. Various parameters affecting extraction efficacy were investigated including syringe fill strokes, syringe pull up volume, pull up delay and volume in the sample vial. The system was optimized for soft extraction of analytes and high sample throughput. Further, it was demonstrated that by flushing the EME-syringe with acidic wash buffer and reverting the applied electric potential, carry-over between samples can be reduced to below 1%. Performance of the system was characterized (RSD, <10%; R(2), 0.994) and finally, the EME-autosampler was used to analyze in vitro conversion of methadone into its main metabolite by rat liver microsomes and for demonstrating the potential of known CYP3A4 inhibitors to prevent metabolism of methadone. By making use of the high extraction speed of EME, a complete analytical workflow of purification, separation, and analysis of sample could be achieved within only 5.5 min. With the developed system large sequences of samples could be analyzed in a completely automated manner. This high degree of automation makes the developed EME-autosampler a powerful tool for a wide range of applications where high-throughput extractions are required before sample analysis. PMID:27237618

  9. Automated DNA extraction platforms offer solutions to challenges of assessing microbial biofouling in oil production facilities.

    PubMed

    Oldham, Athenia L; Drilling, Heather S; Stamps, Blake W; Stevenson, Bradley S; Duncan, Kathleen E

    2012-01-01

    The analysis of microbial assemblages in industrial, marine, and medical systems can inform decisions regarding quality control or mitigation. Modern molecular approaches to detect, characterize, and quantify microorganisms provide rapid and thorough measures unbiased by the need for cultivation. The requirement of timely extraction of high quality nucleic acids for molecular analysis is faced with specific challenges when used to study the influence of microorganisms on oil production. Production facilities are often ill equipped for nucleic acid extraction techniques, making the preservation and transportation of samples off-site a priority. As a potential solution, the possibility of extracting nucleic acids on-site using automated platforms was tested. The performance of two such platforms, the Fujifilm QuickGene-Mini80™ and Promega Maxwell®16 was compared to a widely used manual extraction kit, MOBIO PowerBiofilm™ DNA Isolation Kit, in terms of ease of operation, DNA quality, and microbial community composition. Three pipeline biofilm samples were chosen for these comparisons; two contained crude oil and corrosion products and the third transported seawater. Overall, the two more automated extraction platforms produced higher DNA yields than the manual approach. DNA quality was evaluated for amplification by quantitative PCR (qPCR) and end-point PCR to generate 454 pyrosequencing libraries for 16S rRNA microbial community analysis. Microbial community structure, as assessed by DGGE analysis and pyrosequencing, was comparable among the three extraction methods. Therefore, the use of automated extraction platforms should enhance the feasibility of rapidly evaluating microbial biofouling at remote locations or those with limited resources. PMID:23168231

  10. Automated DNA extraction platforms offer solutions to challenges of assessing microbial biofouling in oil production facilities

    PubMed Central

    2012-01-01

    The analysis of microbial assemblages in industrial, marine, and medical systems can inform decisions regarding quality control or mitigation. Modern molecular approaches to detect, characterize, and quantify microorganisms provide rapid and thorough measures unbiased by the need for cultivation. The requirement of timely extraction of high quality nucleic acids for molecular analysis is faced with specific challenges when used to study the influence of microorganisms on oil production. Production facilities are often ill equipped for nucleic acid extraction techniques, making the preservation and transportation of samples off-site a priority. As a potential solution, the possibility of extracting nucleic acids on-site using automated platforms was tested. The performance of two such platforms, the Fujifilm QuickGene-Mini80™ and Promega Maxwell®16 was compared to a widely used manual extraction kit, MOBIO PowerBiofilm™ DNA Isolation Kit, in terms of ease of operation, DNA quality, and microbial community composition. Three pipeline biofilm samples were chosen for these comparisons; two contained crude oil and corrosion products and the third transported seawater. Overall, the two more automated extraction platforms produced higher DNA yields than the manual approach. DNA quality was evaluated for amplification by quantitative PCR (qPCR) and end-point PCR to generate 454 pyrosequencing libraries for 16S rRNA microbial community analysis. Microbial community structure, as assessed by DGGE analysis and pyrosequencing, was comparable among the three extraction methods. Therefore, the use of automated extraction platforms should enhance the feasibility of rapidly evaluating microbial biofouling at remote locations or those with limited resources. PMID:23168231

  11. Automated segmentation and feature extraction of product inspection items

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-03-01

    X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.
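
    Separating touching items is commonly done with a distance-transform watershed; the generic sketch below illustrates that standard step and is not the authors' specific filter or watershed variant. The minimum peak distance is an arbitrary illustrative value.

      # Generic distance-transform watershed for splitting touching objects in a binary mask.
      import numpy as np
      from scipy import ndimage as ndi
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      def split_touching(binary_mask):
          """Label touching foreground objects by watershed on the distance transform."""
          distance = ndi.distance_transform_edt(binary_mask)
          # One marker per local maximum of the distance map (approximate object centres).
          peaks = peak_local_max(distance, min_distance=10, labels=ndi.label(binary_mask)[0])
          markers = np.zeros(binary_mask.shape, dtype=int)
          markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
          return watershed(-distance, markers, mask=binary_mask)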

  12. Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM

    PubMed Central

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D. Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but remains little studied. We studied the extraction of prostatic lumina and the automated recognition of calculus images. Lumina were extracted from prostate histology images based on local entropy and Otsu thresholding, and calculi were recognized using PCA-SVM based on the texture features of the prostatic calculus. The SVM classifier showed an average processing time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can easily recognize the concentric structure and its visual features. This method is therefore effective for the automated recognition of prostatic calculi. PMID:21461364

  13. Automated identification of adverse events related to central venous catheters.

    PubMed

    Penz, Janet F E; Wilcox, Adam B; Hurdle, John F

    2007-04-01

    Methods for surveillance of adverse events (AEs) in clinical settings are limited by cost, technology, and appropriate data availability. In this study, two methods for semi-automated review of text records within the Veterans Administration database were utilized to identify AEs related to the placement of central venous catheters (CVCs): a Natural Language Processing program and a phrase-matching algorithm. A sample of manually reviewed records was then compared to the results of both methods to assess sensitivity and specificity. The phrase-matching algorithm was found to be a sensitive but relatively non-specific method, whereas the natural language processing system was significantly more specific but less sensitive. Positive predictive values for each method estimated the CVC-associated AE rate at this institution to be 6.4% and 6.2%, respectively. Using both methods together results in acceptable sensitivity and specificity (72.0% and 80.1%, respectively). All methods, including manual chart review, are limited by incomplete or inaccurate clinician documentation. A secondary finding was related to the completeness of administrative data (ICD-9 and CPT codes) used to identify intensive care unit patients in whom a CVC was placed. Administrative data identified less than 11% of patients who had a CVC placed. This suggests that other methods, including automated methods such as phrase matching, may be more sensitive than administrative data in identifying patients with devices. Considerable potential exists for the use of such methods for the identification of patients at risk, AE surveillance, and prevention of AEs through decision support technologies. PMID:16901760
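
    A phrase-matching screen of the kind described can be sketched in a few lines; the trigger phrases and function name are invented examples, not the study's lexicon.

      # Minimal phrase-matching pass over clinical note text (illustrative only).
      import re

      TRIGGER_PHRASES = [
          "pneumothorax", "line infection", "catheter malposition",
          "arterial puncture", "hematoma at insertion site",
      ]

      def flag_possible_ae(note):
          """Return the trigger phrases found in a clinical note (case-insensitive)."""
          hits = []
          for phrase in TRIGGER_PHRASES:
              if re.search(r"\b" + re.escape(phrase) + r"\b", note, flags=re.IGNORECASE):
                  hits.append(phrase)
          return hits

      if __name__ == "__main__":
          note = "Post-procedure CXR shows a small apical pneumothorax; no hematoma at insertion site."
          print(flag_possible_ae(note))

    Note that the negated mention ("no hematoma at insertion site") is still flagged, which illustrates why simple phrase matching is sensitive but relatively non-specific compared with full NLP.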

  14. Automation of Extraction Chromatograhic and Ion Exchange Separations for Radiochemical Analysis and Monitoring

    SciTech Connect

    Grate, Jay W.; O'Hara, Matthew J.; Egorov, Oleg

    2009-08-19

    Radiochemical analysis, complete with the separation of radionuclides of interest from the sample matrix and from other interfering radionuclides, is often an essential step in the determination of the radiochemical composition of a nuclear sample or process stream. Although some radionuclides can be determined nondestructively by gamma spectroscopy, where the gamma rays penetrate significant distances in condensed media and the gamma ray energies are diagnostic for specific radionuclides, other radionuclides that may be of interest emit only alpha or beta particles. For these, samples must be taken for destructive analysis and radiochemical separations are required. For process monitoring purposes, the radiochemical separation and detection methods must be rapid so that the results will be timely. These results could be obtained by laboratory analysis or by radiochemical process analyzers operating on-line or at-site. In either case, there is a need for automated radiochemical analysis methods to provide speed, throughput, safety, and consistent analytical protocols. Classical methods of separation used during the development of nuclear technologies, namely manual precipitations, solvent extractions, and ion exchange, are slow and labor intensive. Fortunately, the convergence of digital instrumentation for preprogrammed fluid manipulation and the development of new separation materials for column-based isolation of radionuclides has enabled the development of automated radiochemical analysis methodology. The primary means for separating radionuclides in solution are liquid-liquid extraction and ion exchange. These processes are well known and have been reviewed in the past.1 Ion exchange is readily employed in column formats. Liquid-liquid extraction can also be implemented on column formats using solvent-impregnated resins as extraction chromatographic materials. The organic liquid extractant is immobilized in the pores of a microporous polymer material. Under

  15. Evaluation of an automated hydrolysis and extraction method for quantification of total fat and lipid classes in cereal products.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The utility of an automated acid hydrolysis-extraction (AHE) system was evaluated for extraction of fat for the quantification of total, saturated, polyunsaturated, monounsaturated, and trans fat in cereal products. Oil extracted by the AHE system was assessed for total fat gravimetrically and by c...

  16. Automated extraction of acetylgestagens from kidney fat by matrix solid phase dispersion.

    PubMed

    Rosén, J; Hellenäs, K E; Törnqvist, P; Shearan, P

    1994-12-01

    A new extraction method for the acetylgestagens medroxyprogesterone acetate (MPA), chloromadinone acetate and megestrol acetate, from kidney fat, has been developed. The method is a combination of matrix solid phase dispersion and solid phase extraction and is simpler and safer than previous methods, especially as it can be automated. The recovery was estimated as 59 +/- 5% (mean +/- standard deviation) for MPA. For screening purposes detection can be achieved using a commercially available enzyme immunoassay kit giving detection limits in the range of 1.0-2.0 ng g-1. PMID:7533481

  17. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images.

    PubMed

    Kim, Kwang-Min; Son, Kilho; Palmore, G Tayhas R

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337
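
    The Laplacian-of-Gaussian enhancement stage can be illustrated as below; the graphical-model tracing (Hidden Markov Model, fully connected chain) that NIA builds on top of it is not shown, and the function is a generic sketch rather than the published code.

      # Laplacian-of-Gaussian enhancement of thin bright structures (e.g., neurites).
      import numpy as np
      from scipy.ndimage import gaussian_laplace

      def enhance_neurites(image, sigma=2.0):
          """Respond strongly to thin bright structures of width ~sigma on a dark background."""
          log = -gaussian_laplace(image.astype(float), sigma=sigma)  # negate: bright ridges -> positive
          return np.clip(log, 0.0, None)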

  18. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images

    PubMed Central

    Kim, Kwang-Min; Son, Kilho; Palmore, G. Tayhas R.

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337

  19. Knowledge-based automated road network extraction system using multispectral images

    NASA Astrophysics Data System (ADS)

    Sun, Weihua; Messinger, David W.

    2013-04-01

    A novel approach for automated road network extraction from multispectral WorldView-2 imagery using a knowledge-based system is presented. This approach uses a multispectral flood-fill technique to extract asphalt pixels from satellite images; it then identifies prominent curvilinear structures using template matching. The extracted curvilinear structures provide an initial estimate of the road network, which is refined by the knowledge-based system. This system breaks the curvilinear structures into small segments and then groups them using a set of well-defined rules; a saliency check is then performed to prune the road segments. As a final step, these segments, carrying road width and orientation information, can be reconstructed to generate a proper road map. The approach is shown to perform well with various urban and suburban scenes. It can also be deployed to extract the road network in large-scale scenes.
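
    A multispectral flood fill can be approximated by flooding over a spectral-distance image computed from a seed pixel, as sketched below. The tolerance value and the helper function are illustrative assumptions, not the authors' rule set.

      # Hedged sketch of a multispectral flood fill: grow a region of spectrally similar pixels.
      import numpy as np
      from skimage.segmentation import flood

      def spectral_flood(image, seed, tol=0.05):
          """image: (rows, cols, bands) reflectance cube; seed: (row, col) of a known asphalt pixel.
          Returns a boolean mask of connected pixels spectrally similar to the seed."""
          seed_spectrum = image[seed]
          # Per-pixel Euclidean distance to the seed spectrum (zero at the seed itself).
          dist = np.linalg.norm(image - seed_spectrum, axis=-1)
          # Flood from the seed over the distance image; connected pixels within `tol` are kept.
          return flood(dist, seed, tolerance=tol)

      # Example call (hypothetical cube): mask = spectral_flood(cube, seed=(120, 340), tol=0.05)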

  20. Automated Extraction of Absorption Bands from Reflectance Spectra

    NASA Technical Reports Server (NTRS)

    Huguenin, R. L.; Vale, L.; Mcintire, D.; Jones, J.

    1985-01-01

    A multiple high order derivative spectroscopy technique has been developed for deriving wavelength positions, half widths, and heights of absorption bands in reflectance spectra. The technique is applicable to laboratory spectra as well as medium resolution (100-200/cm) telescope or spacecraft spectra with moderate (few percent) noise. The technique permits absorption band positions to be detected with an accuracy of better than 3%, and often better than 1%. The high complexity of radiative transfer processes in diffusely reflected spectra can complicate the determination of absorption band positions. Continuum reflections, random illumination geometries within the material, phase angle effects, composite overlapping bands, and calibration uncertainties can shift apparent band positions by 20% from their actual positions or mask them beyond detection. Using multiple high order derivative analysis, effects of scattering continua, phase angle, and calibration (smooth features) are suppressed. Inflection points that characterize the positions and half widths of constituent bands are enhanced by the process and directly detected with relatively high sensitivity.
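
    As a purely numerical illustration of the idea, the sketch below builds a synthetic spectrum with a sloping continuum and locates the band centre from first- and second-derivative behaviour. It is not the multiple high-order derivative algorithm itself, and all values are made up.

      # Locate an absorption-band centre from derivatives of a synthetic reflectance spectrum.
      import numpy as np

      wavelength = np.linspace(400.0, 2500.0, 2101)                 # nm, 1 nm sampling
      continuum = 0.6 + 1.0e-4 * (wavelength - 400.0)               # sloping continuum
      band = 0.2 * np.exp(-0.5 * ((wavelength - 950.0) / 40.0) ** 2)
      reflectance = continuum - band                                # synthetic spectrum

      d1 = np.gradient(reflectance, wavelength)                     # first derivative
      d2 = np.gradient(d1, wavelength)                              # second derivative
      # Band centre: reflectance minimum, i.e. d1 crosses zero from below with d2 > 0.
      idx = np.where((d1[:-1] < 0) & (d1[1:] >= 0) & (d2[:-1] > 0))[0]
      print("estimated band centre (nm):", wavelength[idx])
      # The small offset from the true 950 nm centre shows how a sloping continuum
      # shifts the apparent band position, as the abstract above discusses.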

  1. Munitions related feature extraction from LIDAR data.

    SciTech Connect

    Roberts, Barry L.

    2010-06-01

    The characterization of former military munitions ranges is critical in the identification of areas likely to contain residual unexploded ordnance (UXO). Although these ranges are large, often covering tens-of-thousands of acres, the actual target areas represent only a small fraction of the sites. The challenge is that many of these sites do not have records indicating locations of former target areas. The identification of target areas is critical in the characterization and remediation of these sites. The Strategic Environmental Research and Development Program (SERDP) and Environmental Security Technology Certification Program (ESTCP) of the DoD have been developing and implementing techniques for the efficient characterization of large munitions ranges. As part of this process, high-resolution LIDAR terrain data sets have been collected over several former ranges. These data sets have been shown to contain information relating to former munitions usage at these ranges, specifically terrain cratering due to high-explosives detonations. The location and relative intensity of crater features can provide information critical in reconstructing the usage history of a range, and indicate areas most likely to contain UXO. We have developed an automated procedure using an adaptation of the Circular Hough Transform for the identification of crater features in LIDAR terrain data. The Circular Hough Transform is highly adept at finding circular features (craters) in noisy terrain data sets. This technique has the ability to find features of a specific radius providing a means of filtering features based on expected scale and providing additional spatial characterization of the identified feature. This method of automated crater identification has been applied to several former munitions ranges with positive results.
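
    The Circular Hough Transform step can be illustrated with scikit-image on a synthetic terrain patch, as below. The crater size, depth, edge-detector settings, and radius range are arbitrary choices for the example and do not reflect the actual LIDAR processing chain.

      # Circular Hough Transform on a synthetic crater-like depression.
      import numpy as np
      from skimage.draw import disk
      from skimage.feature import canny
      from skimage.transform import hough_circle, hough_circle_peaks

      # Synthetic terrain with one crater-like depression of radius 20 pixels.
      terrain = np.zeros((200, 200))
      rr, cc = disk((100, 120), 20)
      terrain[rr, cc] = -5.0

      edges = canny(terrain, sigma=2.0)              # the crater rim becomes an edge ring
      radii = np.arange(10, 40, 2)                   # range of expected feature scales
      accumulator = hough_circle(edges, radii)
      _, cx, cy, found_r = hough_circle_peaks(accumulator, radii, total_num_peaks=1)
      print("crater centre (row, col):", (cy[0], cx[0]), "radius:", found_r[0])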

  2. Automated diagnosis of Age-related Macular Degeneration using greyscale features from digital fundus images.

    PubMed

    Mookiah, Muthu Rama Krishnan; Acharya, U Rajendra; Koh, Joel E W; Chandran, Vinod; Chua, Chua Kuang; Tan, Jen Hong; Lim, Choo Min; Ng, E Y K; Noronha, Kevin; Tong, Louis; Laude, Augustinus

    2014-10-01

    Age-related Macular Degeneration (AMD) is one of the major causes of vision loss and blindness in the ageing population. Currently, there is no cure for AMD; however, early detection and subsequent treatment may prevent severe vision loss or slow the progression of the disease. AMD can be classified into two types: dry and wet AMD. Most people with macular degeneration are affected by dry AMD. Early symptoms of AMD are the formation of drusen and yellow pigmentation. These lesions are identified by manual inspection of fundus images by ophthalmologists, which is a time-consuming, tiresome process, and hence an automated AMD screening tool could significantly aid clinicians in their diagnosis. This study proposes an automated dry AMD detection system using various entropies (Shannon, Kapur, Renyi and Yager), Higher Order Spectra (HOS) bispectra features, Fractional Dimension (FD), and Gabor wavelet features extracted from greyscale fundus images. The features are ranked using t-test, Kullback-Leibler Divergence (KLD), Chernoff Bound and Bhattacharyya Distance (CBBD), Receiver Operating Characteristics (ROC) curve-based and Wilcoxon ranking methods in order to select optimum features, and are classified into normal and AMD classes using Naive Bayes (NB), k-Nearest Neighbour (k-NN), Probabilistic Neural Network (PNN), Decision Tree (DT) and Support Vector Machine (SVM) classifiers. The performance of the proposed system is evaluated using private (Kasturba Medical Hospital, Manipal, India), Automated Retinal Image Analysis (ARIA) and STructured Analysis of the Retina (STARE) datasets. The proposed system yielded the highest average classification accuracies of 90.19%, 95.07% and 95% with 42, 54 and 38 optimally ranked features using the SVM classifier for the private, ARIA and STARE datasets, respectively. This automated AMD detection system can be used for mass fundus image screening and aid clinicians by making better use of their expertise on selected images that
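
    One of the feature families above, Shannon entropy of the greyscale image, together with t-test ranking, can be sketched as follows. The image arrays are synthetic placeholders rather than fundus data, and the helper name is hypothetical.

      # Shannon entropy feature plus t-test comparison between two synthetic image classes.
      import numpy as np
      from scipy.stats import ttest_ind

      def shannon_entropy(image, bins=256):
          """Shannon entropy (bits) of the grey-level histogram of an image scaled to [0, 1]."""
          hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
          p = hist / hist.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      # Synthetic stand-ins for greyscale fundus images of two classes.
      rng = np.random.default_rng(1)
      normal_imgs = rng.beta(2.0, 2.0, size=(40, 64, 64))   # broader grey-level histograms
      amd_imgs = rng.beta(5.0, 5.0, size=(40, 64, 64))      # narrower grey-level histograms

      normal_feat = np.array([shannon_entropy(im) for im in normal_imgs])
      amd_feat = np.array([shannon_entropy(im) for im in amd_imgs])
      t_stat, p_value = ttest_ind(normal_feat, amd_feat)
      print(f"entropy feature: t = {t_stat:.2f}, p = {p_value:.3g}")  # features ranked by p-value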

  3. Analyzing Automated Instructional Systems: Metaphors from Related Design Professions.

    ERIC Educational Resources Information Center

    Jonassen, David H.; Wilson, Brent G.

    Noting that automation has had an impact on virtually every manufacturing and information operation in the world, including instructional design (ID), this paper suggests three basic metaphors for automating instructional design activities: (1) computer-aided design and manufacturing (CAD/CAM) systems; (2) expert system advisor systems; and (3)…

  4. ANALYSIS OF SELECTED FACTORS RELATIVE TO AUTOMATED SCHOOL SCHEDULING PROCESSES.

    ERIC Educational Resources Information Center

    CHAFFEE, LEONARD M.; HELLER, ROBERT W.

    PROJECT PASS (PROJECT IN AUTOMATED SCHOOL SCHEDULING) WAS SPONSORED IN 1965 BY THE WESTERN NEW YORK SCHOOL STUDY COUNCIL TO PROVIDE IN-SERVICE EDUCATION FOR SCHOOL PERSONNEL CONTEMPLATING THE USE OF AUTOMATED APPROACHES TO SCHOOL SCHEDULING. TWO TECHNIQUES WERE UTILIZED--CLASS LOADING AND STUDENT SELECTION (CLASS), AND GENERAL ACADEMIC SIMULATION…

  5. BRONCO: Biomedical entity Relation ONcology COrpus for extracting gene-variant-disease-drug relations

    PubMed Central

    Lee, Kyubum; Lee, Sunwon; Park, Sungjoon; Kim, Sunkyu; Kim, Suhkyung; Choi, Kwanghun; Tan, Aik Choon; Kang, Jaewoo

    2016-01-01

    Comprehensive knowledge of genomic variants in a biological context is key for precision medicine. As next-generation sequencing technologies improve, the amount of literature containing genomic variant data, such as new functions or related phenotypes, rapidly increases. Because numerous articles are published every day, it is almost impossible to manually curate all the variant information from the literature. Many researchers focus on creating an improved automated biomedical natural language processing (BioNLP) method that extracts useful variants and their functional information from the literature. However, there is no gold-standard data set that contains texts annotated with variants and their related functions. To overcome these limitations, we introduce a Biomedical entity Relation ONcology COrpus (BRONCO) that contains more than 400 variants and their relations with genes, diseases, drugs and cell lines in the context of cancer and anti-tumor drug screening research. The variants and their relations were manually extracted from 108 full-text articles. BRONCO can be utilized to evaluate and train new methods used for extracting biomedical entity relations from full-text publications, and thus be a valuable resource to the biomedical text mining research community. Using BRONCO, we quantitatively and qualitatively evaluated the performance of three state-of-the-art BioNLP methods. We also identified their shortcomings, and suggested remedies for each method. We implemented post-processing modules for the three BioNLP methods, which improved their performance. Database URL: http://infos.korea.ac.kr/bronco PMID:27074804

  6. BRONCO: Biomedical entity Relation ONcology COrpus for extracting gene-variant-disease-drug relations.

    PubMed

    Lee, Kyubum; Lee, Sunwon; Park, Sungjoon; Kim, Sunkyu; Kim, Suhkyung; Choi, Kwanghun; Tan, Aik Choon; Kang, Jaewoo

    2016-01-01

    Comprehensive knowledge of genomic variants in a biological context is key for precision medicine. As next-generation sequencing technologies improve, the amount of literature containing genomic variant data, such as new functions or related phenotypes, rapidly increases. Because numerous articles are published every day, it is almost impossible to manually curate all the variant information from the literature. Many researchers focus on creating an improved automated biomedical natural language processing (BioNLP) method that extracts useful variants and their functional information from the literature. However, there is no gold-standard data set that contains texts annotated with variants and their related functions. To overcome these limitations, we introduce a Biomedical entity Relation ONcology COrpus (BRONCO) that contains more than 400 variants and their relations with genes, diseases, drugs and cell lines in the context of cancer and anti-tumor drug screening research. The variants and their relations were manually extracted from 108 full-text articles. BRONCO can be utilized to evaluate and train new methods used for extracting biomedical entity relations from full-text publications, and thus be a valuable resource to the biomedical text mining research community. Using BRONCO, we quantitatively and qualitatively evaluated the performance of three state-of-the-art BioNLP methods. We also identified their shortcomings, and suggested remedies for each method. We implemented post-processing modules for the three BioNLP methods, which improved their performance.Database URL:http://infos.korea.ac.kr/bronco. PMID:27074804

  7. Automated extraction of natural drainage density patterns for the conterminous United States through high performance computing

    USGS Publications Warehouse

    Stanislawski, Larry V.; Falgout, Jeff T.; Buttenfield, Barbara P.

    2015-01-01

    Hydrographic networks form an important data foundation for cartographic base mapping and for hydrologic analysis. Drainage density patterns for these networks can be derived to characterize local landscape, bedrock and climate conditions, and further inform hydrologic and geomorphological analysis by indicating areas where too few headwater channels have been extracted. But natural drainage density patterns are not consistently available in existing hydrographic data for the United States because compilation and capture criteria historically varied, along with climate, during the period of data collection over the various terrain types throughout the country. This paper demonstrates an automated workflow that is being tested in a high-performance computing environment by the U.S. Geological Survey (USGS) to map natural drainage density patterns at the 1:24,000-scale (24K) for the conterminous United States. Hydrographic network drainage patterns may be extracted from elevation data to guide corrections for existing hydrographic network data. The paper describes three stages in this workflow including data pre-processing, natural channel extraction, and generation of drainage density patterns from extracted channels. The workflow is concurrently implemented by executing procedures on multiple subbasin watersheds within the U.S. National Hydrography Dataset (NHD). Pre-processing defines parameters that are needed for the extraction process. Extraction proceeds in standard fashion: filling sinks, developing flow direction and weighted flow accumulation rasters. Drainage channels with assigned Strahler stream order are extracted within a subbasin and simplified. Drainage density patterns are then estimated with 100-meter resolution and subsequently smoothed with a low-pass filter. The extraction process is found to be of better quality in higher slope terrains. Concurrent processing through the high performance computing environment is shown to facilitate and refine
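
    The final stage of the described workflow, estimating drainage density from the extracted channels and smoothing it with a low-pass filter, can be sketched as follows. This is a generic illustration, not the USGS code; the cell size, window size and array names are assumptions.

        # Sketch of the last workflow stage only: estimating drainage density
        # from a binary raster of extracted channels and smoothing it with a
        # low-pass filter. Cell size and window size are illustrative.
        import numpy as np
        from scipy import ndimage

        def drainage_density(channels, cell_size=100.0, window_cells=11):
            # Approximate channel length per cell as cell_size where a channel exists.
            length_per_cell = channels.astype(float) * cell_size
            # Sum channel length in a moving window, divide by the window area.
            window_area = (window_cells * cell_size) ** 2
            total_length = ndimage.uniform_filter(
                length_per_cell, size=window_cells) * window_cells ** 2
            density = total_length / window_area      # length per unit area
            # Low-pass (smoothing) filter, as in the described workflow.
            return ndimage.uniform_filter(density, size=window_cells)

        # density = drainage_density(channel_raster)  # channel_raster: 0/1 array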

  8. Extraction, identification, and functional characterization of a bioactive substance from automated compound-handling plastic tips.

    PubMed

    Watson, John; Greenough, Emily B; Leet, John E; Ford, Michael J; Drexler, Dieter M; Belcastro, James V; Herbst, John J; Chatterjee, Moneesh; Banks, Martyn

    2009-06-01

    Disposable plastic labware is ubiquitous in contemporary pharmaceutical research laboratories. Plastic labware is routinely used for chemical compound storage and during automated liquid-handling processes that support assay development, high-throughput screening, structure-activity determinations, and liability profiling. However, there is little information available in the literature on the contaminants released from plastic labware upon DMSO exposure and their resultant effects on specific biological assays. The authors report here the extraction, by simple DMSO washing, of a biologically active substance from one particular size of disposable plastic tips used in automated compound handling. The active contaminant was identified as erucamide ((Z)-docos-13-enamide), a long-chain mono-unsaturated fatty acid amide commonly used in plastics manufacturing, by gas chromatography/mass spectrometry analysis of the DMSO-extracted material. Tip extracts prepared in DMSO, as well as a commercially obtained sample of erucamide, were active in a functional bioassay of a known G-protein-coupled fatty acid receptor. A sample of a different disposable tip product from the same vendor did not release detectable erucamide following solvent extraction, and DMSO extracts prepared from this product were inactive in the receptor functional assay. These results demonstrate that solvent-extractable contaminants from some plastic labware used in the contemporary pharmaceutical research and development (R&D) environment can be introduced into physical and biological assays during routine compound management liquid-handling processes. These contaminants may further possess biological activity and are therefore a potential source of assay-specific confounding artifacts. PMID:19470712

  9. Automated solid-phase extraction of herbicides from water for gas chromatographic-mass spectrometric analysis

    USGS Publications Warehouse

    Meyer, M.T.; Mills, M.S.; Thurman, E.M.

    1993-01-01

    An automated solid-phase extraction (SPE) method was developed for the pre-concentration of chloroacetanilide and triazine herbicides, and two triazine metabolites from 100-ml water samples. Breakthrough experiments for the C18 SPE cartridge show that the two triazine metabolites are not fully retained and that increasing flow-rate decreases their retention. Standard curve r² values of 0.998-1.000 for each compound were consistently obtained and a quantitation level of 0.05 µg/L was achieved for each compound tested. More than 10,000 surface and ground water samples have been analyzed by this method.

  10. Automated CO2 extraction from air for clumped isotope analysis in the atmo- and biosphere

    NASA Astrophysics Data System (ADS)

    Hofmann, Magdalena; Ziegler, Martin; Pons, Thijs; Lourens, Lucas; Röckmann, Thomas

    2015-04-01

    The conventional stable isotope ratios 13C/12C and 18O/16O in atmospheric CO2 are a powerful tool for unraveling the global carbon cycle. In recent years, it has been suggested that the abundance of the very rare isotopologue 13C18O16O on m/z 47 might be a promising tracer to complement conventional stable isotope analysis of atmospheric CO2 [Affek and Eiler, 2006; Affek et al. 2007; Eiler and Schauble, 2004; Yeung et al., 2009]. Here we present an automated analytical system that is designed for clumped isotope analysis of atmo- and biospheric CO2. The carbon dioxide gas is quantitatively extracted from about 1.5 L of air (ATP). The automated stainless steel extraction and purification line consists of three main components: (i) a drying unit (a magnesium perchlorate unit and a cryogenic water trap), (ii) two CO2 traps cooled with liquid nitrogen [Werner et al., 2001] and (iii) a GC column packed with Porapak Q that can be cooled with liquid nitrogen to -30°C during purification and heated up to 230°C in-between two extraction runs. After CO2 extraction and purification, the CO2 is automatically transferred to the mass spectrometer. Mass spectrometric analysis of the 13C18O16O abundance is carried out in dual inlet mode on a MAT 253 mass spectrometer. Each analysis generally consists of 80 change-over-cycles. Three additional Faraday cups were added to the mass spectrometer for simultaneous analysis of the mass-to-charge ratios 44, 45, 46, 47, 48 and 49. The reproducibility for δ13C, δ18O and Δ47 for repeated CO2 extractions from air is in the range of 0.11‰ (SD), 0.18‰ (SD) and 0.02‰ (SD), respectively. This automated CO2 extraction and purification system will be used to analyse the clumped isotopic signature in atmospheric CO2 (tall tower, Cabauw, Netherlands) and to study the clumped isotopic fractionation during photosynthesis (leaf chamber experiments) and soil respiration. References Affek, H. P., Xu, X. & Eiler, J. M., Geochim. Cosmochim. Acta 71, 5033

  11. Strategies for Medical Data Extraction and Presentation Part 3: Automated Context- and User-Specific Data Extraction.

    PubMed

    Reiner, Bruce

    2015-08-01

    In current medical practice, data extraction is limited by a number of factors including lack of information system integration, manual workflow, excessive workloads, and lack of standardized databases. The combined limitations result in clinically important data often being overlooked, which can adversely affect clinical outcomes through the introduction of medical error, diminished diagnostic confidence, excessive utilization of medical services, and delays in diagnosis and treatment planning. Current technology development is largely inflexible and static in nature, which adversely affects functionality and usage among the diverse and heterogeneous population of end users. In order to address existing limitations in medical data extraction, alternative technology development strategies need to be considered which incorporate the creation of end user profile groups (to account for occupational differences among end users), customization options (accounting for individual end user needs and preferences), and context specificity of data (taking into account both the task being performed and data subject matter). Creation of the proposed context- and user-specific data extraction and presentation templates offers a number of theoretical benefits including automation and improved workflow, completeness in data search, ability to track and verify data sources, creation of computerized decision support and learning tools, and establishment of data-driven best practice guidelines. PMID:25833768

  12. Modelling and representation issues in automated feature extraction from aerial and satellite images

    NASA Astrophysics Data System (ADS)

    Sowmya, Arcot; Trinder, John

    New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at a lower cost and with greater currency. Demands for mapping and GIS data are increasing as well for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills for automating these tasks of information extraction. The paper will review some of the approaches used in knowledge representation and modelling for machine vision, and give examples of their applications in research for image understanding of aerial and satellite imagery.

  13. CHANNEL MORPHOLOGY TOOL (CMT): A GIS-BASED AUTOMATED EXTRACTION MODEL FOR CHANNEL GEOMETRY

    SciTech Connect

    JUDI, DAVID; KALYANAPU, ALFRED; MCPHERSON, TIMOTHY; BERSCHEID, ALAN

    2007-01-17

    This paper describes an automated Channel Morphology Tool (CMT) developed in ArcGIS 9.1 environment. The CMT creates cross-sections along a stream centerline and uses a digital elevation model (DEM) to create station points with elevations along each of the cross-sections. The generated cross-sections may then be exported into a hydraulic model. Along with the rapid cross-section generation the CMT also eliminates any cross-section overlaps that might occur due to the sinuosity of the channels using the Cross-section Overlap Correction Algorithm (COCoA). The CMT was tested by extracting cross-sections from a 5-m DEM for a 50-km channel length in Houston, Texas. The extracted cross-sections were compared directly with surveyed cross-sections in terms of the cross-section area. Results indicated that the CMT-generated cross-sections satisfactorily matched the surveyed data.
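
    A conceptual sketch of the cross-section sampling idea (not the ArcGIS-based CMT itself): build a station line perpendicular to a centerline segment and interpolate DEM elevations at the stations. The pixel coordinates, half-width and station count are placeholder assumptions.

        # Conceptual sketch, not the CMT: build a cross-section perpendicular
        # to a stream centerline segment and sample DEM elevations at stations.
        import numpy as np
        from scipy.ndimage import map_coordinates

        def cross_section(dem, p0, p1, half_width_px=50, n_stations=101):
            # Unit direction of the centerline segment p0 -> p1 (row, col in pixels).
            d = np.asarray(p1, float) - np.asarray(p0, float)
            d /= np.linalg.norm(d)
            normal = np.array([-d[1], d[0]])          # perpendicular direction
            offsets = np.linspace(-half_width_px, half_width_px, n_stations)
            pts = np.asarray(p0, float) + offsets[:, None] * normal
            # Bilinear interpolation of elevations at the station points.
            elev = map_coordinates(dem, [pts[:, 0], pts[:, 1]], order=1)
            return offsets, elev

        # offsets, elevations = cross_section(dem, (120, 340), (125, 352))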

  14. Automated extraction of fine features of kinetochore microtubules and plus-ends from electron tomography volume.

    PubMed

    Jiang, Ming; Ji, Qiang; McEwen, Bruce F

    2006-07-01

    Kinetochore microtubules (KMTs) and the associated plus-ends have been areas of intense investigation in both cell biology and molecular medicine. Though electron tomography opens up new possibilities in understanding their function by imaging their high-resolution structures, the interpretation of the acquired data remains an obstacle because of the complex and cluttered cellular environment. As a result, practical segmentation of the electron tomography data has been dominated by manual operation, which is time consuming and subjective. In this paper, we propose a model-based automated approach to extracting KMTs and the associated plus-ends with a coarse-to-fine scale scheme consisting of volume preprocessing, microtubule segmentation and plus-end tracing. In volume preprocessing, we first apply an anisotropic invariant wavelet transform and a tube-enhancing filter to enhance the microtubules at coarse level for localization. This is followed with a surface-enhancing filter to accentuate the fine microtubule boundary features. The microtubule body is then segmented using a modified active shape model method. Starting from the segmented microtubule body, the plus-ends are extracted with a probabilistic tracing method improved with rectangular window based feature detection and the integration of multiple cues. Experimental results demonstrate that our automated method produces results comparable to manual segmentation but using only a fraction of the manual segmentation time. PMID:16830922

  15. Automated Detection and Extraction of Coronal Dimmings from SDO/AIA Data

    NASA Astrophysics Data System (ADS)

    Davey, Alisdair R.; Attrill, G. D. R.; Wills-Davey, M. J.

    2010-05-01

    The sheer volume of data anticipated from the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) highlights the necessity for the development of automatic detection methods for various types of solar activity. Initially recognised in the 1970s, it is now well established that coronal dimmings are closely associated with coronal mass ejections (CMEs), and are particularly recognised as an indicator of front-side (halo) CMEs, which can be difficult to detect in white-light coronagraph data. An automated coronal dimming region detection and extraction algorithm removes visual observer bias from determination of physical quantities such as spatial location, area and volume. This allows reproducible, quantifiable results to be mined from very large datasets. The information derived may facilitate more reliable early space weather detection, as well as offering the potential for conducting large-sample studies focused on determining the geoeffectiveness of CMEs, coupled with analysis of their associated coronal dimmings. We present examples of dimming events extracted using our algorithm from existing EUV data, demonstrating the potential for the anticipated application to SDO/AIA data. Metadata returned by our algorithm include: location, area, volume, mass and dynamics of coronal dimmings. As well as running on historic datasets, this algorithm is capable of detecting and extracting coronal dimmings in near real-time. The coronal dimming detection and extraction algorithm described in this poster is part of the SDO/Computer Vision Center effort hosted at SAO (Martens et al., 2009). We acknowledge NASA grant NNH07AB97C.

  16. A Novel Validation Algorithm Allows for Automated Cell Tracking and the Extraction of Biologically Meaningful Parameters

    PubMed Central

    Madany Mamlouk, Amir; Schicktanz, Simone; Kruse, Charli

    2011-01-01

    Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automatize this process resulted in ever improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they missed validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters with high

  17. A novel validation algorithm allows for automated cell tracking and the extraction of biologically meaningful parameters.

    PubMed

    Rapoport, Daniel H; Becker, Tim; Madany Mamlouk, Amir; Schicktanz, Simone; Kruse, Charli

    2011-01-01

    Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automatize this process resulted in ever improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they missed validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters with high
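
    The validation idea, accepting only paths that satisfy simple spatiotemporal-contiguity assumptions, can be sketched as below. The displacement threshold and the path data structure are illustrative assumptions, not the paper's actual rule.

        # Sketch of the validation idea only: accept a tracked path when it is
        # temporally contiguous and no frame-to-frame displacement is
        # implausibly large. The threshold is a placeholder.
        import numpy as np

        def is_valid_path(frames, positions, max_step_px=25.0):
            frames = np.asarray(frames)
            positions = np.asarray(positions, float)      # (n, 2) x/y per frame
            contiguous = np.all(np.diff(frames) == 1)      # no missing frames
            steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
            plausible = np.all(steps <= max_step_px)
            return bool(contiguous and plausible)

        # accepted = [p for p in candidate_paths
        #             if is_valid_path(p["frames"], p["xy"])]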

  18. Automated Large Scale Parameter Extraction of Road-Side Trees Sampled by a Laser Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Lindenbergh, R. C.; Berthold, D.; Sirmacek, B.; Herrero-Huerta, M.; Wang, J.; Ebersbach, D.

    2015-08-01

    In urbanized Western Europe trees are considered an important component of the built-up environment. This also means that there is an increasing demand for tree inventories. Laser mobile mapping systems provide an efficient and accurate way to sample the 3D road surroundings, including notable roadside trees. Indeed, at, say, 50 km/h such systems collect point clouds consisting of half a million points per 100 m. Methods exist that extract tree parameters from relatively small patches of such data, but a remaining challenge is to operationally extract roadside tree parameters at the regional level. For this purpose a workflow is presented as follows: the input point clouds are consecutively downsampled, retiled, classified, segmented into individual trees and upsampled to enable automated extraction of tree location, tree height, canopy diameter and trunk diameter at breast height (DBH). The workflow is implemented to work on a laser mobile mapping data set sampling 100 km of road in Sachsen, Germany, and is tested on a 7 km stretch of road. Along this road, the method detected 315 trees that were considered well detected and 56 clusters of tree points in which no individual trees could be identified. Using voxels, the data volume could be reduced by about 97% in a default scenario. Processing the results of this scenario took ~2500 seconds, corresponding to about 10 km/h, which is approaching but still below the acquisition rate, estimated at 50 km/h.
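
    The voxel downsampling step mentioned in this workflow (roughly 97% data reduction) can be sketched as follows; the voxel size and array layout are assumptions, not the authors' settings.

        # Sketch of voxel downsampling: keep one representative point per
        # occupied voxel. Voxel size and array names are illustrative.
        import numpy as np

        def voxel_downsample(points, voxel_size=0.2):
            # points: (N, 3) array of x, y, z coordinates in metres.
            keys = np.floor(points / voxel_size).astype(np.int64)
            # One index per unique voxel; that point represents the voxel.
            _, idx = np.unique(keys, axis=0, return_index=True)
            return points[np.sort(idx)]

        # reduced = voxel_downsample(cloud)  # large reduction for dense MLS data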

  19. Semi-automated extraction of landslides in Taiwan based on SPOT imagery and DEMs

    NASA Astrophysics Data System (ADS)

    Eisank, Clemens; Hölbling, Daniel; Friedl, Barbara; Chen, Yi-Chin; Chang, Kang-Tsung

    2014-05-01

    The vast availability and improved quality of optical satellite data and digital elevation models (DEMs), as well as the need for complete and up-to-date landslide inventories at various spatial scales, have fostered the development of semi-automated landslide recognition systems. Among the tested approaches for designing such systems, object-based image analysis (OBIA) has emerged as a highly promising methodology. OBIA offers a flexible, spatially enabled framework for effective landslide mapping. Most object-based landslide mapping systems, however, have been tailored to specific, mainly small-scale study areas or even to single landslides only. Even though reported mapping accuracies tend to be higher than for pixel-based approaches, accuracy values are still relatively low and depend on the particular study. There is still room to improve the applicability and objectivity of object-based landslide mapping systems. The presented study aims at developing a knowledge-based landslide mapping system implemented in an OBIA environment, i.e. Trimble eCognition. In comparison to previous knowledge-based approaches, the classification of segmentation-derived multi-scale image objects relies on digital landslide signatures. These signatures hold the common operational knowledge on digital landslide mapping, as reported by 25 Taiwanese landslide experts during personal semi-structured interviews. Specifically, the signatures include information on commonly used data layers, spectral and spatial features, and feature thresholds. The signatures guide the selection and implementation of mapping rules that were finally encoded in Cognition Network Language (CNL). Multi-scale image segmentation is optimized by using the improved Estimation of Scale Parameter (ESP) tool. The approach described above is developed and tested for mapping landslides in a sub-region of the Baichi catchment in Northern Taiwan based on SPOT imagery and a high-resolution DEM. An object

  20. Automated extraction and classification of time-frequency contours in humpback vocalizations.

    PubMed

    Ou, Hui; Au, Whitlow W L; Zurk, Lisa M; Lammers, Marc O

    2013-01-01

    A time-frequency contour extraction and classification algorithm was created to analyze humpback whale vocalizations. The algorithm automatically extracted contours of whale vocalization units by searching for gray-level discontinuities in the spectrogram images. The unit-to-unit similarity was quantified by cross-correlating the contour lines. A library of distinctive humpback units was then generated by applying an unsupervised, cluster-based learning algorithm. The purpose of this study was to provide a fast and automated feature selection tool to describe the vocal signatures of animal groups. This approach could benefit a variety of applications such as species description, identification, and evolution of song structures. The algorithm was tested on humpback whale song data recorded at various locations in Hawaii from 2002 to 2003. Results presented in this paper showed a low probability of false alarm (0%-4%) in noisy environments with small boats and snapping shrimp. The classification algorithm was tested on a controlled set of 30 units forming six unit types, and all the units were correctly classified. In a case study on humpback data collected in the Auau Channel, Hawaii, in 2002, the algorithm extracted 951 units, which were classified into 12 distinctive types. PMID:23297903
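
    A rough sketch of the two core steps, ridge-contour extraction from a unit's spectrogram and unit-to-unit similarity by cross-correlation of contours. The spectrogram parameters and the simple argmax ridge are placeholder assumptions, not the published algorithm.

        # Illustrative sketch: extract a time-frequency ridge contour from a
        # unit's spectrogram and score unit-to-unit similarity by normalized
        # cross-correlation of the contours.
        import numpy as np
        from scipy.signal import spectrogram

        def extract_contour(x, fs, nperseg=512):
            f, t, S = spectrogram(x, fs=fs, nperseg=nperseg)
            # Simple ridge: frequency of maximum energy in each time slice.
            return f[np.argmax(S, axis=0)]

        def contour_similarity(c1, c2):
            a = (c1 - c1.mean()) / (c1.std() + 1e-12)
            b = (c2 - c2.mean()) / (c2.std() + 1e-12)
            n = min(len(a), len(b))
            corr = np.correlate(a[:n], b[:n], mode="full") / n
            return corr.max()          # peak normalized cross-correlation

        # sim = contour_similarity(extract_contour(u1, fs), extract_contour(u2, fs))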

  1. Extraction of words from the national ID cards for automated recognition

    NASA Astrophysics Data System (ADS)

    Akhter, Md. Rezwan; Bhuiyan, Md. Hasanuzzaman; Uddin, Mohammad Shorif

    2011-10-01

    The government of Bangladesh introduced national ID cards in 2008 for all people aged 18 years and above. This card is now a de-facto identity document and finds diverse applications in vote casting, bank account opening, and telephone subscription, as well as in many real-life transactions and security checks. To realize the full benefit of this versatile ID card, automated retrieval and recognition of an individual from this extremely large national database is essential. This work is a first step toward filling this gap by automating the recognition process. Here we have investigated an image analysis technique to extract the words that will be used in subsequent recognition steps. First, the scanned ID card image is input to the computer system, and the target text region is separated from the picture region. The text region is then used to separate lines and words on the basis of the vertical and horizontal projections of image intensity, respectively. Experimentation using real national ID cards confirms the effectiveness of our technique.
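
    The projection-based segmentation described here can be sketched as follows, assuming a binarized text region; thresholds and any gap-merging logic are left out and would need tuning for real cards.

        # Sketch of projection-based segmentation: split a binarized text
        # region into lines (row-wise projection) and each line into words
        # (column-wise projection). Thresholds are illustrative assumptions.
        import numpy as np

        def segments(profile, threshold=0):
            # Contiguous index ranges where the projection exceeds the threshold.
            mask = profile > threshold
            padded = np.r_[False, mask, False]
            edges = np.flatnonzero(np.diff(padded.astype(int)))
            return list(zip(edges[::2], edges[1::2]))   # (start, end) pairs

        def extract_words(binary):
            # binary: 2D array, text pixels = 1, background = 0.
            lines = segments(binary.sum(axis=1))        # row-wise projection
            words = []
            for top, bottom in lines:
                line = binary[top:bottom]
                for left, right in segments(line.sum(axis=0)):  # column-wise
                    words.append((top, bottom, left, right))
            return words

        # boxes = extract_words(binarized_card_text_region)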

  2. Reference line extraction for automated data-entry system using wavelet transform

    NASA Astrophysics Data System (ADS)

    Chitwong, Sakreya; Phonsri, Seksan; Thitimajshima, Punya

    1999-12-01

    Most document forms use straight lines as reference positions for filled-in information. Automated data-entry systems for such documents must be able to locate these reference lines so that the position of information in the form can be determined. This paper proposes a wavelet-based algorithm for extracting these reference lines in business forms. A stationary wavelet transform is used to decompose a gray-level document image into different frequency-band images. The horizontal detail subband is then selected and passed through post-processing to produce a binary bitmap of reference lines. Experimental results on synthetic and real document images are given to illustrate the usefulness of the algorithm.
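
    A hedged sketch of the wavelet step, assuming the PyWavelets package: a one-level stationary wavelet transform of the form image, keeping the horizontal-detail subband, which responds to horizontal reference lines. The wavelet choice, threshold and row-count rule are assumptions, not the paper's post-processing.

        # Sketch only: one-level stationary wavelet transform, keep the
        # horizontal-detail subband, binarize it, and flag rows with many
        # detail pixels as candidate reference lines.
        import numpy as np
        import pywt

        def reference_line_map(img, wavelet="haar", k=3.0):
            # SWT requires dimensions divisible by 2**level; crop to even size.
            img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
            cA, (cH, cV, cD) = pywt.swt2(img, wavelet, level=1)[0]
            mag = np.abs(cH)
            # Binarize the horizontal-detail subband (simple global threshold).
            binary = mag > (mag.mean() + k * mag.std())
            # Rows with many detail pixels are candidate reference lines.
            row_counts = binary.sum(axis=1)
            line_rows = np.flatnonzero(row_counts > 0.5 * img.shape[1])
            return binary, line_rows

        # bitmap, rows = reference_line_map(scanned_form)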

  3. Automated Extraction of Dose/Volume Statistics for Radiotherapy-Treatment-Plan Evaluation in Clinical-Trial Quality Assurance

    PubMed Central

    Gong, Yutao U. T.; Yu, Jialu; Pang, Dalong; Zhen, Heming; Galvin, James; Xiao, Ying

    2016-01-01

    Radiotherapy clinical-trial quality assurance is a crucial yet challenging process. This note presents a tool that automatically extracts dose/volume statistics for determining dosimetry compliance review with improved efficiency and accuracy. A major objective of this study is to develop an automated solution for clinical-trial radiotherapy dosimetry review. PMID:26973814
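
    For illustration (this is not the tool described in the note), common dose/volume statistics can be computed from a dose grid and a binary structure mask as follows; the metric names and thresholds are examples.

        # Illustrative sketch: compute common dose/volume statistics from a
        # 3D dose grid (Gy) and a binary structure mask.
        import numpy as np

        def dvh_stats(dose, mask, v_gy=20.0, d_pct=95.0):
            d = dose[mask > 0]                   # doses inside the structure
            return {
                "mean_dose_gy": float(d.mean()),
                "max_dose_gy": float(d.max()),
                # V20Gy: percent of the structure receiving at least 20 Gy.
                f"V{v_gy:g}Gy_pct": float(100.0 * np.mean(d >= v_gy)),
                # D95%: minimum dose delivered to the hottest 95% of the volume.
                f"D{d_pct:g}pct_gy": float(np.percentile(d, 100.0 - d_pct)),
            }

        # report = dvh_stats(dose_grid, ptv_mask)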

  4. FBI DRUGFIRE program: the development and deployment of an automated firearms identification system to support serial, gang, and drug-related shooting investigations

    NASA Astrophysics Data System (ADS)

    Sibert, Robert W.

    1994-03-01

    The FBI DRUGFIRE Program entails the continuing phased development and deployment of a scalable automated firearms identification system. The first phase of this system, a networked, database-driven firearms evidence imaging system, has been operational for approximately one year and has demonstrated its effectiveness in facilitating the sharing and linking of firearms evidence collected in serial, gang, and drug-related shooting investigations. However, there is a pressing need for development of enhancements which will more fully automate the system so that it is capable of processing very large volumes of firearms evidence. These enhancements would provide automated image analysis and pattern matching functionalities. Existing "spin-off" technologies need to be integrated into the present DRUGFIRE system to automate the 3-D mensuration, registration, feature extraction, and matching of the microtopographical surface features imprinted on the primers of fired casings during firing.

  5. Support Vector Machine with Ensemble Tree Kernel for Relation Extraction.

    PubMed

    Liu, Xiaoyong; Fu, Hui; Du, Zhiguo

    2016-01-01

    Relation extraction is one of the important research topics in the field of information extraction research. To solve the problem of semantic variation in traditional semisupervised relation extraction algorithms, this paper proposes a novel semisupervised relation extraction algorithm based on ensemble learning (LXRE). The new algorithm mainly integrates two kinds of support vector machine classifiers based on tree kernels and adopts a constrained seed-set extension strategy. The new algorithm can reduce the inaccuracy in relation extraction caused by semantic variation. Numerical experiments on two benchmark data sets (PropBank and AIMed) show that the proposed LXRE algorithm is superior to two other common relation extraction methods on four evaluation indexes (Precision, Recall, F-measure, and Accuracy). This indicates that the new algorithm has good relation extraction ability compared with the others. PMID:27118966

  6. Support Vector Machine with Ensemble Tree Kernel for Relation Extraction

    PubMed Central

    Fu, Hui; Du, Zhiguo

    2016-01-01

    Relation extraction is one of the important research topics in the field of information extraction research. To solve the problem of semantic variation in traditional semisupervised relation extraction algorithms, this paper proposes a novel semisupervised relation extraction algorithm based on ensemble learning (LXRE). The new algorithm mainly integrates two kinds of support vector machine classifiers based on tree kernels and adopts a constrained seed-set extension strategy. The new algorithm can reduce the inaccuracy in relation extraction caused by semantic variation. Numerical experiments on two benchmark data sets (PropBank and AIMed) show that the proposed LXRE algorithm is superior to two other common relation extraction methods on four evaluation indexes (Precision, Recall, F-measure, and Accuracy). This indicates that the new algorithm has good relation extraction ability compared with the others. PMID:27118966
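
    As a sketch of how a tree-kernel SVM plugs into a standard toolkit, the example below uses scikit-learn's support for precomputed Gram matrices. The pairwise similarity here is a toy token-overlap placeholder, not the parse-tree kernel or the LXRE ensemble described in these records.

        # Sketch of an SVM with a precomputed kernel matrix. The similarity is
        # a toy placeholder (shared-token overlap), standing in for a real
        # parse-tree kernel.
        import numpy as np
        from sklearn.svm import SVC

        def toy_kernel(a, b):
            # Placeholder similarity between two sentences (token overlap).
            sa, sb = set(a.split()), set(b.split())
            return len(sa & sb) / max(len(sa | sb), 1)

        def gram(train, other):
            # Rows iterate over `other`, columns over `train`.
            return np.array([[toy_kernel(x, y) for y in train] for x in other])

        # sentences, labels: hypothetical training data for a relation class
        # K_train = gram(sentences, sentences)
        # clf = SVC(kernel="precomputed").fit(K_train, labels)
        # K_test = gram(sentences, test_sentences)  # rows = test, cols = train
        # predictions = clf.predict(K_test)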

  7. Rapid and automated sample preparation for nucleic acid extraction on a microfluidic CD (compact disk)

    NASA Astrophysics Data System (ADS)

    Kim, Jitae; Kido, Horacio; Zoval, Jim V.; Gagné, Dominic; Peytavi, Régis; Picard, François J.; Bastien, Martine; Boissinot, Maurice; Bergeron, Michel G.; Madou, Marc J.

    2006-01-01

    Rapid and automated preparation of PCR (polymerase chain reaction)-ready genomic DNA was demonstrated on a multiplexed CD (compact disk) platform by using hard-to-lyse bacterial spores. Cell disruption is carried out while bead-cell suspensions are pushed back and forth in center-tapered lysing chambers by angular oscillation of the disk (the keystone effect). During this lysis period, the cell suspensions are securely held within the lysing chambers by heat-activated wax valves. Upon application of remote heat to the disk in motion, the wax valves release the lysate solutions into centrifuge chambers, where cell debris is separated by an elevated rotation speed of the disk. Only the debris-free DNA extract is then transferred to collection chambers by a capillary-assisted siphon and heated to inactivate PCR inhibitors. Lysing capacity was evaluated using a real-time PCR assay to monitor the efficiency of Bacillus globigii spore lysis. PCR analysis showed that a 5-minute CD lysis run gave a spore lysis efficiency similar to that obtained with a popular commercial DNA extraction kit (the IDI-lysis kit from GeneOhm Sciences Inc.), which is highly efficient for microbial cell and spore lysis. This work will contribute to the development of an integrated CD-based assay for rapid diagnosis of infectious diseases.

  8. Automated data extraction from in situ protein-stable isotope probing studies.

    PubMed

    Slysz, Gordon W; Steinke, Laurey; Ward, David M; Klatt, Christian G; Clauss, Therese R W; Purvine, Samuel O; Payne, Samuel H; Anderson, Gordon A; Smith, Richard D; Lipton, Mary S

    2014-03-01

    Protein-stable isotope probing (protein-SIP) has strong potential for revealing key metabolizing taxa in complex microbial communities. While most protein-SIP work to date has been performed under controlled laboratory conditions to allow extensive isotope labeling of the target organism(s), a key application will be in situ studies of microbial communities for short periods of time under natural conditions that result in small degrees of partial labeling. One hurdle restricting large-scale in situ protein-SIP studies is the lack of algorithms and software for automated data processing of the massive data sets resulting from such studies. In response, we developed Stable Isotope Probing Protein Extraction Resources software (SIPPER) and applied it for large-scale extraction and visualization of data from short-term (3 h) protein-SIP experiments performed in situ on phototrophic bacterial mats isolated from Yellowstone National Park. Several metrics incorporated into the software allow it to support exhaustive analysis of the complex composite isotopic envelope observed as a result of low amounts of partial label incorporation. SIPPER also enables the detection of labeled molecular species without the need for any prior identification. PMID:24467184

  9. Automated data extraction from in situ protein stable isotope probing studies

    SciTech Connect

    Slysz, Gordon W.; Steinke, Laurey A.; Ward, David M.; Klatt, Christian G.; Clauss, Therese RW; Purvine, Samuel O.; Payne, Samuel H.; Anderson, Gordon A.; Smith, Richard D.; Lipton, Mary S.

    2014-01-27

    Protein stable isotope probing (protein-SIP) has strong potential for revealing key metabolizing taxa in complex microbial communities. While most protein-SIP work to date has been performed under controlled laboratory conditions to allow extensive isotope labeling of the target organism, a key application will be in situ studies of microbial communities under conditions that result in small degrees of partial labeling. One hurdle restricting large scale in situ protein-SIP studies is the lack of algorithms and software for automated data processing of the massive data sets resulting from such studies. In response, we developed Stable Isotope Probing Protein Extraction Resources software (SIPPER) and applied it for large scale extraction and visualization of data from short term (3 h) protein-SIP experiments performed in situ on Yellowstone phototrophic bacterial mats. Several metrics incorporated into the software allow it to support exhaustive analysis of the complex composite isotopic envelope observed as a result of low amounts of partial label incorporation. SIPPER also enables the detection of labeled molecular species without the need for any prior identification.

  10. Streamlining DNA Barcoding Protocols: Automated DNA Extraction and a New cox1 Primer in Arachnid Systematics

    PubMed Central

    Vidergar, Nina; Toplak, Nataša; Kuntner, Matjaž

    2014-01-01

    Background DNA barcoding is a popular tool in taxonomic and phylogenetic studies, but for most animal lineages protocols for obtaining the barcoding sequences—mitochondrial cytochrome C oxidase subunit I (cox1 AKA CO1)—are not standardized. Our aim was to explore an optimal strategy for arachnids, focusing on the most species-rich lineage, spiders, by (1) improving an automated DNA extraction protocol, (2) testing the performance of commonly used primer combinations, and (3) developing a new cox1 primer suitable for more efficient alignment and phylogenetic analyses. Methodology We used exemplars of 15 species from all major spider clades, processed a range of spider tissues of varying size and quality, optimized genomic DNA extraction using the MagMAX Express magnetic particle processor—an automated high throughput DNA extraction system—and tested cox1 amplification protocols emphasizing the standard barcoding region using ten routinely employed primer pairs. Results The best results were obtained with the commonly used Folmer primers (LCO1490/HCO2198) that capture the standard barcode region, and with the C1-J-2183/C1-N-2776 primer pair that amplifies its extension. However, C1-J-2183 is designed too close to HCO2198 for well-interpreted, continuous sequence data, and in practice the resulting sequences from the two primer pairs rarely overlap. We therefore designed a new forward primer C1-J-2123 60 base pairs upstream of the C1-J-2183 binding site. The success rate of this new primer (93%) matched that of C1-J-2183. Conclusions The use of C1-J-2123 allows full, indel-free overlap of sequences obtained with the standard Folmer primers and with the C1-J-2123 primer pair. Our preliminary tests suggest that in addition to spiders, C1-J-2123 will also perform well in other arachnids and several other invertebrates. We provide optimal PCR protocols for these primer sets, and recommend using them for systematic efforts beyond DNA barcoding. PMID:25415202

  11. AUTOMATION.

    ERIC Educational Resources Information Center

    Manpower Research Council, Milwaukee, WI.

    THE MANPOWER RESEARCH COUNCIL, A NONPROFIT SERVICE ORGANIZATION, HAS AS ITS OBJECTIVE THE DEVELOPMENT OF AN INTERCHANGE AMONG THE MANUFACTURING AND SERVICE INDUSTRIES OF THE UNITED STATES OF INFORMATION ON EMPLOYMENT, INDUSTRIAL RELATIONS TRENDS AND ACTIVITIES, AND MANAGEMENT PROBLEMS. A SURVEY OF 200 MEMBER CORPORATIONS, EMPLOYING A TOTAL OF…

  12. Californian demonstration and validation of automated agricultural field extraction from multi-temporal Landsat data

    NASA Astrophysics Data System (ADS)

    Yan, L.; Roy, D. P.

    2013-12-01

    The spatial distribution of agricultural fields is a fundamental description of rural landscapes and the location and extent of fields is important to establish the area of land utilized for agricultural yield prediction, resource allocation, and for economic planning. To date, field objects have not been extracted from satellite data over large areas because of computational constraints and because consistently processed appropriate resolution data have not been available or affordable. We present a fully automated computational methodology to extract agricultural fields from 30m Web Enabled Landsat data (WELD) time series and results for approximately 250,000 square kilometers (eleven 150 x 150 km WELD tiles) encompassing all the major agricultural areas of California. The extracted fields, including rectangular, circular, and irregularly shaped fields, are evaluated by comparison with manually interpreted Landsat field objects. Validation results are presented in terms of standard confusion matrix accuracy measures and also the degree of field object over-segmentation, under-segmentation, fragmentation and shape distortion. The apparent success of the presented field extraction methodology is due to several factors. First, the use of multi-temporal Landsat data, as opposed to single Landsat acquisitions, that enables crop rotations and inter-annual variability in the state of the vegetation to be accommodated for and provides more opportunities for cloud-free, non-missing and atmospherically uncontaminated surface observations. Second, the adoption of an object based approach, namely the variational region-based geometric active contour method that enables robust segmentation with only a small number of parameters and that requires no training data collection. Third, the use of a watershed algorithm to decompose connected segments belonging to multiple fields into coherent isolated field segments and a geometry based algorithm to detect and associate parts of
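
    The watershed decomposition step mentioned in this record can be sketched generically with scikit-image; the distance-transform markers and minimum peak distance are common default choices, not the authors' parameters.

        # Sketch of the watershed step only: split a connected binary segment
        # that covers several adjacent fields into separate labels. Generic
        # distance-transform markers, not the authors' implementation.
        import numpy as np
        from scipy import ndimage
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def split_fields(segment_mask, min_distance_px=15):
            # segment_mask: 0/1 integer array marking one connected segment.
            distance = ndimage.distance_transform_edt(segment_mask)
            peaks = peak_local_max(distance, min_distance=min_distance_px,
                                   labels=segment_mask.astype(int))
            markers = np.zeros_like(segment_mask, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            # Flood the inverted distance surface from the markers.
            return watershed(-distance, markers, mask=segment_mask)

        # field_labels = split_fields(connected_segment)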

  13. PKDE4J: Entity and relation extraction for public knowledge discovery.

    PubMed

    Song, Min; Kim, Won Chul; Lee, Dahee; Heo, Go Eun; Kang, Keun Young

    2015-10-01

    Due to an enormous number of scientific publications that cannot be handled manually, there is a rising interest in text-mining techniques for automated information extraction, especially in the biomedical field. Such techniques provide effective means of information search, knowledge discovery, and hypothesis generation. Most previous studies have primarily focused on the design and performance improvement of either named entity recognition or relation extraction. In this paper, we present PKDE4J, a comprehensive text-mining system that integrates dictionary-based entity extraction and rule-based relation extraction in a highly flexible and extensible framework. Starting with the Stanford CoreNLP, we developed the system to cope with multiple types of entities and relations. The system also has fairly good performance in terms of accuracy as well as the ability to configure text-processing components. We demonstrate its competitive performance by evaluating it on many corpora and found that it surpasses existing systems with average F-measures of 85% for entity extraction and 81% for relation extraction. PMID:26277115
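
    A tiny illustration of the two components named in this abstract, dictionary-based entity extraction and rule-based relation extraction; the dictionaries and trigger rules are toy examples, and this is not PKDE4J's API.

        # Toy illustration: dictionary lookup for entities and a trigger-word
        # rule for relations within one sentence. Dictionaries and triggers
        # are placeholders.
        import re

        GENES = {"BRCA1", "TP53"}
        DISEASES = {"breast cancer", "lung cancer"}
        TRIGGERS = {"associated with", "causes", "increases the risk of"}

        def extract_entities(sentence):
            found = []
            for term, etype in [(g, "Gene") for g in GENES] + \
                               [(d, "Disease") for d in DISEASES]:
                if re.search(r"\b" + re.escape(term) + r"\b", sentence, re.I):
                    found.append((term, etype))
            return found

        def extract_relations(sentence):
            ents = extract_entities(sentence)
            genes = [e for e, t in ents if t == "Gene"]
            diseases = [e for e, t in ents if t == "Disease"]
            return [(g, trig, d) for trig in TRIGGERS
                    if trig in sentence.lower()
                    for g in genes for d in diseases]

        # extract_relations("Mutations in BRCA1 are associated with breast cancer.")
        # -> [('BRCA1', 'associated with', 'breast cancer')]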

  14. Automated Outreach for Cardiovascular-Related Medication Refill Reminders.

    PubMed

    Harrison, Teresa N; Green, Kelley R; Liu, In-Lu Amy; Vansomphone, Southida S; Handler, Joel; Scott, Ronald D; Cheetham, T Craig; Reynolds, Kristi

    2016-07-01

    The objective of this study was to evaluate the effectiveness of an automated telephone system reminding patients with hypertension and/or cardiovascular disease to obtain overdue medication refills. The authors compared the intervention with usual care among patients with an overdue prescription for a statin or lisinopril-hydrochlorothiazide (lisinopril-HCTZ). The primary outcome was refill rate at 2 weeks. Secondary outcomes included time to refill and change in low-density lipoprotein cholesterol and blood pressure. Significantly more patients who received a reminder call refilled their prescription compared with the usual-care group (statin cohort: 30.3% vs 24.9% [P<.0001]; lisinopril-HCTZ cohort: 30.7% vs 24.2% [P<.0001]). The median time to refill was shorter in patients receiving the reminder call (statin cohort: 29 vs 36 days [P<.0001]; lisinopril-HCTZ cohort: 24 vs 31 days [P<.0001]). There were no statistically significant differences in mean low-density lipoprotein cholesterol and blood pressure. These findings suggest the need for interventions that have a longer-term impact. PMID:26542896

  15. Semi-automated procedures for shoreline extraction using single RADARSAT-1 SAR image

    NASA Astrophysics Data System (ADS)

    Al Fugura, A.'kif; Billa, Lawal; Pradhan, Biswajeet

    2011-12-01

    Coastline identification is important for surveying and mapping purposes. The coastline serves as a basic reference and is used on nautical charts for navigation. Its delineation has become even more important in the wake of recent earthquakes and tsunamis, which have completely changed some shorelines and required them to be redrawn. In a tropical country like Malaysia, the presence of cloud cover hinders the application of optical remote sensing data. In this study a semi-automated technique and procedures are presented for shoreline delineation from a RADARSAT-1 image. A RADARSAT-1 scene was processed using an enhanced filtering technique to identify and extract the shoreline of Kuala Terengganu, Malaysia. RADARSAT imagery has many advantages over optical data because of its ability to penetrate cloud cover and its night-sensing capability. First, speckle was removed from the image using a Lee sigma filter, which reduces random noise, enhances the image, and helps discriminate the boundary between land and water. The results showed an accurate and improved extraction and delineation of the entire coastline of Kuala Terengganu. The study demonstrated the reliability of the image averaging filter in reducing random noise over the sea surface, especially near the shoreline. It enhanced land-water boundary differentiation, enabling better delineation of the shoreline. Overall, the developed techniques showed the potential of radar imagery for accurate shoreline mapping and will be useful for monitoring shoreline changes during high and low tides as well as shoreline erosion in a tropical country like Malaysia.
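
    The study applies a Lee sigma filter; the simpler basic Lee filter sketched below conveys the idea of speckle reduction before land-water thresholding. The window size and noise estimate are illustrative assumptions.

        # Sketch of a basic Lee speckle filter (the study used the Lee *sigma*
        # variant; this simpler form conveys the idea).
        import numpy as np
        from scipy.ndimage import uniform_filter

        def lee_filter(img, size=7):
            img = img.astype(float)
            mean = uniform_filter(img, size)
            mean_sq = uniform_filter(img * img, size)
            var = mean_sq - mean * mean               # local variance
            noise_var = np.mean(var)                  # crude global noise estimate
            weight = var / (var + noise_var + 1e-12)
            return mean + weight * (img - mean)

        # despeckled = lee_filter(sar_intensity)
        # land and water can then be separated by thresholding the result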

  16. The BUME method: a novel automated chloroform-free 96-well total lipid extraction method for blood plasma

    PubMed Central

    Löfgren, Lars; Ståhlman, Marcus; Forsberg, Gun-Britt; Saarinen, Sinikka; Nilsson, Ralf; Hansson, Göran I.

    2012-01-01

    Lipid extraction from biological samples is a critical and often tedious preanalytical step in lipid research. Primarily on the basis of automation criteria, we have developed the BUME method, a novel chloroform-free total lipid extraction method for blood plasma compatible with standard 96-well robots. In only 60 min, 96 samples can be automatically extracted, with lipid profiles of commonly analyzed lipid classes almost identical to, and absolute recoveries similar to or better than, those obtained using the chloroform-based reference method. Lipid recoveries were linear from 10–100 µl plasma for all investigated lipids using the developed extraction protocol. The BUME protocol includes an initial one-phase extraction of plasma into 300 µl of butanol:methanol (BUME) mixture (3:1), followed by two-phase extraction into 300 µl of heptane:ethyl acetate (3:1) using 300 µl of 1% acetic acid as buffer. The lipids investigated included the most abundant plasma lipid classes (e.g., cholesterol ester, free cholesterol, triacylglycerol, phosphatidylcholine, and sphingomyelin) as well as less abundant but biologically important lipid classes, including ceramide, diacylglycerol, and lyso-phospholipids. This novel method has been successfully implemented in our laboratory and is now used daily. We conclude that the fully automated, high-throughput BUME method can replace chloroform-based methods, saving both human and environmental resources. PMID:22645248

  17. A Multi-Atlas Based Method for Automated Anatomical Rat Brain MRI Segmentation and Extraction of PET Activity

    PubMed Central

    Lancelot, Sophie; Roche, Roxane; Slimen, Afifa; Bouillot, Caroline; Levigoureux, Elise; Langlois, Jean-Baptiste; Zimmer, Luc; Costes, Nicolas

    2014-01-01

    Introduction Preclinical in vivo imaging requires precise and reproducible delineation of brain structures. Manual segmentation is time consuming and operator dependent. Automated segmentation, as usually performed via single-atlas registration, fails to account for anatomo-physiological variability. We present, evaluate, and make available a multi-atlas approach for automatically segmenting rat brain MRI and extracting PET activities. Methods High-resolution 7T 2DT2 MR images of 12 Sprague-Dawley rat brains were manually segmented into 27-VOI label volumes using detailed protocols. Automated methods were developed with 7/12 atlas datasets, i.e. the MRIs and their associated label volumes. MRIs were registered to a common space, where an MRI template and a maximum probability atlas were created. Three automated methods were tested: 1/registering individual MRIs to the template, and using a single atlas (SA), 2/using the maximum probability atlas (MP), and 3/registering the MRIs from the multi-atlas dataset to an individual MRI, propagating the label volumes and fusing them in individual MRI space (propagation & fusion, PF). Evaluation was performed on the five remaining rats, which additionally underwent [18F]FDG PET. Automated and manual segmentations were compared for morphometric performance (assessed by comparing volume bias and Dice overlap index) and functional performance (evaluated by comparing extracted PET measures). Results Only the SA method showed volume bias. Dice indices were significantly different between methods (PF>MP>SA). PET regional measures were more accurate with multi-atlas methods than with the SA method. Conclusions Multi-atlas methods outperform SA for automated anatomical brain segmentation and PET measure extraction. They perform comparably to manual segmentation for FDG-PET quantification. Multi-atlas methods are suitable for rapid reproducible VOI analyses. PMID:25330005
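
    The fusion part of the propagation & fusion (PF) method can be sketched as a per-voxel majority vote over propagated label volumes; the paper's exact fusion rule may differ, and the array names are assumptions.

        # Sketch of label fusion by per-voxel majority vote over label volumes
        # that have been propagated from the multi-atlas set into target space.
        import numpy as np

        def majority_vote(label_volumes):
            # label_volumes: list of integer arrays, all in the target space.
            stack = np.stack(label_volumes, axis=0)       # (n_atlases, ...)
            n_labels = int(stack.max()) + 1
            votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
            for lab in range(n_labels):
                votes[lab] = (stack == lab).sum(axis=0)
            return votes.argmax(axis=0)                   # fused label volume

        # fused = majority_vote(propagated_labels)
        # mean_uptake = pet[fused == roi_id].mean()  # PET activity in one VOI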

  18. Rapid and Semi-Automated Extraction of Neuronal Cell Bodies and Nuclei from Electron Microscopy Image Stacks

    PubMed Central

    Holcomb, Paul S.; Morehead, Michael; Doretto, Gianfranco; Chen, Peter; Berg, Stuart; Plaza, Stephen; Spirou, George

    2016-01-01

    Connectomics—the study of how neurons wire together in the brain—is at the forefront of modern neuroscience research. However, many connectomics studies are limited by the time and precision needed to correctly segment large volumes of electron microscopy (EM) image data. We present here a semi-automated segmentation pipeline using freely available software that can significantly decrease segmentation time for extracting both nuclei and cell bodies from EM image volumes. PMID:27259933

  19. Automated Agricultural Field Extraction from Multi-temporal Web Enabled Landsat Data

    NASA Astrophysics Data System (ADS)

    Yan, L.; Roy, D. P.

    2012-12-01

    Agriculture has caused significant anthropogenic surface change. In many regions agricultural field sizes may be increasing to maximize yields and reduce costs resulting in decreased landscape spatial complexity and increased homogenization of land uses with potential for significant biogeochemical and ecological effects. To date, studies of the incidence, drivers and impacts of changing field sizes have not been undertaken over large areas because of computational constraints and because consistently processed appropriate resolution data have not been available or affordable. The Landsat series of satellites provides near-global coverage, long term, and appropriate spatial resolution (30m) satellite data to document changing field sizes. The recent free availability of all the Landsat data in the U.S. Landsat archive now provides the opportunity to study field size changes in a global and consistent way. Commercial software can be used to extract fields from Landsat data but are inappropriate for large area application because they require considerable human interaction. This paper presents research to develop and validate an automated computational Geographic Object Based Image Analysis methodology to extract agricultural fields and derive field sizes from Web Enabled Landsat Data (WELD) (http://weld.cr.usgs.gov/). WELD weekly products (30m reflectance and brightness temperature) are classified into Satellite Image Automatic Mapper™ (SIAM™) spectral categories and an edge intensity map and a map of the probability of each pixel being agricultural are derived from five years of 52 weeks of WELD and corresponding SIAM™ data. These data are fused to derive candidate agriculture field segments using a variational region-based geometric active contour model. Geometry-based algorithms are used to decompose connected segments belonging to multiple fields into coherent isolated field objects with a divide and conquer strategy to detect and merge partial circle

  20. Using mobile laser scanning data for automated extraction of road markings

    NASA Astrophysics Data System (ADS)

    Guan, Haiyan; Li, Jonathan; Yu, Yongtao; Wang, Cheng; Chapman, Michael; Yang, Bisheng

    2014-01-01

    A mobile laser scanning (MLS) system allows direct collection of accurate 3D point information in unprecedented detail at highway speeds and at less than traditional survey costs, which serves the fast-growing demands of transportation-related road surveying including road surface geometry and road environment. As one type of road feature in traffic management systems, road markings on paved roadways have important functions in providing guidance and information to drivers and pedestrians. This paper presents a stepwise procedure to recognize road markings from MLS point clouds. To improve computational efficiency, we first propose a curb-based method for road surface extraction. This method first partitions the raw MLS data into a set of profiles according to vehicle trajectory data, and then extracts small height jumps caused by curbs in the profiles via slope and elevation-difference thresholds. Next, points belonging to the extracted road surface are interpolated into a geo-referenced intensity image using an extended inverse-distance-weighted (IDW) approach. Finally, we dynamically segment the geo-referenced intensity image into road-marking candidates with multiple thresholds that correspond to different ranges determined by point-density appropriate normality. A morphological closing operation with a linear structuring element is then used to refine the road-marking candidates by removing noise and improving completeness. This road-marking extraction algorithm is comprehensively discussed in the analysis of parameter sensitivity and overall performance. An experimental study performed on a set of road markings with ground truth shows that the proposed algorithm provides a promising solution to road-marking extraction from MLS data.
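
    To make the curb-detection step concrete, here is a minimal numpy sketch that scans one trajectory-orthogonal profile for small height jumps using slope and elevation-difference thresholds; the threshold values and the profile format are illustrative assumptions, not values from the paper.

```python
import numpy as np

def find_curb_candidates(profile, slope_thresh=0.5, dz_thresh=0.08):
    """profile: (N, 2) array of (lateral_offset_m, elevation_m) points along one
    cross-section of the MLS point cloud, sorted by lateral offset.
    Returns indices where a curb-like height jump starts."""
    d_off = np.diff(profile[:, 0])
    d_z = np.diff(profile[:, 1])
    slope = np.divide(d_z, d_off, out=np.zeros_like(d_z), where=d_off > 1e-6)
    # A curb candidate: steep local slope AND a small but clear elevation jump.
    return np.where((np.abs(slope) > slope_thresh) & (np.abs(d_z) > dz_thresh))[0]

# Toy profile: flat road, a 12 cm curb at offset 3.5 m, then a sidewalk.
off = np.linspace(0.0, 6.0, 61)
z = np.where(off < 3.5, 0.0, 0.12) + 0.005 * np.random.default_rng(2).standard_normal(61)
print(find_curb_candidates(np.column_stack([off, z])))
```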

  1. Arsenic fractionation in agricultural soil using an automated three-step sequential extraction method coupled to hydride generation-atomic fluorescence spectrometry.

    PubMed

    Rosas-Castor, J M; Portugal, L; Ferrer, L; Guzmán-Mar, J L; Hernández-Ramírez, A; Cerdà, V; Hinojosa-Reyes, L

    2015-05-18

    A fully automated modified three-step BCR flow-through sequential extraction method was developed for the fractionation of the arsenic (As) content from agricultural soil based on a multi-syringe flow injection analysis (MSFIA) system coupled to hydride generation-atomic fluorescence spectrometry (HG-AFS). Critical parameters that affect the performance of the automated system were optimized by exploiting a multivariate approach using a Doehlert design. The validation of the flow-based modified-BCR method was carried out by comparison with the conventional BCR method. Thus, the total As content was determined in the following three fractions: fraction 1 (F1), the acid-soluble or exchangeable fraction; fraction 2 (F2), the reducible fraction; and fraction 3 (F3), the oxidizable fraction. The limits of detection (LOD) were 4.0, 3.4, and 23.6 μg L(-1) for F1, F2, and F3, respectively. A wide working concentration range was obtained for the analysis of each fraction, i.e., 0.013-0.800, 0.011-0.900 and 0.079-1.400 mg L(-1) for F1, F2, and F3, respectively. The precision of the automated MSFIA-HG-AFS system, expressed as the relative standard deviation (RSD), was evaluated for a 200 μg L(-1) As standard solution, and RSD values between 5 and 8% were achieved for the three BCR fractions. The new modified three-step BCR flow-based sequential extraction method was satisfactorily applied for arsenic fractionation in real agricultural soil samples from an arsenic-contaminated mining zone to evaluate its extractability. The frequency of analysis of the proposed method was eight times higher than that of the conventional BCR method (6 vs 48 h), and the kinetics of lixiviation were established for each fraction. PMID:25910440

  2. Automated Semantic Indices Related to Cognitive Function and Rate of Cognitive Decline

    ERIC Educational Resources Information Center

    Pakhomov, Serguei V. S.; Hemmy, Laura S.; Lim, Kelvin O.

    2012-01-01

    The objective of our study is to introduce a fully automated, computational linguistic technique to quantify semantic relations between words generated on a standard semantic verbal fluency test and to determine its cognitive and clinical correlates. Cognitive differences between patients with Alzheimer's disease and mild cognitive impairment are…
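
    The abstract does not spell out how word-to-word semantic relatedness is computed; a common choice is cosine similarity between distributional word vectors, sketched below under that assumption. The embedding dictionary and the example fluency responses are hypothetical.

```python
import numpy as np

def mean_consecutive_similarity(words, embeddings):
    """Average cosine similarity between consecutive responses on a verbal fluency list.
    embeddings: dict mapping word -> 1-D numpy vector (hypothetical, e.g. pre-trained)."""
    sims = []
    for w1, w2 in zip(words, words[1:]):
        v1, v2 = embeddings.get(w1), embeddings.get(w2)
        if v1 is None or v2 is None:
            continue  # skip out-of-vocabulary responses
        sims.append(float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))))
    return float(np.mean(sims)) if sims else float("nan")

# Toy usage with made-up 3-D vectors standing in for real word embeddings.
emb = {"dog": np.array([1.0, 0.1, 0.0]),
       "cat": np.array([0.9, 0.2, 0.0]),
       "horse": np.array([0.8, 0.3, 0.1])}
print(mean_consecutive_similarity(["dog", "cat", "horse"], emb))
```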

  3. Detecting and extracting clusters in atom probe data: a simple, automated method using Voronoi cells.

    PubMed

    Felfer, P; Ceguerra, A V; Ringer, S P; Cairney, J M

    2015-03-01

    The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method where we use the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation, to test for spatial/chemical randomness of the solid solution as well as to extract the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration-based methods such as iso-surfaces. PMID:25497494
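
    A minimal sketch of the geometric core of this approach: computing the Voronoi cell volume of each solute atom with scipy, skipping the unbounded cells on the convex hull. The randomness test and the actual cluster extraction logic of the paper are not reproduced, and the point coordinates are synthetic.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_cell_volumes(points):
    """Volume of the Voronoi cell around every point; NaN for unbounded cells."""
    vor = Voronoi(points)
    volumes = np.full(len(points), np.nan)
    for i, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if len(region) == 0 or -1 in region:
            continue  # cell touches infinity (point on the convex hull)
        volumes[i] = ConvexHull(vor.vertices[region]).volume
    return volumes

# Toy usage: 200 random "solute atom" positions; small cells indicate local enrichment.
pts = np.random.default_rng(3).random((200, 3))
vols = voronoi_cell_volumes(pts)
print(np.nanmedian(vols))
```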

  4. A novel automated device for rapid nucleic acid extraction utilizing a zigzag motion of magnetic silica beads.

    PubMed

    Yamaguchi, Akemi; Matsuda, Kazuyuki; Uehara, Masayuki; Honda, Takayuki; Saito, Yasunori

    2016-02-01

    We report a novel automated device for nucleic acid extraction, which consists of a mechanical control system and a disposable cassette. The cassette is composed of a bottle, a capillary tube, and a chamber. After sample injection in the bottle, the sample is lysed, and nucleic acids are adsorbed on the surface of magnetic silica beads. These magnetic beads are transported and vibrated through the washing reagents in the capillary tube under the control of the mechanical control system, and thus, the nucleic acid is purified without centrifugation. The purified nucleic acid is automatically extracted in 3 min for the polymerase chain reaction (PCR). The nucleic acid extraction is dependent on the transport speed and the vibration frequency of the magnetic beads, and optimizing these two parameters provided better PCR efficiency than the conventional manual procedure. There was no difference between the detection limits of our novel device and that of the conventional manual procedure. We have already developed the droplet-PCR machine, which can amplify and detect specific nucleic acids rapidly and automatically. Connecting the droplet-PCR machine to our novel automated extraction device enables PCR analysis within 15 min, and this system can be made available as a point-of-care test in clinics as well as general hospitals. PMID:26772121

  5. Comparative Evaluation of Commercially Available Manual and Automated Nucleic Acid Extraction Methods for Rotavirus RNA Detection in Stool

    PubMed Central

    Esona, Mathew D.; McDonald, Sharla; Kamili, Shifaq; Kerin, Tara; Gautam, Rashi; Bowen, Michael D.

    2015-01-01

    Rotaviruses are a major cause of viral gastroenteritis in children. For accurate and sensitive detection of rotavirus RNA from stool samples by reverse transcription-polymerase chain reaction (RT-PCR), the extraction process must be robust. However, some extraction methods may not remove the strong RT-PCR inhibitors known to be present in stool samples. The objective of this study was to evaluate and compare the performance of six extraction methods commonly used for extraction of rotavirus RNA from stool, which have never been formally evaluated: the MagNA Pure Compact, KingFisher Flex and NucliSENS® easyMAG® instruments, the NucliSENS® miniMAG® semi-automated system, and two manual purification kits, the QIAamp Viral RNA kit and a modified RNaid® kit. Using each method, total nucleic acid or RNA was extracted from eight rotavirus-positive stool samples with enzyme immunoassay optical density (EIA OD) values ranging from 0.176 to 3.098. Extracts prepared using the MagNA Pure Compact instrument yielded the most consistent results by qRT-PCR and conventional RT-PCR. When samples from a dilution series were extracted by the six methods and tested, rotavirus RNA was detected in all extracts by qRT-PCR, but by conventional RT-PCR testing only the MagNA Pure Compact and KingFisher Flex extracts were positive in all cases. RT-PCR inhibitors were detected in extracts produced with the QIAamp Viral RNA Mini kit. The findings of this study should prove useful for selection of extraction methods to be incorporated into future rotavirus detection and genotyping protocols. PMID:24036075

  6. Automated Extraction of Buildings and Roads in a Graph Partitioning Framework

    NASA Astrophysics Data System (ADS)

    Ok, A. O.

    2013-10-01

    This paper presents an original unsupervised framework to identify regions belonging to buildings and roads from monocular very high resolution (VHR) satellite images. The proposed framework consists of three main stages. In the first stage, we extract information only related to building regions using shadow evidence and probabilistic fuzzy landscapes. Firstly, the shadow areas cast by building objects are detected and the directional spatial relationship between buildings and their shadows is modelled with the knowledge of illumination direction. Thereafter, each shadow region is handled separately and initial building regions are identified by iterative graph-cuts designed in a two-label partitioning. The second stage of the framework automatically classifies the image into four classes: building, shadow, vegetation, and others. In this step, the previously labelled building regions as well as the shadow and vegetation areas are involved in a four-label graph optimization performed in the entire image domain to achieve the unsupervised classification result. The final stage aims to extend this classification to five classes in which the class road is involved. For that purpose, we extract the regions that might belong to road segments and utilize that information in a final graph optimization. This final stage eventually characterizes the regions belonging to buildings and roads. Experiments performed on seven test images selected from GeoEye-1 VHR datasets show that the presented approach has the ability to extract the regions belonging to buildings and roads in a single graph theory framework.
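
    As a hedged illustration of the two-label graph-cut partitioning mentioned above (not the paper's actual energy or implementation), the sketch below builds a tiny s-t graph over pixels with unary terminal capacities and pairwise smoothness capacities, then solves it with networkx's minimum cut. All costs, pixel names, and the smoothness weight are made up.

```python
import networkx as nx

def two_label_graph_cut(unary_fg, unary_bg, smoothness=1.0):
    """unary_fg/unary_bg: dicts pixel -> cost of labelling that pixel foreground/background.
    Returns the set of pixels labelled foreground (source side of the minimum cut)."""
    g = nx.DiGraph()
    pixels = list(unary_fg)
    for p in pixels:
        # Cutting s->p is paid when p ends up background, so it carries the background cost;
        # cutting p->t is paid when p ends up foreground, so it carries the foreground cost.
        g.add_edge("s", p, capacity=unary_bg[p])
        g.add_edge(p, "t", capacity=unary_fg[p])
    # 4-neighbour smoothness on a grid of (row, col) pixels.
    for (r, c) in pixels:
        for q in [(r + 1, c), (r, c + 1)]:
            if q in unary_fg:
                g.add_edge((r, c), q, capacity=smoothness)
                g.add_edge(q, (r, c), capacity=smoothness)
    _, (source_side, _) = nx.minimum_cut(g, "s", "t")
    return {p for p in source_side if p != "s"}

# Toy 3x3 image: centre pixel strongly "building", the rest strongly "background".
fg = {(r, c): (0.1 if (r, c) == (1, 1) else 2.0) for r in range(3) for c in range(3)}
bg = {p: 2.0 - fg[p] for p in fg}
print(two_label_graph_cut(fg, bg, smoothness=0.3))
```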

  7. Mixed-mode isolation of triazine metabolites from soil and aquifer sediments using automated solid-phase extraction

    USGS Publications Warehouse

    Mills, M.S.; Thurman, E.M.

    1992-01-01

    Reversed-phase isolation and ion-exchange purification were combined in the automated solid-phase extraction of two polar s-triazine metabolites, 2-amino-4-chloro-6-(isopropylamino)-s-triazine (deethylatrazine) and 2-amino-4-chloro-6-(ethylamino)-s-triazine (deisopropylatrazine), from clay-loam and silt-loam soils and sandy aquifer sediments. First, methanol/water (4/1, v/v) soil extracts were transferred to an automated workstation following evaporation of the methanol phase for the rapid reversed-phase isolation of the metabolites on an octadecyl resin (C18). The retention of the triazine metabolites on C18 decreased substantially when trace methanol concentrations (1%) remained. Furthermore, the retention on C18 increased with decreasing aqueous solubility and increasing alkyl-chain length of the metabolites and parent herbicides, indicating a reversed-phase interaction. The analytes were eluted with ethyl acetate, which left much of the soil organic-matter impurities on the resin. Second, the small-volume organic eluate was purified on an anion-exchange resin (0.5 mL/min) to extract the remaining soil pigments that could foul the ion source of the GC/MS system. Recoveries of the analytes were 75%, using deuterated atrazine as a surrogate, and were comparable to recoveries by soxhlet extraction. The detection limit was 0.1 μg/kg with a coefficient of variation of 15%. The ease and efficiency of this automated method make it a viable, practical technique for studying triazine metabolites in the environment.

  8. A fully integrated and automated microsystem for rapid pharmacogenetic typing of multiple warfarin-related single-nucleotide polymorphisms.

    PubMed

    Zhuang, Bin; Han, Junping; Xiang, Guangxin; Gan, Wupeng; Wang, Shuaiqin; Wang, Dong; Wang, Lei; Sun, Jing; Li, Cai-Xia; Liu, Peng

    2016-01-01

    A fully integrated and automated microsystem consisting of low-cost, disposable plastic chips for DNA extraction and PCR amplification combined with a reusable glass capillary array electrophoresis chip in a modular-based format was successfully developed for warfarin pharmacogenetic testing. DNA extraction was performed by adopting a filter paper-based method, followed by "in situ" PCR that was carried out directly in the same reaction chamber of the chip without elution. PCR products were then co-injected with sizing standards into separation channels for detection using a novel injection electrode. The entire process was automatically conducted on a custom-made compact control and detection instrument. The limit of detection of the microsystem for the singleplex amplification of amelogenin was determined to be 0.625 ng of standard K562 DNA and 0.3 μL of human whole blood. A two-color multiplex allele-specific PCR assay for detecting the warfarin-related single-nucleotide polymorphisms (SNPs) 6853 (-1639G>A) and 6484 (1173C>T) in the VKORC1 gene and the *3 SNP (1075A>C) in the CYP2C9 gene was developed and used for validation studies. The fully automated genetic analysis was completed in two hours with a minimum requirement of 0.5 μL of input blood. Samples from patients with different genotypes were all accurately analyzed. In addition, both dried bloodstains and oral swabs were successfully processed by the microsystem with a simple modification to the DNA extraction and amplification chip. The successful development and operation of this microsystem establish the feasibility of rapid warfarin pharmacogenetic testing in routine clinical practice. PMID:26568290

  9. Chemical-induced disease relation extraction with various linguistic features

    PubMed Central

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. Manually mining these relations from the biomedical literature is costly and time-consuming, and such a procedure is difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task of the BioCreative-V challenge. We built a machine learning-based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We demoted relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask PMID:27052618
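
    A minimal sketch of the mention-level classification plus document-level merging idea, using scikit-learn's logistic regression as a stand-in maximum entropy model. The features, toy data, and the merge rule (a document-level pair is positive if any of its mention pairs is classified positive) are illustrative assumptions, not the authors' exact configuration.

```python
from collections import defaultdict
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Toy mention-level instances: sparse feature dicts for (chemical, disease) mention pairs.
train_X = [{"w:induced": 1, "dist": 3}, {"w:treats": 1, "dist": 2},
           {"w:induced": 1, "dist": 8}, {"w:with": 1, "dist": 5}]
train_y = [1, 0, 1, 0]                      # 1 = chemical-induced disease relation

vec = DictVectorizer()
clf = LogisticRegression(max_iter=1000)     # logistic regression ~ maximum entropy model
clf.fit(vec.fit_transform(train_X), train_y)

# Test mention pairs, each tagged with (doc_id, chemical_id, disease_id).
test = [(("d1", "C1", "D1"), {"w:induced": 1, "dist": 4}),
        (("d1", "C1", "D1"), {"w:with": 1, "dist": 6}),
        (("d1", "C2", "D1"), {"w:treats": 1, "dist": 2})]
pred = clf.predict(vec.transform([feats for _, feats in test]))

# Merge mention-level decisions to document level: positive if any mention pair is positive.
doc_level = defaultdict(int)
for (key, _), p in zip(test, pred):
    doc_level[key] = max(doc_level[key], int(p))
print(dict(doc_level))
```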

  10. Chemical-induced disease relation extraction with various linguistic features.

    PubMed

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. Manually mining these relations from the biomedical literature is costly and time-consuming, and such a procedure is difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task of the BioCreative-V challenge. We built a machine learning-based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We demoted relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask. PMID:27052618

  11. Automated identification and geometrical features extraction of individual trees from Mobile Laser Scanning data in Budapest

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard

    2016-04-01

    Mobile Laser Scanning (MLS) is an evolving operational measurement technique for the urban environment, providing large amounts of high-resolution information about trees, street features, and pole-like objects on street sides or near motorways. In this study we investigate a robust segmentation method to extract individual trees automatically in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contained a large diversity of tree species. The MLS data consisted of high-density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from the streets. The robust segmentation method comprised the following steps: the ground points are determined first. As a second step, cylinders are fitted in a vertical slice at 1-1.5 m relative height above ground, which is used to determine the potential location of each single tree's trunk and of other cylinder-like objects. Finally, residual values are calculated as the deviation of each point from a vertically expanded fitted cylinder; these residual values are used to separate other cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of the fitted object are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented individual tree, such as crown base, crown width, crown length, trunk diameter, and volume of the individual tree. In case of incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists, city planners, and planting and mapping purposes. Furthermore, the established database will be the initial point for classification of trees into single species. MLS data used in this project had been measured in the framework of
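
    To illustrate the circle/cylinder-fitting step in the trunk-detection slice, here is a small algebraic (Kåsa-style) least-squares circle fit in numpy. The slice extraction and the residual-based filtering of non-tree objects are not shown, and the toy data are made up.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic least-squares circle fit (Kåsa). xy: (N, 2) points from one
    horizontal slice of a candidate trunk. Returns (cx, cy, radius)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, float(np.sqrt(c + cx**2 + cy**2))

# Toy slice: noisy points on a 0.25 m radius trunk centred at (2.0, -1.0).
rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, 150)
pts = np.column_stack([2.0 + 0.25 * np.cos(theta), -1.0 + 0.25 * np.sin(theta)])
pts += 0.01 * rng.standard_normal(pts.shape)
print(fit_circle(pts))
```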

  12. Automated solid-phase extraction coupled online with HPLC-FLD for the quantification of zearalenone in edible oil.

    PubMed

    Drzymala, Sarah S; Weiz, Stefan; Heinze, Julia; Marten, Silvia; Prinz, Carsten; Zimathies, Annett; Garbe, Leif-Alexander; Koch, Matthias

    2015-05-01

    Established maximum levels for the mycotoxin zearalenone (ZEN) in edible oil require monitoring by reliable analytical methods. Therefore, an automated SPE-HPLC online system based on dynamic covalent hydrazine chemistry has been developed. The SPE step comprises a reversible hydrazone formation by ZEN and a hydrazine moiety covalently attached to a solid phase. Seven hydrazine materials with different properties regarding the resin backbone, pore size, particle size, specific surface area, and loading have been evaluated. As a result, a hydrazine-functionalized silica gel was chosen. The final automated online method was validated and applied to the analysis of three maize germ oil samples including a provisionally certified reference material. Important performance criteria for the recovery (70-120 %) and precision (RSDr <25 %) as set by the Commission Regulation EC 401/2006 were fulfilled: The mean recovery was 78 % and RSDr did not exceed 8 %. The results of the SPE-HPLC online method were further compared to results obtained by liquid-liquid extraction with stable isotope dilution analysis LC-MS/MS and found to be in good agreement. The developed SPE-HPLC online system with fluorescence detection allows a reliable, accurate, and sensitive quantification (limit of quantification, 30 μg/kg) of ZEN in edible oils while significantly reducing the workload. To our knowledge, this is the first report on an automated SPE-HPLC method based on a covalent SPE approach. PMID:25709066

  13. Automated extraction of 11-nor-delta9-tetrahydrocannabinol carboxylic acid from urine samples using the ASPEC XL solid-phase extraction system.

    PubMed

    Langen, M C; de Bijl, G A; Egberts, A C

    2000-09-01

    The analysis of 11-nor-delta9-tetrahydrocannabinol-carboxylic acid (THCCOOH, the major metabolite of cannabis) in urine with gas chromatography and mass spectrometry (GC-MS) and solid-phase extraction (SPE) sample preparation is well documented. Automated SPE sample preparation of THCCOOH in urine, although potentially advantageous, is to our knowledge poorly investigated. The objective of the present study was to develop and validate an automated SPE sample-preparation step using the ASPEC XL suited for GC-MS confirmation analysis of THCCOOH in urine drug control. The recoveries showed that it was not possible to transfer the protocol for the manual SPE procedure with the vacuum manifold to the ASPEC XL without loss of recovery. Making the sample more lipophilic by adding 1 mL of 2-propanol to the urine sample after hydrolysis, in order to overcome the problem of surface adsorption of THCCOOH, led to an extraction efficiency (77%) comparable to that reached with the vacuum manifold (84%). The reproducibility of the automated SPE procedure was better (coefficient of variation 5%) than that of the manual procedure (coefficient of variation 12%). The limit of detection was 1 ng/mL, and the limit of quantitation was 4 ng/mL. Precision at the 12.5-ng/mL level was as follows: mean, 12.4 ng/mL; coefficient of variation, 3.0%. Potential carryover was evaluated, but a carryover effect could not be detected. It was concluded that the proposed method is suited for GC-MS confirmation urinalysis of THCCOOH for prisons and detoxification centers. PMID:10999349

  14. INVESTIGATION OF ARSENIC SPECIATION ON DRINKING WATER TREATMENT MEDIA UTILIZING AUTOMATED SEQUENTIAL CONTINUOUS FLOW EXTRACTION WITH IC-ICP-MS DETECTION

    EPA Science Inventory

    Three treatment media, used for the removal of arsenic from drinking water, were sequentially extracted using 10 mM MgCl2 (pH 8), 10 mM NaH2PO4 (pH 7) followed by 10 mM (NH4)2C2O4 (pH 3). The media were extracted using an on-line automated continuous extraction system which allowed...

  15. Enhancing Biomedical Text Summarization Using Semantic Relation Extraction

    PubMed Central

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization. PMID:21887336
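
    A toy sketch of the relation-level retrieval idea: given SemRep-style (subject, predicate, object) triples per sentence, keep the relations that mention the query concept and rank sentences by how many relevant relations they express. SemRep itself is not called here, and all triples and sentence identifiers are invented.

```python
from collections import Counter

# Hypothetical SemRep-style output: sentence id -> list of (subject, predicate, object).
relations = {
    "s1": [("H1N1", "CAUSES", "pneumonia"), ("oseltamivir", "TREATS", "H1N1")],
    "s2": [("aspirin", "TREATS", "headache")],
    "s3": [("H1N1", "ISA", "influenza virus")],
}

def summarize(query_concept, relations, top_k=2):
    # 1) Relation-level retrieval: keep relations whose arguments contain the query concept.
    relevant = {sid: [r for r in rels if query_concept in (r[0], r[2])]
                for sid, rels in relations.items()}
    # 2) Rank sentences by the number of relevant relations they express.
    ranking = Counter({sid: len(rels) for sid, rels in relevant.items() if rels})
    return [sid for sid, _ in ranking.most_common(top_k)]

print(summarize("H1N1", relations))   # -> ['s1', 's3']
```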

  16. Direct Sampling and Analysis from Solid Phase Extraction Cards using an Automated Liquid Extraction Surface Analysis Nanoelectrospray Mass Spectrometry System

    SciTech Connect

    Walworth, Matthew J; ElNaggar, Mariam S; Stankovich, Joseph J; Witkowski II, Charles E.; Norris, Jeremy L; Van Berkel, Gary J

    2011-01-01

    Direct liquid extraction based surface sampling, a technique previously demonstrated with continuous flow and autonomous pipette liquid microjunction surface sampling probes, has recently been implemented as the Liquid Extraction Surface Analysis (LESA) mode on the commercially available Advion NanoMate chip-based infusion nanoelectrospray ionization system. In the present paper, the LESA mode was applied to the analysis of 96-well format custom solid phase extraction (SPE) cards, with each well consisting of either a 1 or 2 mm diameter monolithic hydrophobic stationary phase. These substrate wells were conditioned, loaded with either single or multi-component aqueous mixtures, and read out using the LESA mode of a TriVersa NanoMate or a NanoMate 100 coupled to an ABI/Sciex 4000 QTRAP™ hybrid triple quadrupole/linear ion trap mass spectrometer and a Thermo LTQ XL linear ion trap mass spectrometer. Extraction conditions, including extraction/nanoESI solvent composition, volume, and dwell times, were optimized in the analysis of targeted compounds. Limits of detection and quantitation as well as analysis reproducibility figures of merit were measured. Calibration data were obtained for propranolol using a deuterated internal standard, which demonstrated linearity and reproducibility. A 10x increase in signal and cleanup of micromolar Angiotensin II from a concentrated salt solution were demonstrated. Additionally, a multicomponent herbicide mixture at ppb concentration levels was analyzed using MS3 spectra for compound identification in the presence of isobaric interferences.

  17. High performance liquid chromatography for quantification of gatifloxacin in rat plasma following automated on-line solid phase extraction.

    PubMed

    Tasso, Leandro; Dalla Costa, Teresa

    2007-05-01

    An automated system using on-line solid phase extraction and HPLC with fluorimetric detection was developed and validated for quantification of gatifloxacin in rat plasma. The extraction was carried out using C(18) cartridges (BondElut), with a high extraction yield. After washing, gatifloxacin was eluted from the cartridge with mobile phase onto a C(18) HPLC column. The mobile phase consisted of a mixture of phosphoric acid (2.5 mM), methanol, acetonitrile and triethylamine (64.8:15:20:0.2, v/v/v/v, apparent pH 2.8). All samples and standard solutions were chromatographed at 28 degrees C. The method developed was selective and linear for drug concentrations ranging between 20 and 600 ng/ml. Gatifloxacin recovery ranged from 95.6 to 99.7%, and the limit of quantification was 20 ng/ml. The intra- and inter-assay accuracies were up to 94.3%. The precision, expressed as CV, did not exceed 5.8%. A high extraction yield of up to 95% was obtained. Drug stability in plasma was shown in the freezer at -20 degrees C for up to 1 month, after three freeze-thaw cycles, and for 24 h in the autosampler after processing. The assay has been successfully applied to measure gatifloxacin plasma concentrations in a pharmacokinetic study in rats. PMID:17403594

  18. Automated on-line liquid-liquid extraction system for temporal mass spectrometric analysis of dynamic samples.

    PubMed

    Hsieh, Kai-Ta; Liu, Pei-Han; Urban, Pawel L

    2015-09-24

    Most real samples cannot directly be infused to mass spectrometers because they could contaminate delicate parts of the ion source and ion guides, or cause ion suppression. Conventional sample preparation procedures limit the temporal resolution of analysis. We have developed an automated liquid-liquid extraction system that enables unsupervised repetitive treatment of dynamic samples and instantaneous analysis by mass spectrometry (MS). It incorporates inexpensive open-source microcontroller boards (Arduino and Netduino) to guide the extraction and analysis process. Duration of every extraction cycle is 17 min. The system enables monitoring of dynamic processes over many hours. The extracts are automatically transferred to the ion source incorporating a Venturi pump. Operation of the device has been characterized (repeatability, RSD = 15%, n = 20; concentration range for ibuprofen, 0.053-2.000 mM; LOD for ibuprofen, ∼0.005 mM; including extraction and detection). To exemplify its usefulness in real-world applications, we implemented this device in chemical profiling of a pharmaceutical formulation dissolution process. Temporal dissolution profiles of commercial ibuprofen and acetaminophen tablets were recorded during 10 h. The extraction-MS datasets were fitted with exponential functions to characterize the rates of release of the main and auxiliary ingredients (e.g. ibuprofen, k = 0.43 ± 0.01 h(-1)). The electronic control unit of this system interacts with the operator via touch screen, internet, voice, and short text messages sent to the mobile phone, which is helpful when launching long-term (e.g. overnight) measurements. Due to these interactive features, the platform brings the concept of the Internet-of-Things (IoT) to the chemistry laboratory environment. PMID:26423626
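
    The exponential fitting of dissolution profiles mentioned above can be reproduced in outline with scipy's curve_fit; the first-order release model C(t) = C_max·(1 − exp(−k·t)) and the synthetic data below are assumptions used only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def release_model(t, c_max, k):
    """First-order release: concentration approaching c_max with rate constant k (1/h)."""
    return c_max * (1.0 - np.exp(-k * t))

# Synthetic extraction-MS time series standing in for an ibuprofen dissolution profile.
rng = np.random.default_rng(5)
t = np.arange(0, 10, 17 / 60)                        # one extraction cycle every 17 min
c = release_model(t, 1.8, 0.43) + 0.03 * rng.standard_normal(t.size)

(c_max_fit, k_fit), _ = curve_fit(release_model, t, c, p0=[1.0, 0.1])
print(f"c_max = {c_max_fit:.2f} mM, k = {k_fit:.2f} 1/h")
```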

  19. Background Knowledge in Learning-Based Relation Extraction

    ERIC Educational Resources Information Center

    Do, Quang Xuan

    2012-01-01

    In this thesis, we study the importance of background knowledge in relation extraction systems. We not only demonstrate the benefits of leveraging background knowledge to improve the systems' performance but also propose a principled framework that allows one to effectively incorporate knowledge into statistical machine learning models for…

  20. Comparative Assessment of Automated Nucleic Acid Sample Extraction Equipment for Biothreat Agents

    PubMed Central

    Kalina, Warren Vincent; Douglas, Christina Elizabeth; Coyne, Susan Rajnik

    2014-01-01

    Magnetic beads offer superior impurity removal and nucleic acid selection over older extraction methods. The performances of nucleic acid extraction of biothreat agents in blood or buffer by easyMAG, MagNA Pure, EZ1 Advanced XL, and Nordiag Arrow were evaluated. All instruments showed excellent performance in blood; however, the easyMAG had the best precision and versatility. PMID:24452173

  1. Coreference based event-argument relation extraction on biomedical text

    PubMed Central

    2011-01-01

    This paper presents a new approach to exploit coreference information for extracting event-argument (E-A) relations from biomedical documents. This approach has two advantages: (1) it can extract a large number of valuable E-A relations based on the concept of salience in discourse; (2) it enables us to identify E-A relations over sentence boundaries (cross-links) using transitivity of coreference relations. We propose two coreference-based models: a pipeline based on Support Vector Machine (SVM) classifiers, and a joint Markov Logic Network (MLN). We show the effectiveness of these models on a biomedical event corpus. Both models outperform the systems that do not use coreference information. When the two proposed models are compared to each other, joint MLN outperforms pipeline SVM with gold coreference information. PMID:22166257
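
    A small sketch of the "cross-link via coreference transitivity" idea described above: if an event's argument corefers with another mention, the event-argument relation is propagated to that mention, which yields links across sentence boundaries. The SVM/MLN classifiers are omitted and the example mentions are invented.

```python
def expand_relations(ea_relations, coref_chains):
    """ea_relations: set of (event, argument_mention) pairs found within sentences.
    coref_chains: list of sets of mentions that corefer.
    Returns the relation set closed under coreference transitivity."""
    mention_to_chain = {}
    for chain in coref_chains:
        for mention in chain:
            mention_to_chain[mention] = chain
    expanded = set(ea_relations)
    for event, arg in ea_relations:
        for other in mention_to_chain.get(arg, {arg}):
            expanded.add((event, other))   # cross-sentence links come for free here
    return expanded

# Toy example: one within-sentence relation plus one coreference chain.
relations = {("phosphorylation_e1", "the protein")}
chains = [{"the protein", "p53", "it"}]
print(sorted(expand_relations(relations, chains)))
```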

  2. Toward automated parasitic extraction of silicon photonics using layout physical verifications

    NASA Astrophysics Data System (ADS)

    Ismail, Mohamed; El Shamy, Raghi S.; Madkour, Kareem; Hammouda, Sherif; Swillam, Mohamed A.

    2016-08-01

    A physical verification flow for the layout of silicon photonic circuits is suggested. Simple empirical models are developed to estimate the bend power loss and coupled power in photonic integrated circuits fabricated using standard SOI wafers. These models are utilized in the physical verification flow of the circuit layout to verify reliable fabrication using any electronic design automation tool. The models are accurate when compared with electromagnetic solvers. The models are closed form and circumvent the need to utilize any EM solver for the verification process. Hence, they dramatically reduce the verification time.

  3. Automated Development of Feature Extraction Tools for Planetary Science Image Datasets

    NASA Astrophysics Data System (ADS)

    Plesko, C.; Brumby, S.; Asphaug, E.

    2003-03-01

    We explore development of feature extraction algorithms for Mars Orbiter Camera narrow angle data using GENIE machine learning software. The algorithms are successful at detecting craters within the images, and generalize well to a new image.

  4. Automating identification of adverse events related to abnormal lab results using standard vocabularies.

    PubMed

    Brandt, C A; Lu, C C; Nadkarni, P M

    2005-01-01

    Laboratory data need to be imported automatically into central Clinical Study Data Management Systems (CSDMSs), and abnormal laboratory data need to be linked to clinically related adverse events. This import can be automated by mapping laboratory data to standard vocabularies (HL7/LOINC) and linking them to the metadata within a CSDMS. We have designed a system that uses the UMLS Metathesaurus as a common source to map or link abnormal laboratory values to adverse event CTCAE coded terms and grades in the metadata of TrialDB, a generic CSDMS. PMID:16779190

  5. Unsupervised entity and relation extraction from clinical records in Italian.

    PubMed

    Alicante, Anita; Corazza, Anna; Isgrò, Francesco; Silvestri, Stefano

    2016-05-01

    This paper proposes and discusses the use of text mining techniques for the extraction of information from clinical records written in Italian. However, as it is very difficult and expensive to obtain annotated material for languages other than English, we only consider unsupervised approaches, where no annotated training set is necessary. We therefore propose a complete system that is structured in two steps. In the first one, domain entities are extracted from the clinical records by means of a metathesaurus and standard natural language processing tools. The second step attempts to discover relations between the entity pairs extracted from the whole set of clinical records. For this last step we investigate the performance of unsupervised methods such as clustering in the space of entity pairs, represented by an ad hoc feature vector. The resulting clusters are then automatically labelled by using the most significant features. The system has been tested on a fairly large data set of clinical records in Italian, investigating the variation in performance when adopting different similarity measures in the feature space. The results of our experiments show that the unsupervised approach proposed is promising and well suited for a semi-automatic labelling of the extracted relations. PMID:26851833
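
    A compact sketch of the unsupervised second step: cluster entity-pair feature vectors with k-means and label each cluster by its highest-weight features. The feature design, the metathesaurus lookup, and the toy data are placeholders, not the system's actual features.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical entity-pair feature matrix (rows = entity pairs) and feature names.
feature_names = ["ctx:shows", "ctx:treated_with", "ctx:located_in", "dist<3"]
X = np.array([[1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 1],
              [0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0]], dtype=float)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Label every cluster with its most significant (here: highest-mean) features.
for c in range(km.n_clusters):
    members = X[km.labels_ == c]
    top = np.argsort(members.mean(axis=0))[::-1][:2]
    print(f"cluster {c}: " + ", ".join(feature_names[i] for i in top))
```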

  6. Towards a Relation Extraction Framework for Cyber-Security Concepts

    SciTech Connect

    Jones, Corinne L; Bridges, Robert A; Huffer, Kelly M; Goodall, John R

    2015-01-01

    In order to assist security analysts in obtaining information pertaining to their network, such as novel vulnerabilities, exploits, or patches, information retrieval methods tailored to the security domain are needed. As labeled text data is scarce and expensive, we follow developments in semi-supervised NLP and implement a bootstrapping algorithm for extracting security entities and their relationships from text. The algorithm requires little input data, specifically, a few relations or patterns (heuristics for identifying relations), and incorporates an active learning component which queries the user on the most important decisions to prevent drift away from the desired relations. Preliminary testing on a small corpus shows promising results, obtaining a precision of 0.82.

  7. A fully automated system for analysis of pesticides in water: on-line extraction followed by liquid chromatography-tandem photodiode array/postcolumn derivatization/fluorescence detection.

    PubMed

    Patsias, J; Papadopoulou-Mourkidou, E

    1999-01-01

    A fully automated system for on-line solid phase extraction (SPE) followed by high-performance liquid chromatography (HPLC) with tandem detection by a photodiode array detector and a fluorescence detector (after postcolumn derivatization) was developed for analysis of many chemical classes of pesticides and their major conversion products in aquatic systems. An automated on-line SPE system (Prospekt) operated with reversed-phase cartridges (PRP-1) extracts analytes from a 100 mL acidified (pH 3), filtered water sample. On-line HPLC analysis is performed with a 15 cm C18 analytical column eluted with a phosphate (pH 3)-acetonitrile mobile phase in a 25 min linear gradient mode. Solutes are detected by tandem diode array/derivatization/fluorescence detection. The system is controlled and monitored by a single computer operated with Millenium software. Recoveries of most analytes in samples fortified at 1 microgram/L are > 90%, with relative standard deviation values of < 5%. For a few very polar analytes, mostly N-methylcarbamoyloximes (i.e., aldicarb sulfone, methomyl, and oxamyl), recoveries are < 20%. However, for these compounds, as well as for the rest of the N-methylcarbamates except for aldicarb sulfoxide and butoxycarboxim, the limits of detection (LODs) are 0.005-0.05 microgram/L. LODs for aldicarb sulfoxide and butoxycarboxim are 0.2 and 0.1 microgram/L, respectively. LODs for the rest of the analytes except 4-nitrophenol, bentazone, captan, decamethrin, and MCPA are 0.05-0.1 microgram/L. LODs for the latter compounds are 0.2-1.0 microgram/L. The system can be operated unattended. PMID:10444834

  8. Technical Note: Semi-automated effective width extraction from time-lapse RGB imagery of a remote, braided Greenlandic river

    NASA Astrophysics Data System (ADS)

    Gleason, C. J.; Smith, L. C.; Finnegan, D. C.; LeWinter, A. L.; Pitcher, L. H.; Chu, V. W.

    2015-06-01

    River systems in remote environments are often challenging to monitor and understand where traditional gauging apparatus are difficult to install or where safety concerns prohibit field measurements. In such cases, remote sensing, especially terrestrial time-lapse imaging platforms, offers a means to better understand these fluvial systems. One such environment is found at the proglacial Isortoq River in southwestern Greenland, a river with a constantly shifting floodplain and remote Arctic location that make gauging and in situ measurements all but impossible. In order to derive relevant hydraulic parameters for this river, two true color (RGB) cameras were installed in July 2011, and these cameras collected over 10 000 half-hourly time-lapse images of the river by September of 2012. Existing approaches for extracting hydraulic parameters from RGB imagery require manual or supervised classification of images into water and non-water areas, a task that was impractical for the volume of data in this study. As such, automated image filters were developed that removed images with environmental obstacles (e.g., shadows, sun glint, snow) from the processing stream. Further image filtering was accomplished via a novel automated histogram similarity filtering process. This similarity filtering allowed successful (mean accuracy 79.6 %) supervised classification of filtered images from training data collected from just 10 % of those images. Effective width, a hydraulic parameter highly correlated with discharge in braided rivers, was extracted from these classified images, producing a hydrograph proxy for the Isortoq River between 2011 and 2012. This hydrograph proxy shows agreement with historic flooding observed in other parts of Greenland in July 2012 and offers promise that the imaging platform and processing methodology presented here will be useful for future monitoring studies of remote rivers.
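
    The histogram similarity filter can be approximated as follows: compare each image's grayscale histogram with a reference histogram via a correlation coefficient and drop frames that fall under a threshold. The bin count, threshold, reference choice, and synthetic images are assumptions, not the paper's actual settings.

```python
import numpy as np

def histogram_similarity(img_a, img_b, bins=64):
    """Correlation between normalised grayscale histograms of two images (values in [0, 1])."""
    h_a, _ = np.histogram(img_a, bins=bins, range=(0.0, 1.0), density=True)
    h_b, _ = np.histogram(img_b, bins=bins, range=(0.0, 1.0), density=True)
    return float(np.corrcoef(h_a, h_b)[0, 1])

def keep_image(img, reference, threshold=0.8):
    return histogram_similarity(img, reference) >= threshold

# Toy usage: a "clean" frame vs. a frame washed out by sun glint.
rng = np.random.default_rng(6)
reference = rng.beta(2, 5, (100, 100))
glinted = np.clip(reference + 0.5, 0, 1)
print(keep_image(reference, reference), keep_image(glinted, reference))
```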

  9. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimate of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploration data analysis. The identification and geometrical characterization of discontinuity features is divided in steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and
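
    The first processing step described above (identifying coplanar surfaces via K-Nearest Neighbor and Principal Component Analysis) can be sketched as: for each point, take its k nearest neighbours, eigendecompose the local covariance, and use the smallest-variance eigenvector as the surface normal together with a planarity score. Parameter values and the synthetic point cloud are illustrative only.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_normals(points, k=20):
    """Estimate a unit normal and a planarity score for every 3-D point via PCA of its k-NN."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nbrs.kneighbors(points)
    normals = np.zeros_like(points)
    planarity = np.zeros(len(points))
    for i, neigh in enumerate(idx):
        patch = points[neigh] - points[neigh].mean(axis=0)
        eigval, eigvec = np.linalg.eigh(patch.T @ patch)            # ascending eigenvalues
        normals[i] = eigvec[:, 0]                                   # smallest-variance direction
        planarity[i] = 1.0 - eigval[0] / max(eigval.sum(), 1e-12)   # ~1 for planar patches
    return normals, planarity

# Toy cloud: a noisy, nearly horizontal rock face patch; normals should be close to (0, 0, 1).
rng = np.random.default_rng(7)
pts = np.column_stack([rng.random(500), rng.random(500), 0.01 * rng.standard_normal(500)])
n, p = local_normals(pts)
print(np.abs(n[:, 2]).mean(), p.mean())
```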

  10. Solid-Phase Extraction of Polar Compounds from Water [corrected title; original record title: Automated Electrostatics Environmental Chamber]

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Rutz, Jeffrey; Schultz, John

    2005-01-01

    A solid-phase extraction (SPE) process has been developed for removing alcohols, carboxylic acids, aldehydes, ketones, amines, and other polar organic compounds from water. This process can be either a subprocess of a water-reclamation process or a means of extracting organic compounds from water samples for gas-chromatographic analysis. This SPE process is an attractive alternative to an Environmental Protection Agency liquid-liquid extraction process that generates some pollution and does not work in a microgravitational environment. In this SPE process, one forces a water sample through a resin bed by use of positive pressure on the upstream side and/or suction on the downstream side, thereby causing organic compounds from the water to be adsorbed onto the resin. If gas-chromatographic analysis is to be done, the resin is dried by use of a suitable gas, then the adsorbed compounds are extracted from the resin by use of a solvent. Unlike the liquid-liquid process, the SPE process works in both microgravity and Earth gravity. In comparison with the liquid-liquid process, the SPE process is more efficient, extracts a wider range of organic compounds, generates less pollution, and costs less.

  11. Automated extraction of urban trees from mobile LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Fan, W.; Chenglu, W.; Jonathan, L.

    2016-03-01

    This paper presents an automatic algorithm to localize and extract urban trees from mobile LiDAR point clouds. First, in order to reduce the number of points to be processed, the ground points are filtered out from the raw point clouds, and the non-ground points are segmented into supervoxels. Then, a novel localization method is proposed to locate the urban trees accurately. Next, a segmentation method based on the localization results is proposed to obtain individual objects. Finally, the features of the objects are extracted, and the feature vectors are classified by random forests trained on manually labeled objects. The proposed method has been tested on a point cloud dataset. The results show that our algorithm efficiently extracts the urban trees.

  12. Analysis of betamethasone in rat plasma using automated solid-phase extraction coupled with liquid chromatography-tandem mass spectrometry. Determination of plasma concentrations in rat following oral and intravenous administration.

    PubMed

    Tamvakopoulos, C S; Neugebauer, J M; Donnelly, M; Griffin, P R

    2002-09-01

    A method is described for the determination of betamethasone in rat plasma by liquid chromatography-tandem mass spectrometry (LC-MS-MS). The analyte was recovered from plasma by solid-phase extraction and subsequently analyzed by LC-MS-MS. A Packard Multiprobe II, an automated liquid handling system, was employed for the automated preparation and extraction of a 96-well plate containing unknown plasma samples, standards, and quality control samples. Prednisolone, a structurally related steroid, was used as an internal standard. Using the described approach, a limit of quantitation of 2 ng/ml was achieved with a 50 microl aliquot of rat plasma. The described level of sensitivity allowed the determination of betamethasone concentrations and subsequent measurement of kinetic parameters of betamethasone in rat. Combination of automated plasma extraction and the sensitivity and selectivity of LC-MS-MS offers a valuable alternative to the methodologies currently used for the quantitation of steroids in biological fluids. PMID:12137997

  13. An automated system for retrieving herb-drug interaction related articles from MEDLINE

    PubMed Central

    Lin, Kuo; Friedman, Carol; Finkelstein, Joseph

    2016-01-01

    An automated, user-friendly, and accurate system for retrieving herb-drug interaction (HDI)-related articles in MEDLINE can increase patient safety, as well as improve the speed and experience of physicians' article retrieval. Previous studies show that MeSH-based queries associated with negative effects of drugs can be customized, resulting in good performance in retrieving relevant information, but no study has focused on the area of herb-drug interactions. This paper adapted the characteristics of HDI-related papers and created a multilayer HDI article searching system. It achieved a sensitivity of 92% at a precision of 93% in a preliminary evaluation. Instead of requiring physicians to conduct PubMed searches directly, this system applies a more user-friendly approach by employing a customized system that enhances PubMed queries, shielding users from having to write queries, dealing with PubMed, or reading many irrelevant articles. The system provides automated processes and outputs target articles based on the input. PMID:27570662
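
    The abstract does not list the actual multilayer query, but the general mechanism (submitting an enhanced MeSH-style query to PubMed on the user's behalf) can be sketched with NCBI's public E-utilities; the query string below is an invented placeholder, not the system's real filter.

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(herb, drug, retmax=20):
    """Run a simple PubMed search combining a herb, a drug, and an interaction filter.
    The filter below is a hypothetical stand-in for the paper's multilayer HDI query."""
    term = f'("{herb}"[All Fields]) AND ("{drug}"[All Fields]) AND ("drug interactions"[MeSH Terms])'
    params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax}
    data = requests.get(ESEARCH, params=params, timeout=30).json()
    return data["esearchresult"]["idlist"]

if __name__ == "__main__":
    print(search_pubmed("St John's Wort", "warfarin"))
```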

  14. An automated system for retrieving herb-drug interaction related articles from MEDLINE.

    PubMed

    Lin, Kuo; Friedman, Carol; Finkelstein, Joseph

    2016-01-01

    An automated, user-friendly, and accurate system for retrieving herb-drug interaction (HDI)-related articles in MEDLINE can increase patient safety, as well as improve the speed and experience of physicians' article retrieval. Previous studies show that MeSH-based queries associated with negative effects of drugs can be customized, resulting in good performance in retrieving relevant information, but no study has focused on the area of herb-drug interactions. This paper adapted the characteristics of HDI-related papers and created a multilayer HDI article searching system. It achieved a sensitivity of 92% at a precision of 93% in a preliminary evaluation. Instead of requiring physicians to conduct PubMed searches directly, this system applies a more user-friendly approach by employing a customized system that enhances PubMed queries, shielding users from having to write queries, dealing with PubMed, or reading many irrelevant articles. The system provides automated processes and outputs target articles based on the input. PMID:27570662

  15. An automated algorithm for extracting road edges from terrestrial mobile LiDAR data

    NASA Astrophysics Data System (ADS)

    Kumar, Pankaj; McElhinney, Conor P.; Lewis, Paul; McCarthy, Timothy

    2013-11-01

    Terrestrial mobile laser scanning systems provide rapid and cost effective 3D point cloud data which can be used for extracting features such as the road edge along a route corridor. This information can assist road authorities in carrying out safety risk assessment studies along road networks. The knowledge of the road edge is also a prerequisite for the automatic estimation of most other road features. In this paper, we present an algorithm which has been developed for extracting left and right road edges from terrestrial mobile LiDAR data. The algorithm is based on a novel combination of two modified versions of the parametric active contour or snake model. The parameters involved in the algorithm are selected empirically and are fixed for all the road sections. We have developed a novel way of initialising the snake model based on the navigation information obtained from the mobile mapping vehicle. We tested our algorithm on different types of road sections representing rural, urban and national primary road sections. The successful extraction of road edges from these multiple road section environments validates our algorithm. These findings and knowledge provide valuable insights as well as a prototype road edge extraction tool-set, for both national road authorities and survey companies.

  16. Kernel-Based Learning for Domain-Specific Relation Extraction

    NASA Astrophysics Data System (ADS)

    Basili, Roberto; Giannone, Cristina; Del Vescovo, Chiara; Moschitti, Alessandro; Naggar, Paolo

    In a specific process of business intelligence, i.e. investigations of organized crime, empirical language processing technologies can play a crucial role. The analysis of transcriptions of investigative activities, such as police interrogations, for the recognition and storage of complex relations among people and locations is a very difficult and time-consuming task, ultimately based on pools of experts. We discuss here an inductive relation extraction platform that opens the way to much cheaper and more consistent workflows. The presented empirical investigation shows that accurate results, comparable to those of the expert teams, can be achieved, and that parametrization allows one to fine-tune the system behavior to fit domain-specific requirements.

  17. Automation of ⁹⁹Tc extraction by LOV prior to ICP-MS detection: application to environmental samples.

    PubMed

    Rodríguez, Rogelio; Leal, Luz; Miranda, Silvia; Ferrer, Laura; Avivar, Jessica; García, Ariel; Cerdà, Víctor

    2015-02-01

    A new, fast, automated and inexpensive sample pre-treatment method for (99)Tc determination by inductively coupled plasma-mass spectrometry (ICP-MS) detection is presented. The miniaturized approach is based on a lab-on-valve (LOV) system, allowing automatic separation and preconcentration of (99)Tc. Selectivity is provided by the solid phase extraction system used (TEVA resin), which selectively retains the pertechnetate ion in dilute nitric acid solution. The proposed system has some advantages such as minimization of sample handling, reduction of reagent volume, and improvement of intermediate precision and sample throughput, offering a significant decrease of both time and cost per analysis in comparison to other flow techniques and batch methods. The proposed LOV system has been successfully applied to different samples of environmental interest (water and soil) with satisfactory recoveries, between 94% and 98%. The detection limit (LOD) of the developed method is 0.005 ng. The high durability of the resin and its low amount (32 mg), the good intermediate precision (RSD 3.8%) and repeatability (RSD 2%), and the high extraction frequency (up to 5 h(-1)) make this method an inexpensive, high-precision and fast tool for monitoring (99)Tc in environmental samples. PMID:25435232

  18. Fully automated Liquid Extraction-Based Surface Sampling and Ionization Using a Chip-Based Robotic Nanoelectrospray Platform

    SciTech Connect

    Kertesz, Vilmos; Van Berkel, Gary J

    2010-01-01

    A fully automated liquid extraction-based surface sampling device utilizing an Advion NanoMate chip-based infusion nanoelectrospray ionization system is reported. Analyses were enabled for discrete spot sampling by using the Advanced User Interface of the current commercial control software. This software interface provided the parameter control necessary for the NanoMate robotic pipettor to both form and withdraw a liquid microjunction for sampling from a surface. The system was tested with three types of analytically important sample surface types, viz., spotted sample arrays on a MALDI plate, dried blood spots on paper, and whole-body thin tissue sections from drug dosed mice. The qualitative and quantitative data were consistent with previous studies employing other liquid extraction-based surface sampling techniques. The successful analyses performed here utilized the hardware and software elements already present in the NanoMate system developed to handle and analyze liquid samples. Implementation of an appropriate sample (surface) holder, a solvent reservoir, faster movement of the robotic arm, finer control over solvent flow rate when dispensing and retrieving the solution at the surface, and the ability to select any location on a surface to sample from would improve the analytical performance and utility of the platform.

  19. Sequential automated fusion/extraction chromatography methodology for the dissolution of uranium in environmental samples for mass spectrometric determination.

    PubMed

    Milliard, Alex; Durand-Jézéquel, Myriam; Larivière, Dominic

    2011-01-17

    An improved methodology has been developed, based on dissolution by automated fusion followed by extraction chromatography, for the detection and quantification of uranium in environmental matrices by mass spectrometry. A rapid fusion protocol (<8 min) was investigated for the complete dissolution of various samples. It could be preceded, if required, by an effective ashing procedure using the M4 fluxer and a newly designed platinum lid. Complete dissolution of the sample was observed and measured using standard reference materials (SRMs), and experimental data show no evidence of cross-contamination of crucibles when LiBO₂/LiBr melts were used. The use of an M4 fusion unit also improved repeatability in sample preparation over muffle furnace fusion. Instrumental issues originating from the presence of high salt concentrations in the digestate after lithium metaborate fusion were also mitigated using an extraction chromatography (EXC) protocol aimed at removing lithium and interfering matrix constituents prior to the elution of uranium. The sequential methodology, which can be performed simultaneously on three samples, requires less than 20 min per sample for fusion and separation. It was successfully coupled to inductively coupled plasma mass spectrometry (ICP-MS), achieving detection limits below 100 pg kg⁻¹ for 5-300 mg of sample. PMID:21167982

  20. Quantitative analysis of ex vivo colorectal epithelium using an automated feature extraction algorithm for microendoscopy image data.

    PubMed

    Prieto, Sandra P; Lai, Keith K; Laryea, Jonathan A; Mizell, Jason S; Muldoon, Timothy J

    2016-04-01

    Qualitative screening for colorectal polyps via fiber bundle microendoscopy imaging has shown promising results, with studies reporting high rates of sensitivity and specificity, as well as low interobserver variability with trained clinicians. A quantitative image quality control and image feature extraction algorithm (QFEA) was designed to lessen the burden of training and provide objective data for improved clinical efficacy of this method. After a quantitative image quality control step, QFEA extracts field-of-view area, crypt area, crypt circularity, and crypt number per image. To develop and validate this QFEA, a training set of microendoscopy images was collected from freshly resected porcine colon epithelium. The algorithm was then further validated on ex vivo image data collected from eight human subjects, selected from clinically normal appearing regions distant from grossly visible tumor in surgically resected colorectal tissue. QFEA has proven flexible in application to both mosaics and individual images, and its automated crypt detection sensitivity ranges from 71 to 94% despite intensity and contrast variation within the field of view. It also demonstrates the ability to detect and quantify differences in grossly normal regions among different subjects, suggesting the potential efficacy of this approach in detecting occult regions of dysplasia. PMID:27335893
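    A short sketch of the kind of per-image measurements named above (crypt area, circularity, and crypt count), assuming a binary crypt mask has already been obtained. The mask below is synthetic, and the circularity definition (4·pi·A/P²) is a common convention rather than necessarily the exact one used by QFEA.

    ```python
    # Hypothetical illustration: measure crypt-like regions in a synthetic binary mask.
    import numpy as np
    from skimage import draw, measure

    mask = np.zeros((200, 200), dtype=bool)
    for cy, cx, r in [(50, 60, 12), (120, 80, 15), (150, 150, 10)]:   # fake crypts
        rr, cc = draw.disk((cy, cx), r, shape=mask.shape)
        mask[rr, cc] = True

    props = measure.regionprops(measure.label(mask))
    for i, p in enumerate(props, 1):
        circularity = 4 * np.pi * p.area / (p.perimeter ** 2)
        print(f"crypt {i}: area={p.area:.0f} px, circularity={circularity:.2f}")
    print("crypt count:", len(props))
    ```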

  1. Progress in automated extraction and purification of in situ 14C from quartz: Results from the Purdue in situ 14C laboratory

    NASA Astrophysics Data System (ADS)

    Lifton, Nathaniel; Goehring, Brent; Wilson, Jim; Kubley, Thomas; Caffee, Marc

    2015-10-01

    Current extraction methods for in situ 14C from quartz [e.g., Lifton et al., (2001), Pigati et al., (2010), Hippe et al., (2013)] are time-consuming and repetitive, making them an attractive target for automation. We report on the status of in situ 14C extraction and purification systems originally automated at the University of Arizona that have now been reconstructed and upgraded at the Purdue Rare Isotope Measurement Laboratory (PRIME Lab). The Purdue in situ 14C laboratory builds on the flow-through extraction system design of Pigati et al. (2010), automating most of the procedure by retrofitting existing valves with external servo-controlled actuators, regulating the pressure of research purity O2 inside the furnace tube via a PID-based pressure controller in concert with an inlet mass flow controller, and installing an automated liquid N2 distribution system, all driven by LabView® software. A separate system for cryogenic CO2 purification, dilution, and splitting is also fully automated, ensuring a highly repeatable process regardless of the operator. We present results from procedural blanks and an intercomparison material (CRONUS-A), as well as results of experiments to increase the amount of material used in extraction, from the standard 5 g to 10 g or above. Results thus far are quite promising with procedural blanks comparable to previous work and significant improvements in reproducibility for CRONUS-A measurements. The latter analyses also demonstrate the feasibility of quantitative extraction of in situ 14C from sample masses up to 10 g. Our lab is now analyzing unknowns routinely, but lowering overall blank levels is the focus of ongoing research.

  2. Automated biphasic morphological assessment of hepatitis B-related liver fibrosis using second harmonic generation microscopy

    PubMed Central

    Wang, Tong-Hong; Chen, Tse-Ching; Teng, Xiao; Liang, Kung-Hao; Yeh, Chau-Ting

    2015-01-01

    Liver fibrosis assessment by biopsy and conventional staining scores is based on histopathological criteria. Variations in sample preparation and the use of semi-quantitative histopathological methods commonly result in discrepancies between medical centers. Thus, minor changes in liver fibrosis might be overlooked in multi-center clinical trials, leading to statistically non-significant data. Here, we developed a computer-assisted, fully automated, staining-free method for hepatitis B-related liver fibrosis assessment. In total, 175 liver biopsies were divided into training (n = 105) and verification (n = 70) cohorts. Collagen was observed using second harmonic generation (SHG) microscopy without prior staining, and hepatocyte morphology was recorded using two-photon excitation fluorescence (TPEF) microscopy. The training cohort was utilized to establish a quantification algorithm. Eleven of 19 computer-recognizable SHG/TPEF microscopic morphological features were significantly correlated with the ISHAK fibrosis stages (P < 0.001). A biphasic scoring method was applied, combining support vector machine and multivariate generalized linear models to assess the early and late stages of fibrosis, respectively, based on these parameters. The verification cohort was used to verify the scoring method, and the area under the receiver operating characteristic curve was >0.82 for liver cirrhosis detection. Since no subjective gradings are needed, interobserver discrepancies could be avoided using this fully automated method. PMID:26260921
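    As a rough illustration of the biphasic idea (not the authors' exact models or features), the sketch below routes each feature vector through an SVM that separates early from advanced stages and then scores the advanced cases with a generalized linear model; the feature values, stage labels, and the 0.5 routing threshold are synthetic assumptions.

    ```python
    # Minimal two-phase scoring sketch on synthetic SHG/TPEF-like features.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(175, 11))           # 11 morphological features per biopsy (synthetic)
    stage = rng.integers(0, 7, size=175)     # Ishak-like stages 0-6 (synthetic)

    early_clf = SVC(kernel="rbf", probability=True)        # phase 1: early vs. advanced
    early_clf.fit(X, (stage >= 3).astype(int))

    adv = stage >= 3
    late_glm = LogisticRegression(max_iter=1000)           # phase 2: scoring within advanced stages
    late_glm.fit(X[adv], (stage[adv] >= 5).astype(int))

    def biphasic_score(x):
        """Route one feature vector through the two-phase scheme."""
        p_advanced = early_clf.predict_proba(x.reshape(1, -1))[0, 1]
        if p_advanced < 0.5:
            return {"phase": "early", "p_advanced": p_advanced}
        return {"phase": "late", "p_cirrhosis": late_glm.predict_proba(x.reshape(1, -1))[0, 1]}

    print(biphasic_score(X[0]))
    ```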

  3. Automated biphasic morphological assessment of hepatitis B-related liver fibrosis using second harmonic generation microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Tong-Hong; Chen, Tse-Ching; Teng, Xiao; Liang, Kung-Hao; Yeh, Chau-Ting

    2015-08-01

    Liver fibrosis assessment by biopsy and conventional staining scores is based on histopathological criteria. Variations in sample preparation and the use of semi-quantitative histopathological methods commonly result in discrepancies between medical centers. Thus, minor changes in liver fibrosis might be overlooked in multi-center clinical trials, leading to statistically non-significant data. Here, we developed a computer-assisted, fully automated, staining-free method for hepatitis B-related liver fibrosis assessment. In total, 175 liver biopsies were divided into training (n = 105) and verification (n = 70) cohorts. Collagen was observed using second harmonic generation (SHG) microscopy without prior staining, and hepatocyte morphology was recorded using two-photon excitation fluorescence (TPEF) microscopy. The training cohort was utilized to establish a quantification algorithm. Eleven of 19 computer-recognizable SHG/TPEF microscopic morphological features were significantly correlated with the ISHAK fibrosis stages (P < 0.001). A biphasic scoring method was applied, combining support vector machine and multivariate generalized linear models to assess the early and late stages of fibrosis, respectively, based on these parameters. The verification cohort was used to verify the scoring method, and the area under the receiver operating characteristic curve was >0.82 for liver cirrhosis detection. Since no subjective gradings are needed, interobserver discrepancies could be avoided using this fully automated method.

  4. Quantification of lung tumor rotation with automated landmark extraction using orthogonal cine MRI images

    NASA Astrophysics Data System (ADS)

    Paganelli, Chiara; Lee, Danny; Greer, Peter B.; Baroni, Guido; Riboldi, Marco; Keall, Paul

    2015-09-01

    The quantification of tumor motion in sites affected by respiratory motion is of primary importance to improve treatment accuracy. To account for motion, different studies analyzed the translational component only, without focusing on the rotational component, which was quantified in a few studies on the prostate with implanted markers. The aim of our study was to propose a tool able to quantify lung tumor rotation without the use of internal markers, thus providing accurate motion detection close to critical structures such as the heart or liver. Specifically, we propose the use of an automatic feature extraction method in combination with the acquisition of fast orthogonal cine MRI images of nine lung patients. As a preliminary test, we evaluated the performance of the feature extraction method by applying it on regions of interest around (i) the diaphragm and (ii) the tumor and comparing the estimated motion with that obtained by (i) the extraction of the diaphragm profile and (ii) the segmentation of the tumor, respectively. The results confirmed the capability of the proposed method in quantifying tumor motion. Then, a point-based rigid registration was applied to the extracted tumor features between all frames to account for rotation. The median lung rotation values were −0.6 ± 2.3° and −1.5 ± 2.7° in the sagittal and coronal planes respectively, confirming the need to account for tumor rotation along with translation to improve radiotherapy treatment.
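    The in-plane rotation reported above comes from a point-based rigid registration of the extracted tumor features between frames. A minimal 2D sketch of that step, using a standard least-squares (Kabsch/Procrustes) solution on made-up feature coordinates, is shown below.

    ```python
    # Estimate the in-plane rotation between two sets of corresponding feature points.
    import numpy as np

    def rigid_rotation_deg(p, q):
        """Least-squares rotation (Kabsch) mapping centred points p onto q, in degrees."""
        p0, q0 = p - p.mean(axis=0), q - q.mean(axis=0)
        u, _, vt = np.linalg.svd(p0.T @ q0)
        d = np.sign(np.linalg.det(vt.T @ u.T))            # guard against reflections
        r = vt.T @ np.diag([1.0, d]) @ u.T
        return np.degrees(np.arctan2(r[1, 0], r[0, 0]))

    frame_a = np.array([[10.0, 5.0], [12.0, 8.0], [15.0, 6.0], [11.0, 11.0]])
    theta = np.radians(-1.5)                              # simulate a small tumor rotation
    rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    frame_b = (frame_a - frame_a.mean(axis=0)) @ rot.T + frame_a.mean(axis=0)

    print(f"estimated rotation: {rigid_rotation_deg(frame_a, frame_b):.2f} deg")
    ```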

  5. Automated extraction of clinical traits of multiple sclerosis in electronic medical records

    PubMed Central

    Davis, Mary F; Sriram, Subramaniam; Bush, William S; Denny, Joshua C; Haines, Jonathan L

    2013-01-01

    Objectives The clinical course of multiple sclerosis (MS) is highly variable, and research data collection is costly and time consuming. We evaluated natural language processing techniques applied to electronic medical records (EMR) to identify MS patients and the key clinical traits of their disease course. Materials and methods We used four algorithms based on ICD-9 codes, text keywords, and medications to identify individuals with MS from a de-identified, research version of the EMR at Vanderbilt University. Using a training dataset of the records of 899 individuals, algorithms were constructed to identify and extract detailed information regarding the clinical course of MS from the text of the medical records, including clinical subtype, presence of oligoclonal bands, year of diagnosis, year and origin of first symptom, Expanded Disability Status Scale (EDSS) scores, timed 25-foot walk scores, and MS medications. Algorithms were evaluated on a test set validated by two independent reviewers. Results We identified 5789 individuals with MS. For all clinical traits extracted, precision was at least 87% and specificity was greater than 80%. Recall values for clinical subtype, EDSS scores, and timed 25-foot walk scores were greater than 80%. Discussion and conclusion This collection of clinical data represents one of the largest databases of detailed, clinical traits available for research on MS. This work demonstrates that detailed clinical information is recorded in the EMR and can be extracted for research purposes with high reliability. PMID:24148554

  6. Comprehensive automation of the solid phase extraction gas chromatographic mass spectrometric analysis (SPE-GC/MS) of opioids, cocaine, and metabolites from serum and other matrices.

    PubMed

    Lerch, Oliver; Temme, Oliver; Daldrup, Thomas

    2014-07-01

    The analysis of opioids, cocaine, and metabolites from blood serum is a routine task in forensic laboratories. Commonly, the employed methods include many manual or partly automated steps like protein precipitation, dilution, solid phase extraction, evaporation, and derivatization preceding a gas chromatography (GC)/mass spectrometry (MS) or liquid chromatography (LC)/MS analysis. In this study, a comprehensively automated method was developed from a validated, partly automated routine method. This was possible by replicating method parameters on the automated system. Only marginal optimization of parameters was necessary. The automation relying on an x-y-z robot after manual protein precipitation includes the solid phase extraction, evaporation of the eluate, derivatization (silylation with N-methyl-N-trimethylsilyltrifluoroacetamide, MSTFA), and injection into a GC/MS. A quantitative analysis of almost 170 authentic serum samples and more than 50 authentic samples of other matrices like urine, different tissues, and heart blood for cocaine, benzoylecgonine, methadone, morphine, codeine, 6-monoacetylmorphine, dihydrocodeine, and 7-aminoflunitrazepam was conducted with both methods, proving that the analytical results are equivalent even near the limits of quantification (low ng/ml range). To the best of our knowledge, this application is the first one reported in the literature employing this sample preparation system. PMID:24788888

  7. Evaluation of an Automated Information Extraction Tool for Imaging Data Elements to Populate a Breast Cancer Screening Registry.

    PubMed

    Lacson, Ronilda; Harris, Kimberly; Brawarsky, Phyllis; Tosteson, Tor D; Onega, Tracy; Tosteson, Anna N A; Kaye, Abby; Gonzalez, Irina; Birdwell, Robyn; Haas, Jennifer S

    2015-10-01

    Breast cancer screening is central to early breast cancer detection. Identifying and monitoring process measures for screening is a focus of the National Cancer Institute's Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) initiative, which requires participating centers to report structured data across the cancer screening continuum. We evaluate the accuracy of automated information extraction of imaging findings from radiology reports, which are available as unstructured text. We present prevalence estimates of imaging findings for breast imaging received by women who obtained care in a primary care network participating in PROSPR (n = 139,953 radiology reports) and compared automatically extracted data elements to a "gold standard" based on manual review for a validation sample of 941 randomly selected radiology reports, including mammograms, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging (MRI). The prevalence of imaging findings varies by data element and modality (e.g., suspicious calcification noted in 2.6% of screening mammograms, 12.1% of diagnostic mammograms, and 9.4% of tomosynthesis exams). In the validation sample, the accuracy of identifying imaging findings, including suspicious calcifications, masses, and architectural distortion (on mammogram and tomosynthesis); masses, cysts, non-mass enhancement, and enhancing foci (on MRI); and masses and cysts (on ultrasound), ranges from 0.8 to 1.0 for recall, precision, and F-measure. Information extraction tools can be used for accurate documentation of imaging findings as structured data elements from text reports for a variety of breast imaging modalities. These data can be used to populate screening registries to help elucidate more effective breast cancer screening processes. PMID:25561069
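    For reference, the recall, precision, and F-measure figures quoted above can be computed from the extracted findings and the manually reviewed gold standard as in the small sketch below; the report identifiers and finding labels are invented for illustration.

    ```python
    # Precision/recall/F-measure of extracted findings against a gold standard.
    def prf(extracted: set, gold: set):
        tp = len(extracted & gold)
        precision = tp / len(extracted) if extracted else 0.0
        recall = tp / len(gold) if gold else 0.0
        f = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        return precision, recall, f

    extracted = {("rpt1", "mass"), ("rpt2", "suspicious calcification"), ("rpt3", "cyst")}
    gold = {("rpt1", "mass"), ("rpt2", "suspicious calcification"), ("rpt4", "mass")}
    print("precision=%.2f recall=%.2f F=%.2f" % prf(extracted, gold))
    ```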

  8. An energy minimization approach to automated extraction of regular building footprints from airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    He, Y.; Zhang, C.; Fraser, C. S.

    2014-08-01

    This paper presents an automated approach to the extraction of building footprints from airborne LiDAR data based on energy minimization. Automated 3D building reconstruction in complex urban scenes has been a long-standing challenge in photogrammetry and computer vision. Building footprints constitute a fundamental component of a 3D building model and they are useful for a variety of applications. Airborne LiDAR provides a large-scale elevation representation of urban scenes and as such is an important data source for object reconstruction in spatial information systems. However, LiDAR points on building edges often exhibit a jagged pattern, partially due to either occlusion from neighbouring objects, such as overhanging trees, or to the nature of the data itself, including unavoidable noise and irregular point distributions. The explicit 3D reconstruction may thus result in irregular or incomplete building polygons. In the presented work, a vertex-driven Douglas-Peucker method is developed to generate polygonal hypotheses from points forming initial building outlines. An energy function is adopted to examine and evaluate each hypothesis and the optimal polygon is determined through energy minimization. The energy minimization also plays a key role in bridging gaps, where the building outlines are ambiguous due to insufficient LiDAR points. In formulating the energy function, hard constraints such as parallelism and perpendicularity of building edges are imposed, and local and global adjustments are applied. The developed approach has been extensively tested and evaluated on datasets with varying point cloud density over different terrain types. Results are presented and analysed. The successful reconstruction of building footprints, of varying structural complexity, along with a quantitative assessment employing accurate reference data, demonstrates the practical potential of the proposed approach.
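    The sketch below illustrates the general flavour of such an energy formulation, not the authors' actual function: each polygonal hypothesis is scored by a data term (distance of the outline points to the polygon edges) plus a regularity term rewarding parallel and perpendicular edges, and the lowest-energy hypothesis is kept. The weight `lam`, the toy outline points, and the candidate polygons are all assumptions.

    ```python
    # Toy energy minimization over polygonal footprint hypotheses.
    import numpy as np

    def point_to_segment(p, a, b):
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def energy(polygon, points, lam=5.0):
        poly = np.asarray(polygon, dtype=float)
        # Data term: mean distance of outline points to the nearest polygon edge.
        data = np.mean([min(point_to_segment(p, poly[i], poly[(i + 1) % len(poly)])
                            for i in range(len(poly))) for p in points])
        # Regularity term: edge directions should differ by ~0 or ~90 degrees.
        dirs = [np.arctan2(*(poly[(i + 1) % len(poly)] - poly[i])[::-1]) for i in range(len(poly))]
        offs = [np.degrees(d - dirs[0]) % 90.0 for d in dirs]
        reg = np.mean([min(o, 90.0 - o) / 45.0 for o in offs])
        return data + lam * reg

    outline = np.array([[0.1, 0.0], [4.9, 0.2], [5.1, 3.0], [0.0, 2.9]])   # jagged LiDAR-like outline
    hypotheses = [[(0, 0), (5, 0), (5, 3), (0, 3)],        # regular rectangle
                  [(0, 0), (5, 0.4), (5.2, 3), (0, 2.8)]]  # skewed alternative
    print(min(hypotheses, key=lambda h: energy(h, outline)))
    ```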

  9. Validation of an automated solid-phase extraction method for the analysis of 23 opioids, cocaine, and metabolites in urine with ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Ramírez Fernández, María del Mar; Van Durme, Filip; Wille, Sarah M R; di Fazio, Vincent; Kummer, Natalie; Samyn, Nele

    2014-06-01

    The aim of this work was to automate a sample preparation procedure extracting morphine, hydromorphone, oxymorphone, norcodeine, codeine, dihydrocodeine, oxycodone, 6-monoacetyl-morphine, hydrocodone, ethylmorphine, benzoylecgonine, cocaine, cocaethylene, tramadol, meperidine, pentazocine, fentanyl, norfentanyl, buprenorphine, norbuprenorphine, propoxyphene, methadone and 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine from urine samples. Samples were extracted by solid-phase extraction (SPE) with cation exchange cartridges using a TECAN Freedom Evo 100 base robotic system, including a hydrolysis step prior to extraction when required. Block modules were carefully selected in order to use the same consumable material as in manual procedures to reduce cost and/or manual sample transfers. Moreover, the present configuration included pressure-monitored pipetting, increasing pipetting accuracy and detecting sampling errors. The compounds were then separated in a chromatographic run of 9 min using a BEH Phenyl analytical column on an ultra-performance liquid chromatography-tandem mass spectrometry system. Optimization of the SPE was performed with different wash conditions and elution solvents. Intra- and inter-day relative standard deviations (RSDs) were within ±15% and bias was within ±15% for most of the compounds. Recovery was >69% (RSD < 11%) and matrix effects ranged from 1 to 26% when compensated with the internal standard. The limits of quantification ranged from 3 to 25 ng/mL depending on the compound. No cross-contamination in the automated SPE system was observed. The extracted samples were stable for 72 h in the autosampler (4°C). This method was applied to authentic samples (from forensic and toxicology cases) and to proficiency testing schemes containing cocaine, heroin, buprenorphine and methadone, offering fast and reliable results. Automation resulted in improved precision and accuracy, and minimal operator intervention, leading to safer sample

  10. Linearly Supporting Feature Extraction for Automated Estimation of Stellar Atmospheric Parameters

    NASA Astrophysics Data System (ADS)

    Li, Xiangru; Lu, Yu; Comte, Georges; Luo, Ali; Zhao, Yongheng; Wang, Yongjun

    2015-05-01

    We describe a scheme to extract linearly supporting (LSU) features from stellar spectra to automatically estimate the atmospheric parameters Teff, log g, and [Fe/H]. “Linearly supporting” means that the atmospheric parameters can be accurately estimated from the extracted features through a linear model. The successive steps of the process are as follows: first, decompose the spectrum using a wavelet packet (WP) and represent it by the derived decomposition coefficients; second, detect representative spectral features from the decomposition coefficients using the proposed method, least absolute shrinkage and selection operator (LARSbs); third, estimate the atmospheric parameters Teff, log g, and [Fe/H] from the detected features using a linear regression method. One prominent characteristic of this scheme is its ability to evaluate quantitatively the contribution of each detected feature to the atmospheric parameter estimate and also to trace back the physical significance of that feature. This work also shows that the usefulness of a component depends on both the wavelength and frequency. The proposed scheme has been evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS)/SEGUE and synthetic spectra calculated from Kurucz's NEWODF models. On real spectra, we extracted 23 features to estimate Teff, 62 features for log g, and 68 features for [Fe/H]. Test consistencies between our estimates and those provided by the Spectroscopic Parameter Pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0062 dex for log Teff (83 K for Teff), 0.2345 dex for log g, and 0.1564 dex for [Fe/H]. For the synthetic spectra, the MAE test accuracies are 0.0022 dex for log Teff (32 K for Teff), 0.0337 dex for log g, and 0.0268 dex for [Fe/H].
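    A rough sketch of that three-step pipeline using generic library pieces (pywt for the wavelet-packet decomposition, scikit-learn's least-angle regression for sparse feature selection, then an ordinary linear model) is given below. The spectra and labels are synthetic stand-ins rather than SDSS/SEGUE data, and the wavelet, level, and feature-count choices are assumptions, not the values used in the paper.

    ```python
    # Wavelet-packet coefficients -> sparse feature selection -> linear estimate of Teff.
    import numpy as np
    import pywt
    from sklearn.linear_model import Lars, LinearRegression

    rng = np.random.default_rng(1)
    spectra = rng.normal(size=(200, 1024))        # 200 fake spectra, 1024 pixels each
    teff = rng.uniform(4000, 8000, size=200)      # fake effective temperatures (K)

    def wp_coefficients(flux, wavelet="db4", level=4):
        wp = pywt.WaveletPacket(data=flux, wavelet=wavelet, maxlevel=level)
        return np.concatenate([node.data for node in wp.get_level(level, order="natural")])

    X = np.array([wp_coefficients(s) for s in spectra])

    lars = Lars(n_nonzero_coefs=23).fit(X, np.log10(teff))   # pick "linearly supporting" coefficients
    selected = np.flatnonzero(lars.coef_)

    model = LinearRegression().fit(X[:, selected], np.log10(teff))
    estimate = 10 ** model.predict(X[:1, selected])[0]
    print(f"{len(selected)} features selected; example estimate: {estimate:.0f} K")
    ```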

  11. Pedestrian detection in thermal images: An automated scale based region extraction with curvelet space validation

    NASA Astrophysics Data System (ADS)

    Lakshmi, A.; Faheema, A. G. J.; Deodhare, Dipti

    2016-05-01

    Pedestrian detection is a key problem in night vision processing with a dozen applications that will positively impact the performance of autonomous systems. Despite significant progress, our study shows that the performance of state-of-the-art thermal image pedestrian detectors still has much room for improvement. The purpose of this paper is to overcome the challenges faced by thermal image pedestrian detectors, which employ intensity based Region Of Interest (ROI) extraction followed by feature based validation. The most striking disadvantage faced by the first module, ROI extraction, is the failed detection of cloth-insulated parts. To overcome this setback, this paper employs an algorithm and a principle of region growing pursuit tuned to the scale of the pedestrian. The statistics subtended by the pedestrian vary drastically with scale, and a deviation-from-normality approach facilitates scale detection. Further, the paper offers an adaptive mathematical threshold to resolve the problem of subtracting the background while also extracting cloth-insulated parts. The inherent false positives of the ROI extraction module are limited by the choice of good features in the pedestrian validation step. One such feature is the curvelet feature, which has been used extensively in optical images but has as yet no reported results in thermal images. This has been used to arrive at a pedestrian detector with a reduced false positive rate. This work is the first venture made to scrutinize the utility of curvelets for characterizing pedestrians in thermal images. An attempt has also been made to improve the speed of curvelet transform computation. The classification task is realized through the use of the well known methodology of Support Vector Machines (SVMs). The proposed method is substantiated with qualified evaluation methodologies that permit us to carry out probing and informative comparisons across state-of-the-art features, including deep learning methods, with six

  12. Automated DICOM metadata and volumetric anatomical information extraction for radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Papamichail, D.; Ploussi, A.; Kordolaimi, S.; Karavasilis, E.; Papadimitroulas, P.; Syrgiamiotis, V.; Efstathopoulos, E.

    2015-09-01

    Patient-specific dosimetry calculations based on simulation techniques have as a prerequisite the modeling of the modality system and the creation of voxelized phantoms. This procedure requires knowledge of the scanning parameters and patients' information included in a DICOM file, as well as image segmentation. However, the extraction of this information is complicated and time-consuming. The objective of this study was to develop a simple graphical user interface (GUI) to (i) automatically extract metadata from every slice image of a DICOM file in a single query and (ii) interactively specify the regions of interest (ROI) without explicit access to the radiology information system. The user-friendly application was developed in the Matlab environment. The user can select a series of DICOM files and manage their text and graphical data. The metadata are automatically formatted and presented to the user as a Microsoft Excel file. The volumetric maps are formed by interactively specifying the ROIs and by assigning a specific value to every ROI. The result is stored in DICOM format for data and trend analysis. The developed GUI is easy to use, fast, and constitutes a very useful tool for individualized dosimetry. One of the future goals is to incorporate remote access to a PACS server.
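    For the metadata half of the task, a minimal command-line sketch (outside the Matlab GUI described above) could rely on pydicom to pull a handful of header tags from every slice into a spreadsheet-friendly file; the tag list, folder name, and CSV output are illustrative assumptions.

    ```python
    # Dump selected DICOM header fields of every slice in a folder to a CSV table.
    import csv
    from pathlib import Path
    import pydicom

    TAGS = ["PatientID", "Modality", "KVP", "ExposureTime", "SliceThickness", "PixelSpacing"]

    def export_metadata(dicom_dir: str, out_csv: str) -> None:
        with open(out_csv, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["file"] + TAGS)
            for path in sorted(Path(dicom_dir).glob("*.dcm")):
                ds = pydicom.dcmread(path, stop_before_pixels=True)   # header only, faster
                writer.writerow([path.name] + [ds.get(tag, "") for tag in TAGS])

    # export_metadata("study_slices/", "metadata.csv")   # hypothetical paths
    ```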

  13. Automated feature extraction and spatial organization of seafloor pockmarks, Belfast Bay, Maine, USA

    USGS Publications Warehouse

    Andrews, B.D.; Brothers, L.L.; Barnhardt, W.A.

    2010-01-01

    Seafloor pockmarks occur worldwide and may represent millions of m³ of continental shelf erosion, but few numerical analyses of their morphology and spatial distribution exist. We introduce a quantitative definition of pockmark morphology and, based on this definition, propose a three-step geomorphometric method to identify and extract pockmarks from high-resolution swath bathymetry. We apply this GIS-implemented approach to 25 km² of bathymetry collected in the Belfast Bay, Maine (USA) pockmark field. Our model extracted 1767 pockmarks and found a linear pockmark depth-to-diameter ratio field-wide. Mean pockmark depth is 7.6 m and mean diameter is 84.8 m. Pockmark distribution is non-random, and nearly half of the field's pockmarks occur in chains. The most prominent chains are oriented semi-normal to the steepest gradient in Holocene sediment thickness. A descriptive model yields field-wide spatial statistics indicating that pockmarks are distributed in non-random clusters. Results enable quantitative comparison of pockmarks in fields worldwide as well as similar concave features, such as impact craters, dolines, or salt pools. © 2010.
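    Two of the field-wide statistics mentioned above, the depth-versus-diameter relationship and a simple test for non-random spacing, can be sketched as follows; the coordinates, depths, and diameters are synthetic placeholders, and the Clark-Evans ratio stands in for whatever spatial statistic the descriptive model actually used.

    ```python
    # Depth-diameter fit and a Clark-Evans nearest-neighbour ratio for synthetic pockmarks.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(3)
    xy = rng.uniform(0, 5000, size=(1767, 2))                 # pockmark centres (m)
    diameter = rng.uniform(20, 200, size=1767)
    depth = 0.09 * diameter + rng.normal(0, 1.0, size=1767)   # roughly linear relation

    slope, intercept = np.polyfit(diameter, depth, deg=1)
    print(f"depth ~ {slope:.3f} * diameter + {intercept:.2f}")

    # Clark-Evans ratio R = observed mean NN distance / expectation under complete randomness.
    d_nn = cKDTree(xy).query(xy, k=2)[0][:, 1]                # nearest-neighbour distances
    density = len(xy) / (5000.0 * 5000.0)
    r = d_nn.mean() / (0.5 / np.sqrt(density))
    print(f"Clark-Evans R = {r:.2f}  (R < 1 suggests clustering)")
    ```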

  14. A novel approach for automated shoreline extraction from remote sensing images using low level programming

    NASA Astrophysics Data System (ADS)

    Rigos, Anastasios; Vaiopoulos, Aristidis; Skianis, George; Tsekouras, George; Drakopoulos, Panos

    2015-04-01

    Tracking coastline changes is a crucial task in the context of coastal management, and synoptic remotely sensed data has become an essential tool for this purpose. In this work, and within the framework of the BeachTour project, we introduce a new method for shoreline extraction from high resolution satellite images. It was applied to two images taken by the WorldView-2 satellite (7 channels, 2 m resolution) during July 2011 and August 2014. The location is the well-known tourist destination of Laganas beach spanning 5 km along the southern part of Zakynthos Island, Greece. The atmospheric correction was performed with the ENVI FLAASH procedure and the final images were validated against hyperspectral field measurements. Using three channels (CH2=blue, CH3=green and CH7=near infrared) the Modified Redness Index image was calculated according to: MRI = (CH7)²/[CH2×(CH3)³]. MRI has the property that its value keeps increasing as the water becomes shallower. This is followed by an abrupt reduction trend at the location of the wet sand up to the point where the dry shore face begins. After that it remains low-valued throughout the beach zone. Images based on this index were used for the shoreline extraction process that included the following steps: a) On the MRI based image, only an area near the shoreline was kept (this process is known as image masking). b) On the masked image the Canny edge detector operator was applied. c) Of all edges discovered in step (b) only the biggest was kept. d) If the line revealed in step (c) was unacceptable, i.e. not defining the shoreline or defining only part of it, then either more than one area from step (c) was kept, or on the MRI image the pixel values were bound in a particular interval [B_low, B_high] and only the ones belonging in this interval were kept. Then, steps (a)-(d) were repeated. Using this method, which is still under development, we were able to extract the shoreline position and reveal its changes during the 3-year period
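    Steps (a)-(c) of the procedure can be sketched in a few lines, assuming the three channel images are already co-registered arrays; the random test data, the Canny sigma, and the near-shore mask below are placeholders rather than values from the study, and step (d) (interactive re-masking or value bounding) is omitted.

    ```python
    # MRI index, masking, Canny edges, and largest connected edge component.
    import numpy as np
    from skimage import feature, measure

    def extract_shoreline(ch2, ch3, ch7, near_shore_mask):
        mri = ch7 ** 2 / (ch2 * ch3 ** 3 + 1e-12)    # Modified Redness Index
        mri = np.where(near_shore_mask, mri, 0.0)    # (a) keep only the area near the shoreline
        edges = feature.canny(mri, sigma=2.0)        # (b) Canny edge detection
        labels = measure.label(edges)                # (c) keep only the largest edge
        if labels.max() == 0:
            return np.zeros_like(edges, dtype=bool)
        sizes = np.bincount(labels.ravel())[1:]
        return labels == (np.argmax(sizes) + 1)

    rng = np.random.default_rng(2)
    shape = (120, 160)
    ch2, ch3, ch7 = (rng.uniform(0.1, 1.0, shape) for _ in range(3))
    mask = np.ones(shape, dtype=bool)
    print(extract_shoreline(ch2, ch3, ch7, mask).sum(), "shoreline pixels")
    ```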

  15. Automated on-line renewable solid-phase extraction-liquid chromatography exploiting multisyringe flow injection-bead injection lab-on-valve analysis.

    PubMed

    Quintana, José Benito; Miró, Manuel; Estela, José Manuel; Cerdà, Víctor

    2006-04-15

    In this paper, the third generation of flow injection analysis, also named the lab-on-valve (LOV) approach, is proposed for the first time as a front end to high-performance liquid chromatography (HPLC) for on-line solid-phase extraction (SPE) sample processing by exploiting the bead injection (BI) concept. The proposed microanalytical system based on discontinuous programmable flow features automated packing (and withdrawal after single use) of a small amount of sorbent (<5 mg) into the microconduits of the flow network and quantitative elution of sorbed species into a narrow band (150 microL of 95% MeOH). The hyphenation of multisyringe flow injection analysis (MSFIA) with BI-LOV prior to HPLC analysis is utilized for on-line postextraction treatment to ensure chemical compatibility between the eluate medium and the initial HPLC gradient conditions. This circumvents the band-broadening effect commonly observed in conventional on-line SPE-based sample processors due to the low eluting strength of the mobile phase. The potential of the novel MSFI-BI-LOV hyphenation for on-line handling of complex environmental and biological samples prior to reversed-phase chromatographic separations was assessed for the expeditious determination of five acidic pharmaceutical residues (viz., ketoprofen, naproxen, bezafibrate, diclofenac, and ibuprofen) and one metabolite (viz., salicylic acid) in surface water, urban wastewater, and urine. To this end, the copolymeric divinylbenzene-co-n-vinylpyrrolidone beads (Oasis HLB) were utilized as renewable sorptive entities in the micromachined unit. The automated analytical method features relative recovery percentages of >88%, limits of detection within the range 0.02-0.67 ng mL⁻¹, and coefficients of variation <11% for the column renewable mode and gives rise to a drastic reduction in operation costs (approximately 25-fold) as compared to on-line column switching systems. PMID:16615800

  16. Automated structure extraction and XML conversion of life science database flat files.

    PubMed

    Philippi, Stephan; Köhler, Jacob

    2006-10-01

    In the light of the increasing number of biological databases, their integration is a fundamental prerequisite for answering complex biological questions. Database integration, therefore, is an important area of research in bioinformatics. Since most of the publicly available life science databases are still exclusively exchanged by means of proprietary flat files, database integration requires parsers for very different flat file formats. Unfortunately, the development and maintenance of database specific flat file parsers is a nontrivial and time-consuming task, which takes considerable effort in large-scale integration scenarios. This paper introduces heuristically based concepts for automatic structure extraction from life science database flat files. On the basis of these concepts the FlatEx prototype is developed for the automatic conversion of flat files into XML representations. PMID:17044405
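    As a toy illustration of the heuristic idea (not the FlatEx implementation itself), the snippet below detects a "TAG   value" line structure in a flat-file record and emits one XML element per tag, appending continuation lines to the preceding element; the regular expression and the sample record are assumptions.

    ```python
    # Heuristic flat-file record to XML conversion.
    import re
    import xml.etree.ElementTree as ET

    def flat_record_to_xml(lines, record_tag="entry"):
        root = ET.Element(record_tag)
        current = None
        for line in lines:
            m = re.match(r"^([A-Z]{2,12})\s{2,}(.*)$", line.rstrip())
            if m:                                            # a new "TAG   value" field
                current = ET.SubElement(root, m.group(1).lower())
                current.text = m.group(2)
            elif current is not None and line.strip():       # continuation line
                current.text += " " + line.strip()
        return root

    record = ["ID   EXAMPLE_ENTRY",
              "DE   A made-up description that",
              "     continues on a second line",
              "OS   Homo sapiens"]
    print(ET.tostring(flat_record_to_xml(record), encoding="unicode"))
    ```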

  17. Robust semi-automated path extraction for visualising stenosis of the coronary arteries.

    PubMed

    Mueller, Daniel; Maeder, Anthony

    2008-09-01

    Computed tomography angiography (CTA) is useful for diagnosing and planning treatment of heart disease. However, contrast agent in surrounding structures (such as the aorta and left ventricle) makes 3D visualisation of the coronary arteries difficult. This paper presents a composite method employing segmentation and volume rendering to overcome this issue. A key contribution is a novel Fast Marching minimal path cost function for vessel centreline extraction. The resultant centreline is used to compute a measure of vessel lumen, which indicates the degree of stenosis (narrowing of a vessel). Two volume visualisation techniques are presented which utilise the segmented arteries and lumen measure. The system is evaluated and demonstrated using synthetic and clinically obtained datasets. PMID:18603408

  18. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a lot of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify race within a racially closely related group. As a sample of such a group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results of the psychological experiments suggest that race perception is an ability that can be learned. Eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This is indispensable fundamental research on race perception, which is essential for the establishment of a human-like race recognition system.
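    The PCA step can be sketched in an eigenface-style fashion on flattened, aligned face images; the random arrays below merely stand in for the Japanese and Thai face samples used in the study, and the number of components is an arbitrary choice.

    ```python
    # PCA feature extraction and reconstruction on flattened face images.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)
    faces = rng.random(size=(60, 64 * 64))     # 60 aligned 64x64 faces, flattened (synthetic)

    pca = PCA(n_components=20, whiten=True)
    features = pca.fit_transform(faces)        # per-face feature vectors
    print(features.shape, "explained variance:", pca.explained_variance_ratio_[:3].round(3))

    # A face can be synthesized/reconstructed from a feature vector:
    reconstructed = pca.inverse_transform(features[:1]).reshape(64, 64)
    print(reconstructed.shape)
    ```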

  19. Dispersive liquid-liquid microextraction combined with semi-automated in-syringe back extraction as a new approach for the sample preparation of ionizable organic compounds prior to liquid chromatography.

    PubMed

    Melwanki, Mahaveer B; Fuh, Ming-Ren

    2008-07-11

    Dispersive liquid-liquid microextraction (DLLME) followed by a newly designed semi-automated in-syringe back extraction technique has been developed as an extraction methodology for polar organic compounds prior to liquid chromatography (LC) measurement. The method is based on the formation of tiny droplets of the extractant in the sample solution using a water-immiscible organic solvent (extractant) dissolved in a water-miscible organic dispersive solvent. Extraction of the analytes from the aqueous sample into the dispersed organic droplets took place. The extracting organic phase was separated by centrifuging and the sedimented phase was withdrawn into a syringe. Then in-syringe back extraction was utilized to extract the analytes into an aqueous solution prior to LC analysis. Clenbuterol (CB), a basic organic compound used as a model, was extracted from a basified aqueous sample using 25 microL tetrachloroethylene (TCE, extraction solvent) dissolved in 500 microL acetone (as a dispersive solvent). After separation of the organic extracting phase by centrifuging, CB enriched in the TCE phase was back extracted into 10 microL of 1% aqueous formic acid (FA) within the syringe. Back extraction was facilitated by repeatedly moving the plunger back and forth within the barrel of the syringe, assisted by a syringe pump. Due to the plunger movement, a thin organic film is formed on the inner layer of the syringe that comes in contact with the acidic aqueous phase. Here, CB, a basic analyte, is protonated and back extracted into the FA. Various parameters affecting the extraction efficiency, viz., choice of extraction and dispersive solvent, salt effect, speed of the syringe pump, back extraction time period, and effect of concentration of base and acid, were evaluated. Under optimum conditions, precision, linearity (correlation coefficient, r² = 0.9966 over the concentration range of 10-1000 ng mL⁻¹ CB), detection limit (4.9 ng mL⁻¹), enrichment factor (175), relative

  20. Superheated liquid extraction of oleuropein and related biophenols from olive leaves.

    PubMed

    Japón-Luján, R; Luque de Castro, M D

    2006-12-15

    Oleuropein and other healthy olive biophenols (OBPs) such as verbacoside, apigenin-7-glucoside and luteolin-7-glucoside have been extracted from olive leaves by using superheated liquids and a static-dynamic approach. Multivariate methodology has been used to carry out a detailed optimisation of the extraction. Under the optimal working conditions, complete removal without degradation of the target analytes was achieved in 13 min. The extract was injected into a chromatograph-photodiode array detector assembly for individual separation-quantification. The proposed approach - which provides more concentrated extracts than previous alternatives - is very useful to study matrix-extractant analytes partition. In addition, the efficacy of superheated liquids to extract OBPs, the simplicity of the experimental setup, its easy automation and low acquisition and maintenance costs make the industrial implementation of the proposed method advisable. PMID:17045596

  1. Automated extraction of absorption features from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Geophysical and Environmental Research Imaging Spectrometer (GERIS) data

    NASA Technical Reports Server (NTRS)

    Kruse, Fred A.; Calvin, Wendy M.; Seznec, Olivier

    1988-01-01

    Automated techniques were developed for the extraction and characterization of absorption features from reflectance spectra. The absorption feature extraction algorithms were successfully tested on laboratory, field, and aircraft imaging spectrometer data. A suite of laboratory spectra of the most common minerals was analyzed and absorption band characteristics tabulated. A prototype expert system was designed, implemented, and successfully tested to allow identification of minerals based on the extracted absorption band characteristics. AVIRIS spectra for a site in the northern Grapevine Mountains, Nevada, have been characterized and the minerals sericite (fine grained muscovite) and dolomite were identified. The minerals kaolinite, alunite, and buddingtonite were identified and mapped for a site at Cuprite, Nevada, using the feature extraction algorithms on the new Geophysical and Environmental Research 64 channel imaging spectrometer (GERIS) data. The feature extraction routines (written in FORTRAN and C) were interfaced to the expert system (written in PROLOG) to allow both efficient processing of numerical data and logical spectrum analysis.
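    The kind of absorption-band characterization described above can be sketched as a straight-line continuum removal over a window followed by a measurement of band centre, depth, and width; the synthetic reflectance spectrum below is only a stand-in for laboratory or AVIRIS/GERIS spectra.

    ```python
    # Continuum removal and absorption-feature characterization on a synthetic spectrum.
    import numpy as np

    def absorption_feature(wavelength, reflectance):
        # Straight-line continuum between the window end points, then continuum removal.
        continuum = np.interp(wavelength, wavelength[[0, -1]], reflectance[[0, -1]])
        cr = reflectance / continuum
        i = np.argmin(cr)
        depth = 1.0 - cr[i]
        half = 1.0 - depth / 2.0
        inside = np.flatnonzero(cr <= half)
        width = wavelength[inside[-1]] - wavelength[inside[0]] if inside.size else 0.0
        return {"center_um": wavelength[i], "depth": depth, "fwhm_um": width}

    wl = np.linspace(2.0, 2.4, 200)                                   # micrometres
    refl = 0.6 - 0.15 * np.exp(-((wl - 2.2) ** 2) / (2 * 0.01 ** 2))  # fake absorption band
    print(absorption_feature(wl, refl))
    ```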

  2. Dried Blood Spot Proteomics: Surface Extraction of Endogenous Proteins Coupled with Automated Sample Preparation and Mass Spectrometry Analysis

    NASA Astrophysics Data System (ADS)

    Martin, Nicholas J.; Bunch, Josephine; Cooper, Helen J.

    2013-08-01

    Dried blood spots offer many advantages as a sample format including ease and safety of transport and handling. To date, the majority of mass spectrometry analyses of dried blood spots have focused on small molecules or hemoglobin. However, dried blood spots are a potentially rich source of protein biomarkers, an area that has been overlooked. To address this issue, we have applied an untargeted bottom-up proteomics approach to the analysis of dried blood spots. We present an automated and integrated method for extraction of endogenous proteins from the surface of dried blood spots and sample preparation via trypsin digestion by use of the Advion Biosciences Triversa Nanomate robotic platform. Liquid chromatography tandem mass spectrometry of the resulting digests enabled identification of 120 proteins from a single dried blood spot. The proteins identified cross a concentration range of four orders of magnitude. The method is evaluated and the results discussed in terms of the proteins identified and their potential use as biomarkers in screening programs.

  3. Automated and portable solid phase extraction platform for immuno-detection of 17β-estradiol in water.

    PubMed

    Heub, Sarah; Tscharner, Noe; Monnier, Véronique; Kehl, Florian; Dittrich, Petra S; Follonier, Stéphane; Barbe, Laurent

    2015-02-13

    A fully automated and portable system for solid phase extraction (SPE) has been developed for the analysis of the natural hormone 17β-estradiol (E2) in environmental water by enzyme linked immuno-sorbent assay (ELISA). The system has been validated with de-ionized and artificial sea water as model samples and allowed for pre-concentration of E2 at levels of 1, 10 and 100 ng/L with only 100 ml of sample. Recoveries ranged from 24±3% to 107±6% depending on the concentration and sample matrix. The method successfully allowed us to determine the concentration of two seawater samples. A concentration of 15.1±0.3 ng/L of E2 was measured in a sample obtained from a food production process, and 8.8±0.7 ng/L in a sample from the Adriatic Sea. The system would be suitable for continuous monitoring of water quality as it is user friendly, and as the method is reproducible and fully compatible with the analysis of water samples by simple immunoassays and other detection methods such as biosensors. PMID:25604269

  4. Quantification of rosuvastatin in human plasma by automated solid-phase extraction using tandem mass spectrometric detection.

    PubMed

    Hull, C K; Penman, A D; Smith, C K; Martin, P D

    2002-06-01

    An assay employing automated solid-phase extraction (SPE) followed by high-performance liquid chromatography with positive ion TurboIonspray tandem mass spectrometry (LC-MS-MS) was developed and validated for the quantification of rosuvastatin (Crestor) in human plasma. Rosuvastatin is a hydroxy-methyl glutaryl coenzyme A reductase inhibitor currently under development by AstraZeneca. The standard curve range in human plasma was 0.1-30 ng/ml with a lower limit of quantification (LLOQ) verified at 0.1 ng/ml. Inaccuracy was less than 8% and imprecision less than ±15% at all concentration levels. There was no interference from endogenous substances. The analyte was stable in human plasma following three freeze/thaw cycles and for up to 6 months following storage at both −20 and −70 °C. The assay was successfully applied to the analysis of rosuvastatin in human plasma samples derived from clinical trials, allowing the pharmacokinetics of the compound to be determined. PMID:12007766

  5. Automated Feature Extraction in Brain Tumor by Magnetic Resonance Imaging Using Gaussian Mixture Models

    PubMed Central

    Chaddad, Ahmad

    2015-01-01

    This paper presents a novel method for Glioblastoma (GBM) feature extraction based on Gaussian mixture model (GMM) features using MRI. We addressed the task of using the new features to identify GBM in T1 and T2 weighted images (T1-WI, T2-WI) and Fluid-Attenuated Inversion Recovery (FLAIR) MR images. A pathologic area was detected using multithresholding segmentation with morphological operations of MR images. Multiclassifier techniques were considered to evaluate the performance of the feature based scheme in terms of its capability to discriminate GBM and normal tissue. GMM features demonstrated the best performance in a comparative study against principal component analysis (PCA) and wavelet based features. For the T1-WI, the accuracy performance was 97.05% (AUC = 92.73%) with 0.00% missed detection and 2.95% false alarm. In the T2-WI, the same accuracy (97.05%, AUC = 91.70%) was achieved with 2.95% missed detection and 0.00% false alarm. In FLAIR mode the accuracy decreased to 94.11% (AUC = 95.85%) with 0.00% missed detection and 5.89% false alarm. These experimental results are promising for characterizing tumor heterogeneity and hence for the early treatment of GBM. PMID:26136774
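    A small sketch of deriving GMM-based features from the intensities of a segmented region, in the spirit of the approach above, is shown below; the "lesion" intensities are synthetic, and using three mixture components is an illustrative assumption rather than the paper's setting.

    ```python
    # Fit a Gaussian mixture to ROI intensities and use its parameters as features.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    roi_intensities = np.concatenate([rng.normal(80, 8, 400),     # fake T1-WI lesion voxels
                                      rng.normal(140, 12, 300),
                                      rng.normal(200, 10, 150)]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=3, random_state=0).fit(roi_intensities)
    # Feature vector: component weights, means, and standard deviations.
    features = np.concatenate([gmm.weights_, gmm.means_.ravel(),
                               np.sqrt(gmm.covariances_).ravel()])
    print(features.round(2))
    ```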

  6. Towards the automated geomorphometric extraction of talus slopes in Martian landscapes

    NASA Astrophysics Data System (ADS)

    Podobnikar, Tomaž; Székely, Balázs

    2015-01-01

    Terrestrial talus slopes are a common feature of mountainous environments. Their geomorphic form is determined by their being constituted of scree, or similar loose and often poorly sorted material. Martian talus slopes are governed by the different nature of the Martian environment, namely: weaker gravity, the wide availability of loose material, the lack of fluvial erosion and the typicality of large escarpments; all these factors make talus slopes a more striking areomorphic feature on Mars than on Earth. This paper concerns the development of a method for the numerical geomorphometric analysis, parameterization and detection of talus slopes. We design inventive variables, a multidirectional visibility index (MVI) and a relief above (RA), and propose two techniques of talus slope extraction: ISOcluster and progressive Boolean overlay. Our Martian digital terrain model (DTM) was derived from the ESA Mars Express HRSC imagery, with a resolution of 50 m. The method was tested in the study areas of Nanedi Valles and West Candor Chasma. The major challenge concerned the quality of the DTM. The selection of robust variables was therefore crucial. Our final model is to a certain degree DTM-error tolerant. The results show that the method is selective concerning those slopes that can be considered to constitute a talus slope area, according to the visual interpretation of HRSC images. Based on an analysis of the DTM, it is possible to infer various geological properties and geophysical processes of the Martian and terrestrial environments; this has a range of applications, such as natural hazard risk management.

  7. Development of an automated method for Folin-Ciocalteu total phenolic assay in artichoke extracts.

    PubMed

    Yoo, Kil Sun; Lee, Eun Jin; Leskovar, Daniel; Patil, Bhimanagouda S

    2012-12-01

    We developed a fully automatic, consistent, and fast system to run the Folin-Ciocalteu (F-C) total phenolic assay on artichoke extract samples. The system uses 2 high performance liquid chromatography (HPLC) pumps, an autosampler, a column heater, a UV/Vis detector, and a data collection system. To test the system, a pump delivered 10-fold diluted F-C reagent solution at a rate of 0.7 mL/min, and 0.4 g/mL sodium carbonate at a rate of 2.1 mL/min. The autosampler injected 10 μL per 1.2 min, which was mixed with the F-C reagent and heated to 65 °C while it passed through the column heater. The heated reactant was mixed with sodium carbonate and color intensity was measured by the detector at 600 nm. The data collection system recorded the color intensity, and the peak area of each sample was calculated as the concentration of the total phenolic content, expressed in μg/mL as either chlorogenic acid or gallic acid. This new method had superb repeatability (0.7% CV) and a high correlation with both the manual method (r² = 0.93) and the HPLC method (r² = 0.78). Ascorbic acid and quercetin showed variable antioxidant activity, but sugars did not. This method can be efficiently applied to research that needs to test large numbers of samples for antioxidant capacity with speed and accuracy. PMID:23163965

  8. Bisphosphonate-Related Osteonecrosis of the Jaw After Tooth Extraction.

    PubMed

    Ribeiro, Ney Robson Bezerra; Silva, Leonardo de Freitas; Santana, Diego Matos; Nogueira, Renato Luiz Maia

    2015-10-01

    Bisphosphonates are widely used for the treatment or prevention of bone diseases characterized by high osteoclastic activity. Among the oral medicines used to treat osteoporosis, alendronate has often been used. Despite the low rate of complications associated with its use, cases of osteonecrosis of the jaw after tooth extractions have been reported in the literature. The main symptoms include pain, tooth mobility, swelling, erythema, and ulceration. The risk factors related to bisphosphonate-associated osteonecrosis of the jaw are the time of exposure to the medicine, the route of administration, and the oral surgical procedures performed. The aim of this work is to report the case of a patient showing osteonecrosis of the jaw associated with the use of oral bisphosphonates after tooth extractions. The patient was treated by suspending the alendronate and removing the necrotic tissue and the foci of infection. After a year's follow-up, the patient showed no signs of recurrence. Accordingly, the interruption of alendronate use and surgical treatment combined with antibiotic therapy proved effective in the patient's treatment. PMID:26468839

  9. Automation or De-automation

    NASA Astrophysics Data System (ADS)

    Gorlach, Igor; Wessel, Oliver

    2008-09-01

    In the global automotive industry, for decades, vehicle manufacturers have continually increased the level of automation of production systems in order to be competitive. However, there is a new trend to decrease the level of automation, especially in final car assembly, for reasons of economy and flexibility. In this research, the final car assembly lines at three production sites of Volkswagen are analysed in order to determine the best level of automation for each, in terms of manufacturing costs, productivity, quality and flexibility. The case study is based on the methodology proposed by the Fraunhofer Institute. The results of the analysis indicate that fully automated assembly systems are not necessarily the best option in terms of cost, productivity and quality combined, which is attributed to high complexity of final car assembly systems; some de-automation is therefore recommended. On the other hand, the analysis shows that low automation can result in poor product quality due to reasons related to plant location, such as inadequate workers' skills, motivation, etc. Hence, the automation strategy should be formulated on the basis of analysis of all relevant aspects of the manufacturing process, such as costs, quality, productivity and flexibility in relation to the local context. A more balanced combination of automated and manual assembly operations provides better utilisation of equipment, reduces production costs and improves throughput.

  10. Path duplication using GPS carrier based relative position for automated ground vehicle convoys

    NASA Astrophysics Data System (ADS)

    Travis, William E., III

    A GPS based automated convoy strategy to duplicate the path of a lead vehicle is presented in this dissertation. Laser scanners and cameras are not used; all information available comes from GPS or inertial systems. An algorithm is detailed that uses GPS carrier phase measurements to determine relative position between two moving ground vehicles. Error analysis shows the accuracy is centimeter level. It is shown that the time to the first solution fix is dependent upon initial relative position accuracy, and that near instantaneous fixes can be realized if that accuracy is less than 20 centimeters. The relative positioning algorithm is then augmented with inertial measurement units to dead reckon through brief outages. Performance analysis of automotive and tactical grade units shows the twenty centimeter threshold can be maintained for only a few seconds with the automotive grade unit and for 14 seconds with the tactical unit. Next, techniques to determine odometry information in vector form are discussed. Three methods are outlined: dead reckoning of inertial sensors, time differencing GPS carrier measurements to determine change in platform position, and aiding the time differenced carrier measurements with inertial measurements. Partial integration of a tactical grade inertial measurement unit provided the lowest error drift for the scenarios investigated, but the time differenced carrier phase approach provided the most cost feasible approach with similar accuracy. Finally, the relative position and odometry algorithms are used to generate a reference by which an automated following vehicle can replicate a lead vehicle's path of travel. The first method presented uses only the relative position information to determine a relative angle to the leader. Using the relative angle as a heading reference for a steering control causes the follower to drive at the lead vehicle, thereby creating a towing effect on the follower when both vehicles are in motion. Effective

  11. Automation of static and dynamic non-dispersive liquid phase microextraction. Part 1: Approaches based on extractant drop-, plug-, film- and microflow-formation.

    PubMed

    Alexovič, Michal; Horstkotte, Burkhard; Solich, Petr; Sabo, Ján

    2016-02-01

    Simplicity, effectiveness, swiftness, and environmental friendliness - these are the typical requirements for the state of the art development of green analytical techniques. Liquid phase microextraction (LPME) stands for a family of elegant sample pretreatment and analyte preconcentration techniques preserving these principles in numerous applications. By using only fractions of solvent and sample compared to classical liquid-liquid extraction, the extraction kinetics, the preconcentration factor, and the cost efficiency can be increased. Moreover, significant improvements can be made by automation, which is still a hot topic in analytical chemistry. This review surveys comprehensively and in two parts the developments of automation of non-dispersive LPME methodologies performed in static and dynamic modes. Their advantages and limitations and the reported analytical performances are discussed and put into perspective with the corresponding manual procedures. The automation strategies, techniques, and their operation advantages as well as their potentials are further described and discussed. In this first part, an introduction to LPME and their static and dynamic operation modes as well as their automation methodologies is given. The LPME techniques are classified according to the different approaches of protection of the extraction solvent using either a tip-like (needle/tube/rod) support (drop-based approaches), a wall support (film-based approaches), or microfluidic devices. In the second part, the LPME techniques based on porous supports for the extraction solvent such as membranes and porous media are overviewed. An outlook on future demands and perspectives in this promising area of analytical chemistry is finally given. PMID:26772123

  12. Automated liquid-liquid extraction workstation for library synthesis and its use in the parallel and chromatography-free synthesis of 2-alkyl-3-alkyl-4-(3H)-quinazolinones.

    PubMed

    Carpintero, Mercedes; Cifuentes, Marta; Ferritto, Rafael; Haro, Rubén; Toledo, Miguel A

    2007-01-01

    An automated liquid-liquid extraction workstation has been developed. This module processes up to 96 samples in an automated and parallel mode avoiding the time-consuming and intensive sample manipulation during the workup process. To validate the workstation, a highly automated and chromatography-free synthesis of differentially substituted quinazolin-4(3H)-ones with two diversity points has been carried out using isatoic anhydride as starting material. PMID:17645313

  13. A Logic-Based Approach to Relation Extraction from Texts

    NASA Astrophysics Data System (ADS)

    Horváth, Tamás; Paass, Gerhard; Reichartz, Frank; Wrobel, Stefan

    In recent years, text mining has moved far beyond the classical problem of text classification with an increased interest in more sophisticated processing of large text corpora, such as, for example, evaluations of complex queries. This and several other tasks are based on the essential step of relation extraction. This problem becomes a typical application of learning logic programs by considering the dependency trees of sentences as relational structures and examples of the target relation as ground atoms of a target predicate. In this way, each example is represented by a definite first-order Horn-clause. We show that an adaptation of Plotkin's least general generalization (LGG) operator can effectively be applied to such clauses and propose a simple and effective divide-and-conquer algorithm for listing a certain set of LGGs. We use these LGGs to generate binary features and compute the hypothesis by applying SVM to the feature vectors obtained. Empirical results on the ACE-2003 benchmark dataset indicate that the performance of our approach is comparable to state-of-the-art kernel methods.

  14. Automated extraction of pressure ridges from SAR images of sea ice - Comparison with surface truth

    NASA Technical Reports Server (NTRS)

    Vesecky, J. F.; Smith, M. P.; Samadani, R.; Daida, J. M.; Comiso, J. C.

    1991-01-01

    The authors estimate the characteristics of ridges and leads in sea ice from SAR (synthetic aperture radar) images. Such estimates are based on the hypothesis that bright filamentary features in SAR sea ice images correspond with pressure ridges. A data set collected in the Greenland Sea in 1987 allows this hypothesis to be evaluated for X-band SAR images. A preliminary analysis of data collected from SAR images and ice elevation (from a laser altimeter) is presented. It is found that SAR image brightness and ice elevation are clearly related. However, the correlation, using the data and techniques applied, is not strong.

  15. Performance verification of the Maxwell 16 Instrument and DNA IQ Reference Sample Kit for automated DNA extraction of known reference samples.

    PubMed

    Krnajski, Z; Geering, S; Steadman, S

    2007-12-01

    Advances in automation have been made for a number of processes conducted in the forensic DNA laboratory. However, because most robotic systems are designed for high-throughput laboratories batching large numbers of samples, smaller laboratories are left with a limited number of cost-effective options for employing automation. The Maxwell 16 Instrument and DNA IQ Reference Sample Kit marketed by Promega are designed for rapid, automated purification of DNA extracts from sample sets consisting of sixteen or fewer samples. Because the system is based on DNA capture by paramagnetic particles with maximum binding capacity, it is designed to generate extracts with yield consistency. The studies herein enabled evaluation of STR profile concordance, consistency of yield, and cross-contamination performance for the Maxwell 16 Instrument. Results indicate that the system performs suitably for streamlining the process of extracting known reference samples generally used for forensic DNA analysis and has many advantages in a small or moderate-sized laboratory environment. PMID:25869266

  16. Medication Incidents Related to Automated Dose Dispensing in Community Pharmacies and Hospitals - A Reporting System Study

    PubMed Central

    Cheung, Ka-Chun; van den Bemt, Patricia M. L. A.; Bouvy, Marcel L.; Wensing, Michel; De Smet, Peter A. G. M.

    2014-01-01

    Introduction Automated dose dispensing (ADD) is being introduced in several countries and the use of this technology is expected to increase as a growing number of elderly people need to manage their medication at home. ADD aims to improve medication safety and treatment adherence, but it may introduce new safety issues. This descriptive study provides insight into the nature and consequences of medication incidents related to ADD, as reported by healthcare professionals in community pharmacies and hospitals. Methods The medication incidents that were submitted to the Dutch Central Medication incidents Registration (CMR) reporting system were selected and characterized independently by two researchers. Main Outcome Measures Person discovering the incident, phase of the medication process in which the incident occurred, immediate cause of the incident, nature of incident from the healthcare provider's perspective, nature of incident from the patient's perspective, and consequent harm to the patient caused by the incident. Results From January 2012 to February 2013 the CMR received 15,113 incidents: 3,685 (24.4%) incidents from community pharmacies and 11,428 (75.6%) incidents from hospitals. Overall, about 1 in 50 reported incidents (268/15,113 = 1.8%) was related to ADD; in community pharmacies more incidents (227/3,685 = 6.2%) were related to ADD than in hospitals (41/11,428 = 0.4%). The immediate cause of an incident was often a change in the patient's medicine regimen or relocation. Most reported incidents occurred in two phases: entering the prescription into the pharmacy information system and filling the ADD bag. Conclusion A small but steady share of reported incidents was related to ADD, especially in community pharmacies. Most of these incidents occurred in two phases: entering the prescription into the pharmacy information system and filling the ADD bag. A change in the patient's medicine regimen or relocation was often the immediate cause of an incident.

  17. Evaluation of Three Automated Nucleic Acid Extraction Systems for Identification of Respiratory Viruses in Clinical Specimens by Multiplex Real-Time PCR

    PubMed Central

    Kwon, Aerin; Lee, Kyung-A

    2014-01-01

    A total of 84 nasopharyngeal swab specimens were collected from 84 patients. Viral nucleic acid was extracted by three automated extraction systems: QIAcube (Qiagen, Germany), EZ1 Advanced XL (Qiagen), and MICROLAB Nimbus IVD (Hamilton, USA). Fourteen RNA viruses and two DNA viruses were detected using the Anyplex II RV16 Detection kit (Seegene, Republic of Korea). The EZ1 Advanced XL system demonstrated the best analytical sensitivity for all three viral strains. The nucleic acids extracted by EZ1 Advanced XL showed higher positive rates for virus detection than the others. Meanwhile, the MICROLAB Nimbus IVD system comprised fully automated steps from nucleic acid extraction to PCR setup, which could reduce human error. For the nucleic acids recovered from nasopharyngeal swab specimens, the QIAcube system showed the fewest false negative results and the best concordance rate, and it may be more suitable for detecting various viruses including RNA and DNA virus strains. Each system showed different sensitivity and specificity for detection of certain viral pathogens and demonstrated different characteristics such as turnaround time and sample capacity. Therefore, these factors should be considered when new nucleic acid extraction systems are introduced to the laboratory. PMID:24868527

  18. The ValleyMorph Tool: An automated extraction tool for transverse topographic symmetry (T-) factor and valley width to valley height (Vf-) ratio

    NASA Astrophysics Data System (ADS)

    Daxberger, Heidi; Dalumpines, Ron; Scott, Darren M.; Riller, Ulrich

    2014-09-01

    In tectonically active regions on Earth, shallow-crustal deformation associated with seismic hazards may pose a threat to human life and property. The study of landform development, such as analysis of the valley width to valley height ratio (Vf-ratio) and the Transverse Topographic Symmetry Factor (T-factor), delineating drainage basin symmetry, can be used as a relative measure of tectonic activity along fault-bound mountain fronts. The fast evolution of digital elevation models (DEM) provides an ideal base for remotely-sensed tectonomorphic studies of large areas using Geographical Information Systems (GIS). However, a manual extraction of the above mentioned morphologic parameters may be tedious and very time consuming. Moreover, basic GIS software suites do not provide the necessary built-in functions. Therefore, we present a newly developed, Python based, ESRI ArcGIS compatible tool and stand-alone script, the ValleyMorph Tool. This tool facilitates an automated extraction of the Vf-ratio and the T-factor data for large regions. Using a digital elevation raster and watershed polygon files as input, the tool provides output in the form of several ArcGIS data tables and shapefiles, ideal for further data manipulation and computation. This coding enables an easy application among the ArcGIS user community and code conversion to earlier ArcGIS versions. The ValleyMorph Tool is easy to use due to a simple graphical user interface. The tool is tested for the southern Central Andes using a total of 3366 watersheds.
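
    For orientation, the sketch below computes the two morphometric indices the tool extracts, using their commonly cited definitions (the Vf-ratio after Bull, the T-factor after Cox). The exact conventions used inside ValleyMorph are not stated in the abstract, so treat these formulas and names as assumptions.

      def vf_ratio(vfw, eld, erd, esc):
          """Valley floor width-to-height ratio (Bull-style definition, assumed here):
          Vf = 2 * Vfw / ((Eld - Esc) + (Erd - Esc))
          vfw       : valley floor width
          eld, erd  : elevations of the left and right valley divides
          esc       : elevation of the valley floor
          All inputs in the same length unit (e.g. metres)."""
          return 2.0 * vfw / ((eld - esc) + (erd - esc))

      def t_factor(da, dd):
          """Transverse Topographic Symmetry Factor, as commonly defined: T = Da / Dd,
          where Da is the distance from the basin midline to the active channel and
          Dd the distance from the midline to the basin divide
          (0 = perfectly symmetric basin, 1 = fully asymmetric)."""
          return da / dd

      # Example: 120 m wide valley floor, divides at 950 m and 900 m, floor at 600 m.
      print(round(vf_ratio(120.0, 950.0, 900.0, 600.0), 2))  # -> 0.37
      print(t_factor(300.0, 1200.0))                         # -> 0.25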

  19. AUTOMATED ANALYSIS OF AQUEOUS SAMPLES CONTAINING PESTICIDES, ACIDIC/BASIC/NEUTRAL SEMIVOLATILES AND VOLATILE ORGANIC COMPOUNDS BY SOLID PHASE EXTRACTION COUPLED IN-LINE TO LARGE VOLUME INJECTION GC/MS

    EPA Science Inventory

    Data is presented on the development of a new automated system combining solid phase extraction (SPE) with GC/MS spectrometry for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line 10-m...

  20. Submicrometric Magnetic Nanoporous Carbons Derived from Metal-Organic Frameworks Enabling Automated Electromagnet-Assisted Online Solid-Phase Extraction.

    PubMed

    Frizzarin, Rejane M; Palomino Cabello, Carlos; Bauzà, Maria Del Mar; Portugal, Lindomar A; Maya, Fernando; Cerdà, Víctor; Estela, José M; Turnes Palomino, Gemma

    2016-07-19

    We present the first application of submicrometric magnetic nanoporous carbons (μMNPCs) as sorbents for automated solid-phase extraction (SPE). Small zeolitic imidazolate framework-67 crystals are obtained at room temperature and directly carbonized under an inert atmosphere to obtain submicrometric nanoporous carbons containing magnetic cobalt nanoparticles. The μMNPCs have a high contact area, high stability, and their preparation is simple and cost-effective. The prepared μMNPCs are exploited as sorbents in a microcolumn format in a sequential injection analysis (SIA) system with online spectrophotometric detection, which includes a specially designed three-dimensional (3D)-printed holder containing an automatically actuated electromagnet. The combined action of permanent magnets and an automatically actuated electromagnet enabled the movement of the solid bed of particles inside the microcolumn, preventing their aggregation, increasing the versatility of the system, and increasing the preconcentration efficiency. The method was optimized using a full factorial design and Doehlert Matrix. The developed system was applied to the determination of anionic surfactants, exploiting the retention of the ion-pairs formed with Methylene Blue on the μMNPC. Using sodium dodecyl sulfate as a model analyte, quantification was linear from 50 to 1000 μg L(-1), and the detection limit was equal to 17.5 μg L(-1), the coefficient of variation (n = 8; 100 μg L(-1)) was 2.7%, and the analysis throughput was 13 h(-1). The developed approach was applied to the determination of anionic surfactants in water samples (natural water, groundwater, and wastewater), yielding recoveries of 93% to 110% (95% confidence level). PMID:27336802

  1. Satellite mapping and automated feature extraction: Geographic information system-based change detection of the Antarctic coast

    NASA Astrophysics Data System (ADS)

    Kim, Kee-Tae

    Declassified Intelligence Satellite Photograph (DISP) data are important resources for measuring the geometry of the coastline of Antarctica. By using state-of-the-art digital imaging technology, bundle block triangulation based on tie points and control points derived from a RADARSAT-1 Synthetic Aperture Radar (SAR) image mosaic and the Ohio State University (OSU) Antarctic digital elevation model (DEM), the individual DISP images were accurately assembled into a map-quality mosaic of Antarctica as it appeared in 1963. The new map is one of the important benchmarks for gauging the response of the Antarctic coastline to changing climate. Automated coastline extraction algorithm design is the second theme of this dissertation. At the pre-processing stage, adaptive neighborhood filtering was used to remove the film-grain noise while preserving edge features. At the segmentation stage, an adaptive Bayesian approach to image segmentation was used to split the DISP imagery into its homogeneous regions, in which the fuzzy c-means clustering (FCM) technique and Gibbs random field (GRF) model were introduced to estimate the conditional and prior probability density functions. A Gaussian mixture model was used to estimate reliable initial values for the FCM technique. At the post-processing stage, image object formation and labeling, removal of noisy image objects, and vectorization algorithms were sequentially applied to segmented images for extracting a vector representation of coastlines. Results were presented that demonstrate the effectiveness of the algorithm in segmenting the DISP data. In cases of cloud cover and low-contrast scenes, manual editing was carried out based on intermediate image processing and visual inspection in comparison with old paper maps. Through a geographic information system (GIS), the derived DISP coastline data were integrated with earlier and later data to assess continental-scale changes in the Antarctic coast. Computing the area of
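
    A minimal sketch of the clustering idea at the heart of the segmentation stage is given below: plain fuzzy c-means on pixel intensities. The Gibbs-random-field prior and the Gaussian-mixture initialization used in the actual scheme are omitted, and all parameters and the synthetic data are illustrative assumptions.

      import numpy as np

      def fuzzy_c_means(x, c=3, m=2.0, n_iter=50, seed=0):
          """Plain fuzzy c-means on a 1-D array of pixel intensities (a simplified
          stand-in for the segmentation stage described above)."""
          rng = np.random.default_rng(seed)
          u = rng.random((len(x), c))
          u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
          for _ in range(n_iter):
              um = u ** m
              centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
              d = np.abs(x[:, None] - centers[None, :]) + 1e-12
              u = 1.0 / (d ** (2.0 / (m - 1.0)))       # inverse-distance memberships
              u /= u.sum(axis=1, keepdims=True)
          return centers, u

      # Example: segment synthetic image intensities into three clusters.
      pixels = np.concatenate([np.random.normal(mu, 5, 500) for mu in (40, 120, 200)])
      centers, memberships = fuzzy_c_means(pixels)
      print(np.sort(centers))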

  2. MG-Digger: An Automated Pipeline to Search for Giant Virus-Related Sequences in Metagenomes

    PubMed Central

    Verneau, Jonathan; Levasseur, Anthony; Raoult, Didier; La Scola, Bernard; Colson, Philippe

    2016-01-01

    The number of metagenomic studies conducted each year is growing dramatically. Storage and analysis of such big data is difficult and time-consuming. Interestingly, analysis shows that environmental and human metagenomes include a significant amount of non-annotated sequences, representing a ‘dark matter.’ We established a bioinformatics pipeline that automatically detects metagenome reads matching query sequences from a given set and applied this tool to the detection of sequences matching large and giant DNA viral members of the proposed order Megavirales or virophages. A total of 1,045 environmental and human metagenomes (≈ 1 Terabase) were collected, processed, and stored on our bioinformatics server. In addition, nucleotide and protein sequences from 93 Megavirales representatives, including 19 giant viruses of amoeba, and 5 virophages, were collected. The pipeline was generated by scripts written in Python language and entitled MG-Digger. Metagenomes previously found to contain megavirus-like sequences were tested as controls. MG-Digger was able to annotate 100s of metagenome sequences as best matching those of giant viruses. These sequences were most often found to be similar to phycodnavirus or mimivirus sequences, but included reads related to recently available pandoraviruses, Pithovirus sibericum, and faustoviruses. Compared to other tools, MG-Digger combined stand-alone use on Linux or Windows operating systems through a user-friendly interface, implementation of ready-to-use customized metagenome databases and query sequence databases, adjustable parameters for BLAST searches, and creation of output files containing selected reads with best match identification. Compared to Metavir 2, a reference tool in viral metagenome analysis, MG-Digger detected 8% more true positive Megavirales-related reads in a control metagenome. The present work shows that massive, automated and recurrent analyses of metagenomes are effective in improving knowledge about

  3. MG-Digger: An Automated Pipeline to Search for Giant Virus-Related Sequences in Metagenomes.

    PubMed

    Verneau, Jonathan; Levasseur, Anthony; Raoult, Didier; La Scola, Bernard; Colson, Philippe

    2016-01-01

    The number of metagenomic studies conducted each year is growing dramatically. Storage and analysis of such big data is difficult and time-consuming. Interestingly, analysis shows that environmental and human metagenomes include a significant amount of non-annotated sequences, representing a 'dark matter.' We established a bioinformatics pipeline that automatically detects metagenome reads matching query sequences from a given set and applied this tool to the detection of sequences matching large and giant DNA viral members of the proposed order Megavirales or virophages. A total of 1,045 environmental and human metagenomes (≈ 1 Terabase) were collected, processed, and stored on our bioinformatics server. In addition, nucleotide and protein sequences from 93 Megavirales representatives, including 19 giant viruses of amoeba, and 5 virophages, were collected. The pipeline was generated by scripts written in Python language and entitled MG-Digger. Metagenomes previously found to contain megavirus-like sequences were tested as controls. MG-Digger was able to annotate 100s of metagenome sequences as best matching those of giant viruses. These sequences were most often found to be similar to phycodnavirus or mimivirus sequences, but included reads related to recently available pandoraviruses, Pithovirus sibericum, and faustoviruses. Compared to other tools, MG-Digger combined stand-alone use on Linux or Windows operating systems through a user-friendly interface, implementation of ready-to-use customized metagenome databases and query sequence databases, adjustable parameters for BLAST searches, and creation of output files containing selected reads with best match identification. Compared to Metavir 2, a reference tool in viral metagenome analysis, MG-Digger detected 8% more true positive Megavirales-related reads in a control metagenome. The present work shows that massive, automated and recurrent analyses of metagenomes are effective in improving knowledge about the
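
    The following sketch illustrates the general read-annotation step that such a pipeline automates: BLAST the metagenome reads against a custom query database and keep reads whose best hit passes simple thresholds. The file names, thresholds, and single blastn call are assumptions for illustration; they are not MG-Digger's actual parameters or code.

      import csv
      import subprocess

      def blast_and_filter(reads_fasta, db, out_tsv, evalue=1e-5, min_identity=50.0):
          """Run blastn (tabular output) and keep, per read, the best-scoring hit
          that passes an identity threshold."""
          subprocess.run(
              ["blastn", "-query", reads_fasta, "-db", db,
               "-outfmt", "6", "-evalue", str(evalue), "-out", out_tsv],
              check=True)
          best = {}
          with open(out_tsv) as fh:
              for row in csv.reader(fh, delimiter="\t"):
                  read, subject = row[0], row[1]
                  identity, bitscore = float(row[2]), float(row[11])
                  if identity < min_identity:
                      continue
                  if read not in best or bitscore > best[read][1]:
                      best[read] = (subject, bitscore)
          return best  # read id -> (best matching reference, bit score)

      # Illustrative file and database names only.
      hits = blast_and_filter("metagenome_reads.fasta", "megavirales_db", "hits.tsv")
      print(len(hits), "reads annotated as giant-virus-like")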

  4. Method for extracting copper, silver and related metals

    DOEpatents

    Moyer, Bruce A.; McDowell, W. J.

    1990-01-01

    A process for selectively extracting precious metals such as silver and gold concurrent with copper extraction from aqueous solutions containing the same. The process utilizes tetrathiamacrocycles and high molecular weight organic acids that exhibit a synergistic relationship when complexing with certain metal ions thereby removing them from ore leach solutions.

  5. Method for extracting copper, silver and related metals

    DOEpatents

    Moyer, B.A.; McDowell, W.J.

    1987-10-23

    A process for selectively extracting precious metals such as silver and gold concurrent with copper extraction from aqueous solutions containing the same. The process utilizes tetrathiamacrocycles and high molecular weight organic acids that exhibit a synergistic relationship when complexing with certain metal ions thereby removing them from ore leach solutions.

  6. Method for extracting copper, silver and related metals

    SciTech Connect

    Moyer, B.A.; McDowell, W.J.

    1990-05-22

    This patent describes a process for selectively extracting precious metal such as silver and gold concurrent with copper extraction from aqueous solutions containing the same. The process utilizes tetrathiamacrocycles and high molecular weight organic acids that exhibit a synergistic relationship when complexing with certain metal ions thereby removing them from ore leach solutions.

  7. Extraction of a group-pair relation: problem-solving relation from web-board documents.

    PubMed

    Pechsiri, Chaveevan; Piriyakul, Rapepun

    2016-01-01

    This paper aims to extract a group-pair relation as a Problem-Solving relation, for example a DiseaseSymptom-Treatment relation and a CarProblem-Repair relation, between two event-explanation groups: a problem-concept group (a symptom/CarProblem-concept group) and a solving-concept group (a treatment-concept/repair-concept group), from hospital-web-board and car-repair-guru-web-board documents. The Problem-Solving relation (particularly the Symptom-Treatment relation), including its graphical representation, benefits non-professional persons by supporting basic problem-solving knowledge. The research addresses three problems: how to identify an EDU (an Elementary Discourse Unit, which is a simple sentence) with the event concept of either a problem or a solution; how to determine a problem-concept EDU boundary and a solving-concept EDU boundary as two event-explanation groups; and how to determine the Problem-Solving relation between these two event-explanation groups. Therefore, we apply word co-occurrence to identify a problem-concept EDU and a solving-concept EDU, and machine-learning techniques to determine the problem-concept and solving-concept EDU boundaries. We propose using k-means and Naïve Bayes with clustering features to determine the Problem-Solving relation between the two event-explanation groups. In contrast to previous works, the proposed approach enables group-pair relation extraction with high accuracy. PMID:27540498

  8. Effects of a psychophysiological system for adaptive automation on performance, workload, and the event-related potential P300 component

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J 3rd; Freeman, Frederick G.; Scerbo, Mark W.; Mikulka, Peter J.; Pope, Alan T.

    2003-01-01

    The present study examined the effects of an electroencephalographic- (EEG-) based system for adaptive automation on tracking performance and workload. In addition, event-related potentials (ERPs) to a secondary task were derived to determine whether they would provide an additional degree of workload specificity. Participants were run in an adaptive automation condition, in which the system switched between manual and automatic task modes based on the value of each individual's own EEG engagement index; a yoked control condition; or another control group, in which task mode switches followed a random pattern. Adaptive automation improved performance and resulted in lower levels of workload. Further, the P300 component of the ERP paralleled the sensitivity to task demands of the performance and subjective measures across conditions. These results indicate that it is possible to improve performance with a psychophysiological adaptive automation system and that ERPs may provide an alternative means for distinguishing among levels of cognitive task demand in such systems. Actual or potential applications of this research include improved methods for assessing operator workload and performance.
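
    A toy sketch of the closed-loop switching logic behind such a system is given below. The engagement index is assumed to be the commonly used beta/(alpha + theta) band-power ratio, and the baseline comparison and window handling are illustrative simplifications rather than the published procedure.

      def engagement_index(beta_power, alpha_power, theta_power):
          """Assumed EEG engagement index: ratio of beta power to alpha plus theta power."""
          return beta_power / (alpha_power + theta_power)

      def choose_task_mode(index_history, baseline, current_mode):
          """Negative-feedback policy: if engagement drifts below baseline, hand the
          task back to the operator (manual) to re-engage them; if it drifts above,
          automate. Window length and thresholds are illustrative."""
          recent = sum(index_history[-5:]) / min(len(index_history), 5)
          if recent < baseline:
              return "manual"
          if recent > baseline:
              return "automatic"
          return current_mode

      # Example: a falling engagement index triggers a switch back to manual control.
      print(choose_task_mode([0.8, 0.7, 0.6, 0.5, 0.4], baseline=0.65, current_mode="automatic"))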

  9. Determination of Low Concentrations of Acetochlor in Water by Automated Solid-Phase Extraction and Gas Chromatography with Mass-Selective Detection

    USGS Publications Warehouse

    Lindley, C.E.; Stewart, J.T.; Sandstrom, M.W.

    1996-01-01

    A sensitive and reliable gas chromatographic/mass spectrometric (GC/MS) method for determining acetochlor in environmental water samples was developed. The method involves automated extraction of the herbicide from a filtered 1 L water sample through a C18 solid-phase extraction column, elution from the column with hexane-isopropyl alcohol (3 + 1), and concentration of the extract with nitrogen gas. The herbicide is quantitated by capillary-column GC/MS with selected-ion monitoring of 3 characteristic ions. The single-operator method detection limit for reagent water samples is 0.0015 μg/L. Mean recoveries ranged from about 92 to 115% for 3 water matrixes fortified at 0.05 and 0.5 μg/L. Average single-operator precision, over the course of 1 week, was better than 5%.

  10. On the Relation between Automated Essay Scoring and Modern Views of the Writing Construct

    ERIC Educational Resources Information Center

    Deane, Paul

    2013-01-01

    This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, the systems primarily measure text production skills. In the current state-of-the-art, AES provide little direct evidence about such matters…

  11. Automated Feature Extraction and Hydrocode Modeling of Impact Related Structures on Mars: Preliminary Report

    NASA Astrophysics Data System (ADS)

    Plesko, C. S.; Asphaug, E.; Brumby, S. P.; Gisler, G. R.

    2003-07-01

    A systematic, combined modeling and observation effort is presented to correlate Martian impact structures (craters and their regional aftermaths) with the impactors, impact processes, and target geologies responsible.

  12. Time-resolved Characterization of Particle Associated Polycyclic Aromatic Hydrocarbons using a newly-developed Sequential Spot Sampler with Automated Extraction and Analysis

    PubMed Central

    Lewis, Gregory S.; Spielman, Steven R.; Hering, Susanne V.

    2014-01-01

    A versatile and compact sampling system, the Sequential Spot Sampler (S3) has been developed for pre-concentrated, time-resolved, dry collection of fine and ultrafine particles. Using a temperature-moderated laminar flow water condensation method, ambient particles as small as 6 nm are deposited within a dry, 1-mm diameter spot. Sequential samples are collected on a multiwell plate. Chemical analyses are laboratory-based, but automated. The sample preparation, extraction and chemical analysis steps are all handled through a commercially-available, needle-based autosampler coupled to a liquid chromatography system. This automation is enabled by the small deposition area of the collection. The entire sample is extracted into 50–100μl volume of solvent, providing quantifiable samples with small collected air volumes. A pair of S3 units was deployed in Stockton (CA) from November 2011 to February 2012. PM2.5 samples were collected every 12 hrs, and analyzed for polycyclic aromatic hydrocarbons (PAHs). In parallel, conventional filter samples were collected for 48 hrs and used to assess the new system’s performance. An automated sample preparation and extraction was developed for samples collected using the S3. Collocated data from the two sequential spot samplers were highly correlated for all measured compounds, with a regression slope of 1.1 and r2=0.9 for all measured concentrations. S3/filter ratios for the mean concentration of each individual PAH vary between 0.82 and 1.33, with the larger variability observed for the semivolatile components. Ratio for total PAH concentrations was 1.08. Total PAH concentrations showed similar temporal trend as ambient PM2.5 concentrations. Source apportionment analysis estimated a significant contribution of biomass burning to ambient PAH concentrations during winter. PMID:25574151

  13. Time-resolved characterization of particle associated polycyclic aromatic hydrocarbons using a newly-developed sequential spot sampler with automated extraction and analysis

    NASA Astrophysics Data System (ADS)

    Eiguren-Fernandez, Arantzazu; Lewis, Gregory S.; Spielman, Steven R.; Hering, Susanne V.

    2014-10-01

    A versatile and compact sampling system, the Sequential Spot Sampler (S3) has been developed for pre-concentrated, time-resolved, dry collection of fine and ultrafine particles. Using a temperature-moderated laminar flow water condensation method, ambient particles as small as 6 nm are deposited within a dry, 1-mm diameter spot. Sequential samples are collected on a multiwell plate. Chemical analyses are laboratory-based, but automated. The sample preparation, extraction and chemical analysis steps are all handled through a commercially-available, needle-based autosampler coupled to a liquid chromatography system. This automation is enabled by the small deposition area of the collection. The entire sample is extracted into 50-100 μL volume of solvent, providing quantifiable samples with small collected air volumes. A pair of S3 units was deployed in Stockton (CA) from November 2011 to February 2012. PM2.5 samples were collected every 12 h, and analyzed for polycyclic aromatic hydrocarbons (PAHs). In parallel, conventional filter samples were collected for 48 h and used to assess the new system's performance. An automated sample preparation and extraction was developed for samples collected using the S3. Collocated data from the two sequential spot samplers were highly correlated for all measured compounds, with a regression slope of 1.1 and r2 = 0.9 for all measured concentrations. S3/filter ratios for the mean concentration of each individual PAH vary between 0.82 and 1.33, with the larger variability observed for the semivolatile components. Ratio for total PAH concentrations was 1.08. Total PAH concentrations showed similar temporal trend as ambient PM2.5 concentrations. Source apportionment analysis estimated a significant contribution of biomass burning to ambient PAH concentrations during winter.

  14. Automated mini-column solid-phase extraction cleanup for high-throughput analysis of chemical contaminants in foods by low-pressure gas chromatography – tandem mass spectrometry

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study demonstrated the application of an automated high-throughput mini-cartridge solid-phase extraction (mini-SPE) cleanup for the rapid low-pressure gas chromatography – tandem mass spectrometry (LPGC-MS/MS) analysis of pesticides and environmental contaminants in QuEChERS extracts of foods. ...

  15. Toward automated classification of consumers' cancer-related questions with a new taxonomy of expected answer types.

    PubMed

    McRoy, Susan; Jones, Sean; Kurmally, Adam

    2016-09-01

    This article examines methods for automated question classification applied to cancer-related questions that people have asked on the web. This work is part of a broader effort to provide automated question answering for health education. We created a new corpus of consumer-health questions related to cancer and a new taxonomy for those questions. We then compared the effectiveness of different statistical methods for developing classifiers, including weighted classification and resampling. Basic methods for building classifiers were limited by the high variability in the natural distribution of questions, and typical refinement approaches such as feature selection and merging categories achieved only small improvements in classifier accuracy. Best performance was achieved using weighted classification and resampling methods, the latter yielding F1 = 0.963. Thus, it would appear that statistical classifiers can be trained on natural data, but only if natural distributions of classes are smoothed. Such classifiers would be useful for automated question answering, for enriching web-based content, or for assisting clinical professionals in answering questions. PMID:25759063
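
    To make the two winning strategies concrete, the sketch below shows generic cost-sensitive class weighting and bootstrap resampling with scikit-learn. The study's corpus, taxonomy, features, and classifier are not reproduced; all names and data here are illustrative assumptions.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.utils import resample

      # Tiny illustrative corpus; real consumer-health questions and labels differ.
      questions = ["what are the symptoms of melanoma", "how is chemotherapy dosed",
                   "can diet lower cancer risk", "what does stage iv mean"]
      labels = ["symptom", "treatment", "prevention", "prognosis"]

      # Strategy 1: cost-sensitive learning via per-class weights.
      weighted = make_pipeline(TfidfVectorizer(),
                               LogisticRegression(class_weight="balanced", max_iter=1000))
      weighted.fit(questions, labels)

      # Strategy 2: bootstrap-resample the training questions (in practice the
      # resampling would target minority classes) before fitting an unweighted model.
      q_res, y_res = resample(questions, labels, n_samples=8, random_state=0)
      plain = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
      plain.fit(q_res, y_res)

      print(weighted.predict(["is surgery an option for early melanoma"]))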

  16. Extraction of gene-disease relations from Medline using domain dictionaries and machine learning.

    PubMed

    Chun, Hong-Woo; Tsuruoka, Yoshimasa; Kim, Jin-Dong; Shiba, Rie; Nagata, Naoki; Hishiki, Teruyoshi; Tsujii, Jun'ichi

    2006-01-01

    We describe a system that extracts disease-gene relations from Medline. We constructed a dictionary for disease and gene names from six public databases and extracted relation candidates by dictionary matching. Since dictionary matching produces a large number of false positives, we developed a method of machine learning-based named entity recognition (NER) to filter out false recognitions of disease/gene names. We found that the performance of relation extraction is heavily dependent upon the performance of NER filtering and that the filtering improves the precision of relation extraction by 26.7% at the cost of a small reduction in recall. PMID:17094223
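
    A minimal sketch of the two-stage idea follows: dictionary matching proposes gene and disease mentions, a filter discards false recognitions, and co-occurring pairs become relation candidates. The tiny dictionaries and the accept() filter below are toy stand-ins for the paper's dictionaries and machine-learning NER model.

      import re

      GENE_DICT = {"brca1", "tp53"}                      # toy dictionaries only
      DISEASE_DICT = {"breast cancer", "li-fraumeni syndrome"}

      def dictionary_matches(sentence, dictionary):
          """Propose candidate mentions by simple substring dictionary matching."""
          text = sentence.lower()
          return [term for term in dictionary if term in text]

      def accept(term, sentence):
          """Placeholder for the machine-learning NER filter; here we only require
          the match not to be embedded in a longer alphanumeric token."""
          return re.search(r"\b" + re.escape(term) + r"\b", sentence.lower()) is not None

      def relation_candidates(sentence):
          """Emit all gene-disease pairs that survive the filter in one sentence."""
          genes = [g for g in dictionary_matches(sentence, GENE_DICT) if accept(g, sentence)]
          diseases = [d for d in dictionary_matches(sentence, DISEASE_DICT) if accept(d, sentence)]
          return [(g, d) for g in genes for d in diseases]

      print(relation_candidates("Germline BRCA1 mutations predispose to breast cancer."))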

  17. Automated solid-phase extraction for the determination of polybrominated diphenyl ethers and polychlorinated biphenyls in serum--application on archived Norwegian samples from 1977 to 2003.

    PubMed

    Thomsen, Cathrine; Liane, Veronica Horpestad; Becher, Georg

    2007-02-01

    An analytical method comprising automated solid-phase extraction and determination using gas chromatography-mass spectrometry (single quadrupole) has been developed for the determination of 12 polybrominated diphenyl ethers (PBDEs), 26 polychlorinated biphenyls (PCBs), two organochlorine compounds (OCs) (hexachlorobenzene and octachlorostyrene) and two brominated phenols (pentabromophenol and tetrabromobisphenol-A (TBBP-A)). The analytes were extracted using a sorbent of polystyrene-divinylbenzene, and an additional clean-up was performed on a sulphuric acid-silica column to remove lipids. The method has been validated by spiking horse serum at five levels. The mean accuracy, given as recovery relative to internal standards, was 95%, 99%, 93% and 109% for the PBDEs, PCBs, OCs and brominated phenols, respectively. The mean repeatability, given as RSD, was 6.9%, 8.7%, 7.5% and 15%, respectively. Estimated limits of detection (S/N=3) were in the range 0.2-1.8 pg/g serum for the PBDEs and phenols, and from 0.1 pg/g to 56 pg/g serum for the PCBs and OCs. The validated method has been used to investigate the levels of PBDEs and PCBs in 21 pooled serum samples from the general Norwegian population. In serum from men (age 40-50 years) the sum of seven PBDE congeners (IUPAC No. 28, 47, 99, 100, 153, 154 and 183) increased from 1977 (0.5 ng/g lipids) to 1998 (4.8 ng/g lipids). From 1999 to 2003 the concentration of PBDEs seems to have stabilised. On the other hand, the sum of five PCBs (IUPAC No. 101, 118, 138, 153 and 180) in these samples decreased steadily from 1977 (666 ng/g lipids) to 2003 (176 ng/g lipids). Tetrabromobisphenol-A and BDE-209 were detected in almost all samples, but no temporal trends similar to those seen for the PBDEs were observed for these compounds, which might be due to the short half-lives of these brominated flame retardants in humans. PMID:17023223

  18. Automated position control of a surface array relative to a liquid microjunction surface sampler

    DOEpatents

    Van Berkel, Gary J.; Kertesz, Vilmos; Ford, Michael James

    2007-11-13

    A system and method utilizes an image analysis approach for controlling the probe-to-surface distance of a liquid junction-based surface sampling system for use with mass spectrometric detection. Such an approach enables a hands-free formation of the liquid microjunction used to sample solution composition from the surface and for re-optimization, as necessary, of the microjunction thickness during a surface scan to achieve a fully automated surface sampling system.

  19. A fully automated method for simultaneous determination of aflatoxins and ochratoxin A in dried fruits by pressurized liquid extraction and online solid-phase extraction cleanup coupled to ultra-high-pressure liquid chromatography-tandem mass spectrometry.

    PubMed

    Campone, Luca; Piccinelli, Anna Lisa; Celano, Rita; Russo, Mariateresa; Valdés, Alberto; Ibáñez, Clara; Rastrelli, Luca

    2015-04-01

    According to current demands and future perspectives in food safety, this study reports a fast and fully automated analytical method for the simultaneous analysis of the highly toxic and widespread mycotoxins aflatoxins (AFs) and ochratoxin A (OTA) in dried fruits, a high-risk foodstuff. The method is based on pressurized liquid extraction (PLE), with aqueous methanol (30%) at 110 °C, of the slurried dried fruit and online solid-phase extraction (online SPE) cleanup of the PLE extracts with a C18 cartridge. The purified sample was directly analysed by ultra-high-pressure liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) for sensitive and selective determination of AFs and OTA. The proposed analytical procedure was validated for different dried fruits (vine fruit, fig and apricot), providing method detection and quantification limits much lower than the AF and OTA maximum levels imposed by EU regulation in dried fruit for direct human consumption. Also, recoveries (83-103%) and repeatability (RSD < 8%, n = 3) meet the performance criteria required by EU regulation for the determination of the levels of mycotoxins in foodstuffs. The main advantage of the proposed method is the full automation of the whole analytical procedure, which reduces the time and cost of the analysis, sample manipulation and solvent consumption, enabling high-throughput analysis and highly accurate and precise results. PMID:25694147

  20. Automated age-related macular degeneration classification in OCT using unsupervised feature learning

    NASA Astrophysics Data System (ADS)

    Venhuizen, Freerk G.; van Ginneken, Bram; Bloemen, Bart; van Grinsven, Mark J. J. P.; Philipsen, Rick; Hoyng, Carel; Theelen, Thomas; Sánchez, Clara I.

    2015-03-01

    Age-related Macular Degeneration (AMD) is a common eye disorder with high prevalence in elderly people. The disease mainly affects the central part of the retina, and could ultimately lead to permanent vision loss. Optical Coherence Tomography (OCT) is becoming the standard imaging modality in diagnosis of AMD and the assessment of its progression. However, the evaluation of the obtained volumetric scan is time consuming, expensive and the signs of early AMD are easy to miss. In this paper we propose a classification method to automatically distinguish AMD patients from healthy subjects with high accuracy. The method is based on an unsupervised feature learning approach, and processes the complete image without the need for an accurate pre-segmentation of the retina. The method can be divided in two steps: an unsupervised clustering stage that extracts a set of small descriptive image patches from the training data, and a supervised training stage that uses these patches to create a patch occurrence histogram for every image on which a random forest classifier is trained. Experiments using 384 volume scans show that the proposed method is capable of identifying AMD patients with high accuracy, obtaining an area under the Receiver Operating Curve of 0.984. Our method allows for a quick and reliable assessment of the presence of AMD pathology in OCT volume scans without the need for accurate layer segmentation algorithms.
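
    A down-scaled sketch of the two stages described above is given below: cluster small image patches into a visual dictionary, encode each image as a patch-occurrence histogram, and train a random forest on the histograms. The patch size, cluster count, and synthetic data are assumptions; real OCT volumes and the study's parameters are not used.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.ensemble import RandomForestClassifier

      def extract_patches(image, size=8, step=8):
          """Collect non-overlapping square patches as flat vectors."""
          h, w = image.shape
          return np.array([image[r:r + size, c:c + size].ravel()
                           for r in range(0, h - size + 1, step)
                           for c in range(0, w - size + 1, step)])

      def histogram(image, kmeans, size=8, step=8):
          """Encode an image as a histogram of patch-cluster ('visual word') counts."""
          words = kmeans.predict(extract_patches(image, size, step))
          return np.bincount(words, minlength=kmeans.n_clusters)

      rng = np.random.default_rng(0)
      images = rng.random((40, 64, 64))          # stand-in for OCT B-scans
      labels = rng.integers(0, 2, 40)            # 0 = healthy, 1 = AMD (synthetic)

      # Unsupervised stage: build the patch dictionary.
      kmeans = KMeans(n_clusters=16, n_init=10, random_state=0)
      kmeans.fit(np.vstack([extract_patches(im) for im in images]))

      # Supervised stage: train a random forest on the occurrence histograms.
      X = np.array([histogram(im, kmeans) for im in images])
      clf = RandomForestClassifier(random_state=0).fit(X, labels)
      print(clf.predict(X[:3]))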

  1. Automated flow-based anion-exchange method for high-throughput isolation and real-time monitoring of RuBisCO in plant extracts.

    PubMed

    Suárez, Ruth; Miró, Manuel; Cerdà, Víctor; Perdomo, Juan Alejandro; Galmés, Jeroni

    2011-06-15

    In this work, a miniaturized, completely enclosed multisyringe-flow system is proposed for high-throughput purification of RuBisCO from Triticum aestivum extracts. The automated method capitalizes on the uptake of the target protein at 4°C onto Q-Sepharose Fast Flow strong anion-exchanger packed in a cylindrical microcolumn (105 × 4 mm) followed by a stepwise ionic-strength gradient elution (0-0.8 mol/L NaCl) to eliminate concomitant extract components and retrieve highly purified RuBisCO. The manifold is furnished downstream with a flow-through diode-array UV/vis spectrophotometer for real-time monitoring of the column effluent at the protein-specific wavelength of 280 nm to detect the elution of RuBisCO. Quantitation of RuBisCO and total soluble proteins in the eluate fractions was undertaken using polyacrylamide gel electrophoresis (PAGE) and the spectrophotometric Bradford assay, respectively. A comprehensive investigation was carried out of the effect of distinct concentration gradients on the isolation of RuBisCO, and of the experimental conditions (namely, type of resin, column dimensions and mobile-phase flow rate) upon column capacity and analyte breakthrough. The assembled set-up was aimed at critically ascertaining the efficiency of preliminary batchwise pre-treatments of crude plant extracts (viz., polyethylene glycol (PEG) precipitation, ammonium sulphate precipitation and sucrose gradient centrifugation) in terms of RuBisCO purification and absolute recovery prior to automated anion-exchange column separation. Under the optimum physical and chemical conditions, the flow-through column system is able to admit crude plant extracts and gives rise to RuBisCO purification yields better than 75%, which might be increased up to 96 ± 9% with a prior PEG fractionation followed by a sucrose gradient step. PMID:21641435

  2. Automated Retinal Image Analysis for Evaluation of Focal Hyperpigmentary Changes in Intermediate Age-Related Macular Degeneration

    PubMed Central

    Schmitz-Valckenberg, Steffen; Göbel, Arno P.; Saur, Stefan C.; Steinberg, Julia S.; Thiele, Sarah; Wojek, Christian; Russmann, Christoph; Holz, Frank G.; for the MODIAMD-Study Group

    2016-01-01

    Purpose To develop and evaluate a software tool for automated detection of focal hyperpigmentary changes (FHC) in eyes with intermediate age-related macular degeneration (AMD). Methods Color fundus (CFP) and autofluorescence (AF) photographs of 33 eyes with FHC of 28 AMD patients (mean age 71 years) from the prospective longitudinal natural history MODIAMD-study were included. Fully automated and semiautomated registration of baseline to corresponding follow-up images was evaluated. Following the manual circumscription of individual FHC (four different readings by two readers), a machine-learning algorithm was evaluated for automatic FHC detection. Results The overall pixel distance error for the semiautomated registration (CFP follow-up to CFP baseline: median 5.7; CFP to AF images from the same visit: median 6.5) was larger than for the automated image registration (4.5 and 5.7; P < 0.001 and P < 0.001). The total number of manually circumscribed objects varied between 637 and 1,163, and the corresponding total size varied between 520,848 and 924,860 pixels. Performance of the learning algorithms showed a sensitivity of 96% at a specificity level of 98% using information from both CFP and AF images and defining small areas of FHC (“speckle appearance”) as “neutral.” Conclusions FHC as a high-risk feature for progression of AMD to late stages can be automatically assessed at different time points with similar sensitivity and specificity as compared to manual outlining. Upon further development of the research prototype, this approach may be useful both in natural history and interventional large-scale studies for a more refined classification and risk assessment of eyes with intermediate AMD. Translational Relevance Automated FHC detection opens the door for a more refined and detailed classification and risk assessment of eyes with intermediate AMD in both natural history and future interventional studies. PMID:26966639

  3. Automated extraction method for the center line of spinal canal and its application to the spinal curvature quantification in torso X-ray CT images

    NASA Astrophysics Data System (ADS)

    Hayashi, Tatsuro; Zhou, Xiangrong; Chen, Huayue; Hara, Takeshi; Miyamoto, Kei; Kobayashi, Tatsunori; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi

    2010-03-01

    X-ray CT images have been widely used in clinical routine in recent years. CT images scanned by a modern CT scanner can show the details of various organs and tissues. This means various organs and tissues can be simultaneously interpreted on CT images. However, CT image interpretation requires a lot of time and energy. Therefore, support for interpreting CT images based on image-processing techniques is expected. The interpretation of the spinal curvature is important for clinicians because spinal curvature is associated with various spinal disorders. We propose a quantification scheme of the spinal curvature based on the center line of the spinal canal on CT images. The proposed scheme consists of four steps: (1) Automated extraction of the skeletal region based on CT number thresholding. (2) Automated extraction of the center line of the spinal canal. (3) Generation of the median plane image of the spine, which is reformatted based on the spinal canal. (4) Quantification of the spinal curvature. The proposed scheme was applied to 10 cases and compared with the Cobb angle that is commonly used by clinicians. We found a high correlation (95% confidence interval for lumbar lordosis: 0.81-0.99) between the values obtained by the proposed (vector) method and the Cobb angle. Also, the proposed method provides reproducible results (inter- and intra-observer variability within 2°). These experimental results suggest that the proposed method is effective for quantifying the spinal curvature on CT images.

  4. Determination of amlodipine in human plasma using automated online solid-phase extraction HPLC-tandem mass spectrometry: application to a bioequivalence study of Chinese volunteers.

    PubMed

    Shentu, Jianzhong; Fu, Lizhi; Zhou, Huili; Hu, Xing Jiang; Liu, Jian; Chen, Junchun; Wu, Guolan

    2012-11-01

    An automated method (XLC-MS/MS) that uses online solid-phase extraction coupled with HPLC-tandem mass spectrometry was reported here for the first time to quantify amlodipine in human plasma. Automated pre-purification of plasma was performed using 10 mm × 2 mm HySphere C8 EC-SE online solid-phase extraction cartridges. After being eluted from the cartridge, the analyte and the internal standard were separated by HPLC and detected by tandem mass spectrometry. Mass spectrometric detection was achieved in the multiple reaction monitoring mode using a quadrupole tandem mass spectrometer in the positive electrospray ionization mode. The XLC-MS/MS method was validated and yielded excellent specificity. The calibration curve ranged from 0.10 to 10.22 ng/mL, and both the intra- and inter-day precision and accuracy values were within 8%. This method proved to be less laborious and was faster per analysis (high-throughput) than offline sample preparation methods. This method has been successfully applied in clinical pharmacokinetic and bioequivalence analyses. PMID:22770846

  5. High-throughput method of dioxin analysis in aqueous samples using consecutive solid phase extraction steps with the new C18 Ultraflow™ pressurized liquid extraction and automated clean-up.

    PubMed

    Youn, Yeu-Young; Park, Deok Hie; Lee, Yeon Hwa; Lim, Young Hee; Cho, Hye Sung

    2015-01-01

    A high-throughput analytical method has been developed for the determination of seventeen 2,3,7,8-substituted congeners of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in aqueous samples. A recently introduced octadecyl (C18) disk for semi-automated solid-phase extraction of PCDD/Fs in water samples with a high level of particulate material has been tested for the analysis of dioxins. This new type of C18 disk is specially designed for the analysis of hexane extractable material (HEM) but has never previously been reported for use in PCDD/F analysis. This kind of disk allows a higher filtration flow, and therefore the time of analysis is reduced. The solid-phase extraction technique is used to change samples from liquid to solid, and therefore pressurized liquid extraction (PLE) can be used in the pre-treatment. In order to achieve efficient purification, extracts from the PLE are purified using an automated Power-prep system with disposable silica, alumina, and carbon columns. Quantitative analyses of PCDD/Fs were performed by GC-HRMS using multi-ion detection (MID) mode. The method was successfully applied to the analysis of water samples from the wastewater treatment system of a vinyl chloride monomer plant. The entire procedure is in agreement with EPA 1613 recommendations regarding the blank control, MDLs (method detection limits), accuracy, and precision. The high-throughput method not only meets the requirements of international standards, but also shortens the required analysis time from 2 weeks to 3 days. PMID:25112208

  6. Three Experiments Examining the Use of Electroencephalogram,Event-Related Potentials, and Heart-Rate Variability for Real-Time Human-Centered Adaptive Automation Design

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Parasuraman, Raja; Freeman, Frederick G.; Scerbo, Mark W.; Mikulka, Peter J.; Pope, Alan T.

    2003-01-01

    Adaptive automation represents an advanced form of human-centered automation design. The approach provides for real-time and model-based assessments of human-automation interaction, determines whether the human has entered into a hazardous state of awareness, and then modulates the task environment to keep the operator in the loop, while maintaining an optimal state of task engagement and mental alertness. Because adaptive automation has not matured, numerous challenges remain, including what the criteria are for determining when adaptive aiding and adaptive function allocation should take place. Human factors experts in the area have suggested a number of measures, including the use of psychophysiology. This NASA Technical Paper reports on three experiments that examined the psychophysiological measures of event-related potentials, electroencephalogram, and heart-rate variability for real-time adaptive automation. The results of the experiments confirm the efficacy of these measures for use in both a developmental and operational role for adaptive automation design. The implications of these results and future directions for psychophysiology and human-centered automation design are discussed.

  7. Demonstration and validation of automated agricultural field extraction from multi-temporal Landsat data for the majority of United States harvested cropland

    NASA Astrophysics Data System (ADS)

    Yan, L.; Roy, D. P.

    2014-12-01

    The spatial distribution of agricultural fields is a fundamental description of rural landscapes, and the location and extent of fields are important for establishing the area of land utilized for agricultural yield prediction, resource allocation, and economic planning, and may be indicative of the degree of agricultural capital investment, mechanization, and labor intensity. To date, field objects have not been extracted from satellite data over large areas because of computational constraints, the complexity of the extraction task, and because consistently processed data of appropriate resolution have not been available or affordable. A recently published automated methodology to extract agricultural crop fields from weekly 30 m Web Enabled Landsat Data (WELD) time series was refined and applied to 14 states that cover 70% of harvested U.S. cropland (USDA 2012 Census). The methodology was applied to 2010 combined weekly Landsat 5 and 7 WELD data. The field extraction and quantitative validation results are presented for the following 14 states: Iowa, North Dakota, Illinois, Kansas, Minnesota, Nebraska, Texas, South Dakota, Missouri, Indiana, Ohio, Wisconsin, Oklahoma and Michigan (sorted by area of harvested cropland). These states include the top 11 U.S. states by harvested cropland area. Implications and recommendations for systematic application to global-coverage Landsat data are discussed.

  8. Automated extraction of aorta and pulmonary artery in mediastinum from 3D chest x-ray CT images without contrast medium

    NASA Astrophysics Data System (ADS)

    Kitasaka, Takayuki; Mori, Kensaku; Hasegawa, Jun-ichi; Toriwaki, Jun-ichiro; Katada, Kazuhiro

    2002-05-01

    This paper proposes a method for automated extraction of the aorta and pulmonary artery (PA) in the mediastinum of the chest from uncontrasted chest X-ray CT images. The proposed method employs a model fitting technique to use shape features of blood vessels for extraction. First, edge voxels are detected based on the standard deviation of CT values. A likelihood image, which shows the degree of likelihood that a voxel lies on the medial axis of a vessel, is calculated by applying the Euclidean distance transformation to non-edge voxels. Second, the medial axis of each vessel is obtained by fitting the model, with reference to the likelihood image. Finally, the aorta and PA areas are recovered from the medial axes by executing the reverse Euclidean distance transformation. We applied the proposed method to seven cases of uncontrasted chest X-ray CT images and evaluated the results by calculating the coincidence index between the extracted regions and the manually traced regions. Experimental results showed that the extracted aorta and PA areas coincide with the manually traced regions, with coincidence index values of 90% and 80-90%, respectively.
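
    The likelihood-image step lends itself to a short sketch: flag edge voxels from the local standard deviation of CT values, then apply a Euclidean distance transform to the non-edge voxels so that voxels far from edges (near vessel medial axes) receive high values. The standard-deviation threshold and window size below are assumptions, not the paper's values.

      import numpy as np
      from scipy import ndimage

      def medial_axis_likelihood(ct_volume, std_threshold=60.0, window=3):
          """Likelihood-style image: distance of each non-edge voxel to the nearest edge voxel."""
          vol = ct_volume.astype(float)
          local_mean = ndimage.uniform_filter(vol, size=window)
          local_sq = ndimage.uniform_filter(vol ** 2, size=window)
          local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
          edge = local_std > std_threshold              # high local variation = edge voxel
          # Euclidean distance from every non-edge voxel to the nearest edge voxel.
          return ndimage.distance_transform_edt(~edge)

      # Example on a synthetic volume of CT-number-like values.
      volume = np.random.randint(-1000, 1000, size=(32, 32, 32)).astype(float)
      likelihood = medial_axis_likelihood(volume)
      print(likelihood.shape, float(likelihood.max()))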

  9. Sieve-based relation extraction of gene regulatory networks from biological literature

    PubMed Central

    2015-01-01

    Background Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and results of related experiments. To extract them in an explicit, computer readable format, these relations were at first extracted manually from databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. Results We develop a computational approach for extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network in the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract different relationship types. Following the shared task, we conducted additional analysis using different system settings that resulted in reducing the reconstruction error of bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher accuracy of extraction. Analysis of distances between different mention types in the text shows that our choice
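
    The skip-mention transformation can be illustrated in a few lines: from an ordered list of mentions, build one sequence per skip distance so that a first-order (linear-chain) model can relate mentions that are not adjacent. This reflects a plain reading of the description above; the original implementation's details may differ, and the mention names are illustrative.

      def skip_mention_sequences(mentions, max_skip=2):
          """For each skip distance k, emit sequences containing every k-th mention."""
          sequences = []
          for k in range(1, max_skip + 1):
              for offset in range(k):
                  seq = mentions[offset::k]
                  if len(seq) > 1:
                      sequences.append((k, seq))
          return sequences

      # Example with hypothetical gene/protein mentions from one passage.
      mentions = ["sigK", "spoIIID", "GerE", "cotD"]
      for k, seq in skip_mention_sequences(mentions):
          print(k, seq)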

  10. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds

    PubMed Central

    Dorninger, Peter; Pfeifer, Norbert

    2008-01-01

    Three-dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows exist. They are based either on photogrammetry or on LiDAR, or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling generally require a high degree of human interaction, and most automated approaches described in the literature address the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects.
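
    The planar-face detection underlying such a workflow can be sketched as a single RANSAC plane fit on a point cloud, as below. The full approach segments many faces and derives building outlines from them; only the core plane estimation is shown, with an assumed inlier tolerance and synthetic data.

      import numpy as np

      def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
          """Return a boolean inlier mask for the dominant plane found by RANSAC."""
          rng = np.random.default_rng(seed)
          best_inliers = np.zeros(len(points), dtype=bool)
          for _ in range(n_iter):
              p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
              normal = np.cross(p2 - p1, p3 - p1)
              norm = np.linalg.norm(normal)
              if norm < 1e-9:                        # degenerate (collinear) sample
                  continue
              normal /= norm
              dist = np.abs((points - p1) @ normal)  # point-to-plane distances
              inliers = dist < tol
              if inliers.sum() > best_inliers.sum():
                  best_inliers = inliers
          return best_inliers

      # Example: a noisy horizontal "roof" plane plus scattered clutter points.
      rng = np.random.default_rng(1)
      roof = np.c_[rng.random((500, 2)) * 10, 5 + rng.normal(0, 0.02, 500)]
      clutter = rng.random((100, 3)) * 10
      inliers = ransac_plane(np.vstack([roof, clutter]))
      print(inliers.sum(), "points assigned to the dominant planar face")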

  11. Simultaneous analysis of organochlorinated pesticides (OCPs) and polychlorinated biphenyls (PCBs) from marine samples using automated pressurized liquid extraction (PLE) and Power Prep™ clean-up.

    PubMed

    Helaleh, Murad I H; Al-Rashdan, Amal; Ibtisam, A

    2012-05-30

    An automated pressurized liquid extraction (PLE) method followed by Power Prep™ clean-up was developed for organochlorinated pesticide (OCP) and polychlorinated biphenyl (PCB) analysis in environmental marine samples of fish, squid, bivalves, shells, octopus and shrimp. OCPs and PCBs were simultaneously determined in a single chromatographic run using gas chromatography-mass spectrometry-negative chemical ionization (GC-MS-NCI). About 5 g of each biological marine sample was mixed with anhydrous sodium sulphate and placed in the extraction cell of the PLE system. PLE is controlled by means of a PC using DMS 6000 software. Purification of the extract was accomplished using automated Power Prep™ clean-up with a pre-packed disposable silica column (6 g) supplied by Fluid Management Systems (FMS). All OCPs and PCBs were eluted from the silica column using two types of solvent: 80 mL of hexane and a 50 mL mixture of hexane and dichloromethane (1:1). A wide variety of fish and shellfish were collected from the fish market and analyzed using this method. The total PCB concentrations were 2.53, 0.25, 0.24, 0.24, 0.17 and 1.38 ng g(-1) (w/w) for fish, squid, bivalves, shells, octopus and shrimp, respectively, and the corresponding total OCP concentrations were 30.47, 2.86, 0.92, 10.72, 5.13 and 18.39 ng g(-1) (w/w). Lipids were removed using an SX-3 Bio-Beads gel permeation chromatography (GPC) column. Analytical criteria such as recovery, reproducibility and repeatability were evaluated through a range of biological matrices. PMID:22608412

  12. Quantitative radiology: automated measurement of polyp volume in computed tomography colonography using Hessian matrix-based shape extraction and volume growing

    PubMed Central

    Epstein, Mark L.; Obara, Piotr R.; Chen, Yisong; Liu, Junchi; Zarshenas, Amin; Makkinejad, Nazanin; Dachman, Abraham H.

    2015-01-01

    Background Current measurement of the single longest dimension of a polyp is subjective and varies among radiologists. Our purpose was to develop a computerized measurement of polyp volume in computed tomography colonography (CTC). Methods We developed a 3D automated scheme for measuring polyp volume at CTC. Our scheme consisted of segmentation of the colon wall to confine polyp segmentation to the colon wall, extraction of a highly polyp-like seed region based on the Hessian matrix, a 3D volume growing technique under the minimum surface expansion criterion for segmentation of polyps, and sub-voxel refinement and surface smoothing for obtaining a smooth polyp surface. Our database consisted of 30 polyp views (15 polyps) in CTC scans from 13 patients. Each patient was scanned in the supine and prone positions. Polyp sizes measured in optical colonoscopy (OC) ranged from 6-18 mm with a mean of 10 mm. A radiologist outlined polyps in each slice and calculated polyp volumes by summing the volumes over slices. The measurement study was repeated three times at least one week apart to minimize a memory-effect bias. We used the mean volume of the three studies as the “gold standard”. Results Our measurement scheme yielded a mean polyp volume of 0.38 cc (range, 0.15-1.24 cc), whereas the mean “gold standard” manual volume was 0.40 cc (range, 0.15-1.08 cc). The “gold-standard” manual and computer volumetry reached excellent agreement (intra-class correlation coefficient =0.80), with no statistically significant difference [P (F≤f) =0.42]. Conclusions We developed an automated scheme for measuring polyp volume at CTC based on Hessian matrix-based shape extraction and volume growing. Polyp volumes obtained by our automated scheme agreed excellently with “gold standard” manual volumes. Our fully automated scheme can efficiently provide accurate polyp volumes for radiologists; thus, it would help radiologists improve the accuracy and efficiency of polyp volume
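
    The "highly polyp-like seed region based on the Hessian matrix" step can be illustrated with per-voxel Hessian eigenvalue analysis. The sketch below (NumPy/SciPy assumed, synthetic data; the paper's exact shape measure may differ) flags voxels whose three eigenvalues are all negative, i.e. bright blob-like structures.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(volume, sigma=2.0):
    """Per-voxel eigenvalues of the Gaussian-smoothed Hessian of a 3D volume."""
    pairs = [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]
    d = {}
    for i, j in pairs:
        order = [0, 0, 0]
        order[i] += 1
        order[j] += 1
        d[(i, j)] = gaussian_filter(volume, sigma, order=order)
    H = np.stack([
        np.stack([d[(0, 0)], d[(0, 1)], d[(0, 2)]], axis=-1),
        np.stack([d[(0, 1)], d[(1, 1)], d[(1, 2)]], axis=-1),
        np.stack([d[(0, 2)], d[(1, 2)], d[(2, 2)]], axis=-1),
    ], axis=-2)                      # shape (z, y, x, 3, 3)
    return np.linalg.eigvalsh(H)     # ascending eigenvalues per voxel

if __name__ == "__main__":
    z, y, x = np.mgrid[-15:16, -15:16, -15:16]
    blob = np.exp(-(x**2 + y**2 + z**2) / (2 * 4.0**2))   # synthetic bright blob
    ev = hessian_eigenvalues(blob, sigma=2.0)
    seed = np.all(ev < 0, axis=-1)   # all eigenvalues negative -> blob-like voxel
    print("blob-like voxels:", seed.sum(), " centre flagged:", bool(seed[15, 15, 15]))
```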

  13. Extracting Related Words from Anchor Text Clusters by Focusing on the Page Designer's Intention

    NASA Astrophysics Data System (ADS)

    Liu, Jianquan; Chen, Hanxiong; Furuse, Kazutaka; Ohbo, Nobuo

    Approaches that extract related words (terms) by co-occurrence sometimes work poorly. Two words frequently co-occurring in the same documents are considered related, yet they may not be related at all, sharing neither common meanings nor similar semantics. We address this problem by considering the page designer's intention and propose a new model to extract related words. Our approach is based on the idea that web page designers usually place correlative hyperlinks in a close zone on the browser. We developed a browser-based crawler to collect “geographically” near hyperlinks; by clustering these hyperlinks based on their pixel coordinates, we extract related words that well reflect the designer's intention. Experimental results show that our method can represent the intention of the web page designer with extremely high precision. Moreover, the experiments indicate that our extraction method can obtain related words with high average precision.
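
    A minimal sketch of the clustering step, assuming hyperlink anchor texts and their rendered pixel coordinates have already been collected by a crawler (the tuples below are hypothetical): links within a small pixel distance are grouped, and the anchor texts within a cluster are treated as related words.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical (anchor_text, x_pixel, y_pixel) tuples; the real system records
# the rendered position of each hyperlink on the page.
anchors = [
    ("football", 40, 500), ("baseball", 40, 520), ("tennis", 40, 540),
    ("privacy policy", 900, 980), ("terms of use", 900, 1000),
]

coords = np.array([[x, y] for _, x, y in anchors], dtype=float)
# Single-linkage clustering on pixel distance; links closer than 60 px merge.
labels = fcluster(linkage(coords, method="single"), t=60, criterion="distance")

clusters = {}
for (text, _, _), label in zip(anchors, labels):
    clusters.setdefault(label, []).append(text)
for label, words in clusters.items():
    print(f"cluster {label}: related words = {words}")
```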

  14. Automated Determination of Publications Related to Adverse Drug Reactions in PubMed

    PubMed Central

    Adams, Hayden; Friedman, Carol; Finkelstein, Joseph

    2015-01-01

    Timely dissemination of up-to-date information concerning adverse drug reactions (ADRs) at the point of care can significantly improve medication safety and prevent ADRs. Automated methods for finding relevant articles in MEDLINE which discuss ADRs for specific medications can facilitate decision making at the point of care. Previous work has focused on other types of clinical queries and on retrieval for specific ADRs or drug-ADR pairs, but little work has been published on finding ADR articles for a specific medication. We have developed a method to generate a PubMed query based on MeSH terms, supplementary concepts, and textual terms for a particular medication. Evaluation was performed on a limited sample, resulting in a sensitivity of 90% and precision of 93%. Results demonstrated that this method is highly effective. Future work will integrate this method within an interface aimed at facilitating access to ADR information for specified drugs at the point of care. PMID:26306227
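
    As a rough illustration of the kind of query generation described (the term lists and combination logic below are assumptions, not the authors' template), one can compose a drug-plus-adverse-effects query and submit it through the NCBI E-utilities esearch endpoint:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def adr_query(drug):
    """Illustrative query: MeSH heading plus free-text drug name, combined with
    adverse-effect terms. The real term lists would come from MeSH and
    supplementary-concept mappings for the drug."""
    drug_part = f'("{drug}"[MeSH Terms] OR "{drug}"[Title/Abstract])'
    adr_part = ('("adverse effects"[Subheading] OR "drug-related side effects '
                'and adverse reactions"[MeSH Terms])')
    return f"{drug_part} AND {adr_part}"

def pubmed_count(query):
    params = urlencode({"db": "pubmed", "term": query, "retmode": "json"})
    with urlopen(f"{ESEARCH}?{params}") as resp:
        return json.load(resp)["esearchresult"]["count"]

if __name__ == "__main__":
    q = adr_query("simvastatin")
    print(q)
    print("PubMed hits:", pubmed_count(q))   # requires network access
```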

  15. Automation and robotics and related technology issues for Space Station customer servicing

    NASA Technical Reports Server (NTRS)

    Cline, Helmut P.

    1987-01-01

    Several flight servicing support elements are discussed within the context of the Space Station. Particular attention is given to the servicing facility, the mobile servicing center, and the flight telerobotic servicer (FTS). The role that automation and robotics can play in the design and operation of each of these elements is discussed. It is noted that the FTS, which is currently being developed by NASA, will evolve to increasing levels of autonomy to allow for the virtual elimination of routine EVA. Some of the features of the FTS will probably be: dual manipulator arms having reach and dexterity roughly equivalent to that of an EVA-suited astronaut, force reflection capability allowing efficient teleoperation, and capability of operating from a variety of support systems.

  16. A Relation Extraction Framework for Biomedical Text Using Hybrid Feature Set.

    PubMed

    Muzaffar, Abdul Wahab; Azam, Farooque; Qamar, Usman

    2015-01-01

    Information extraction from unstructured text segments is a complex task. Although manual information extraction often produces the best results, managing biomedical data extraction manually has become impractical because of the exponential increase in data size. Thus, there is a need for automatic tools and techniques for information extraction in biomedical text mining. Relation extraction is a significant area under biomedical information extraction that has gained much importance in the last two decades. A lot of work has been done on biomedical relation extraction focusing on rule-based and machine learning techniques. In the last decade, the focus has shifted to hybrid approaches, which show better results. This research presents a hybrid feature set for classification of relations between biomedical entities. The main contribution lies in the semantic feature set, where verb phrases are ranked using the Unified Medical Language System (UMLS) and a ranking algorithm. Support Vector Machine and Naïve Bayes, two effective machine learning techniques, are used to classify these relations. Our approach has been validated on the standard biomedical text corpus obtained from MEDLINE 2001. Our framework outperforms all state-of-the-art approaches used for relation extraction on the same corpus. PMID:26347797
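
    A minimal sketch of the classification step, assuming candidate entity pairs have already been converted into a small numeric feature vector that includes a UMLS-based verb-rank score (the features, values, and labels below are illustrative, not the paper's feature set):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature rows for candidate entity pairs: a few lexical/syntactic
# features plus one "semantic" score standing in for the UMLS-based verb rank.
X = np.array([
    # tokens_between, char_distance, verb_present, umls_verb_rank
    [3, 12, 1, 0.90],
    [15, 60, 0, 0.10],
    [4, 18, 1, 0.80],
    [20, 75, 0, 0.20],
    [5, 20, 1, 0.70],
    [18, 66, 0, 0.15],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = related, 0 = not related

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
nb = GaussianNB()
for name, clf in [("SVM", svm), ("Naive Bayes", nb)]:
    clf.fit(X, y)
    print(name, "prediction for a new pair:", clf.predict([[6, 22, 1, 0.85]]))
```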

  17. A Relation Extraction Framework for Biomedical Text Using Hybrid Feature Set

    PubMed Central

    Muzaffar, Abdul Wahab; Azam, Farooque; Qamar, Usman

    2015-01-01

    Information extraction from unstructured text segments is a complex task. Although manual information extraction often produces the best results, managing biomedical data extraction manually has become impractical because of the exponential increase in data size. Thus, there is a need for automatic tools and techniques for information extraction in biomedical text mining. Relation extraction is a significant area under biomedical information extraction that has gained much importance in the last two decades. A lot of work has been done on biomedical relation extraction focusing on rule-based and machine learning techniques. In the last decade, the focus has shifted to hybrid approaches, which show better results. This research presents a hybrid feature set for classification of relations between biomedical entities. The main contribution lies in the semantic feature set, where verb phrases are ranked using the Unified Medical Language System (UMLS) and a ranking algorithm. Support Vector Machine and Naïve Bayes, two effective machine learning techniques, are used to classify these relations. Our approach has been validated on the standard biomedical text corpus obtained from MEDLINE 2001. Our framework outperforms all state-of-the-art approaches used for relation extraction on the same corpus. PMID:26347797

  18. An Automated Approach to Agricultural Tile Drain Detection and Extraction Utilizing High Resolution Aerial Imagery and Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Johansen, Richard A.

    Subsurface drainage from agricultural fields in the Maumee River watershed is suspected to adversely impact water quality and contribute to the formation of harmful algal blooms (HABs) in Lake Erie. In early August of 2014, a HAB developed in the western Lake Erie Basin that left over 400,000 people unable to drink their tap water due to the presence of a toxin from the bloom. HAB development in Lake Erie is aided by excess nutrients from agricultural fields, which are transported through subsurface tile and enter the watershed. Compounding the issue, the trend within the Maumee watershed has been to increase the installation of tile drains in both total extent and density. Due to the immense area of drained fields, there is a need to establish an accurate and effective technique to monitor subsurface farmland tile installations and their associated impacts. This thesis aimed to develop an automated method to identify subsurface tile locations from high-resolution aerial imagery by applying an object-based image analysis (OBIA) approach utilizing eCognition. This was accomplished through a set of algorithms and image filters, which segment and classify image objects by their spectral and geometric characteristics. The algorithms utilized were based on the relative location of image objects and pixels, in order to maximize the robustness and transferability of the final rule-set. These algorithms were coupled with convolution and histogram image filters to generate results for a 10 km² study area located within Clay Township in Ottawa County, Ohio. The eCognition results were compared to previously collected tile locations from an associated project that applied heads-up digitizing of aerial photography to map field tile. The heads-up digitized locations were used as a baseline for the accuracy assessment. The accuracy assessment generated a range of agreement values from 67.20% - 71.20%, and an average
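
    The convolution-filter part of the workflow can be illustrated on a synthetic grayscale band (the kernels and threshold below are illustrative; the actual eCognition rule-set is not reproduced): directional kernels respond to thin linear features, and a histogram-style percentile threshold keeps the strongest responses as drain candidates.

```python
import numpy as np
from scipy.ndimage import convolve

# Synthetic "aerial" band: a uniform field with two brighter linear drain traces.
field = np.full((60, 60), 80.0)
field[:, 20] += 25.0          # vertical trace
field[30, :] += 25.0          # horizontal trace

# Simple directional kernels that respond to thin vertical/horizontal lines.
vertical = np.array([[-1, 2, -1]] * 3, dtype=float)
horizontal = vertical.T
response = np.maximum(np.abs(convolve(field, vertical)),
                      np.abs(convolve(field, horizontal)))

# Histogram-style threshold: keep the strongest ~2% of responses as candidates.
threshold = np.percentile(response, 98)
candidates = response >= threshold
print("candidate drain pixels:", int(candidates.sum()))
```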

  19. Discovery of Predicate-Oriented Relations among Named Entities Extracted from Thai Texts

    NASA Astrophysics Data System (ADS)

    Tongtep, Nattapong; Theeramunkong, Thanaruk

    Extracting named entities (NEs) and their relations is more difficult in Thai than in other languages due to several Thai-specific characteristics: no explicit boundaries for words, phrases and sentences; few case markers and modifier clues; high ambiguity in compound words and serial verbs; and flexible word order. Unlike most previous works, which focused on NE relations of specific actions such as work_for, live_in, located_in, and kill, this paper proposes a more general type of NE relation, called predicate-oriented relation (PoR), where an extracted action part (verb) is used as a core component to associate related named entities extracted from Thai texts. Lacking a practical parser for the Thai language, we present three types of surface features, i.e., punctuation marks (such as token spaces), entity types, and the number of entities, and then apply five commonly used learning schemes to investigate their performance on predicate-oriented relation extraction. The experimental results show that our approach achieves F-measures of 97.76%, 99.19%, 95.00% and 93.50% on four different types of predicate-oriented relation (action-location, location-action, action-person and person-action) in crime-related news documents using a data set of 1,736 entity pairs. The effects of NE extraction techniques, feature sets and class imbalance on the performance of relation extraction are explored.
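
    A minimal sketch of the surface features described (the tags, tokens, and feature names below are hypothetical), turning one pre-tagged segment into a feature dictionary that a standard learning scheme could consume:

```python
# Hypothetical pre-tagged segment: (token, tag) pairs, where tags mark named
# entities (PER, LOC), the predicate verb (ACT), space tokens, and other tokens (O).
segment = [
    ("police", "O"), ("arrested", "ACT"), ("Mr. A", "PER"),
    (" ", "SPACE"), ("at", "O"), ("Bangkok", "LOC"),
]

def surface_features(tokens):
    """Simple surface features for predicate-oriented relation classification."""
    tags = [tag for _, tag in tokens]
    verb_idx = tags.index("ACT")
    entities = [(i, tag) for i, (_, tag) in enumerate(tokens) if tag in ("PER", "LOC")]
    return {
        "n_entities": len(entities),
        "n_space_tokens": tags.count("SPACE"),
        "entity_types": "-".join(tag for _, tag in entities),
        # relation direction, e.g. action-person vs person-action
        "pattern": "action-first" if all(i > verb_idx for i, _ in entities)
                   else "entity-first",
    }

print(surface_features(segment))
```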

  20. Development of an Automated Column Solid-Phase Extraction Cleanup of QuEChERS Extracts, Using a Zirconia-Based Sorbent, for Pesticide Residue Analyses by LC-MS/MS.

    PubMed

    Morris, Bruce D; Schriner, Richard B

    2015-06-01

    A new, automated, high-throughput, mini-column solid-phase extraction (c-SPE) cleanup method for QuEChERS extracts was developed, using a robotic X-Y-Z instrument autosampler, for analysis of pesticide residues in fruits and vegetables by LC-MS/MS. Removal of avocado matrix and recoveries of 263 pesticides and metabolites were studied, using various stationary phase mixtures, including zirconia-based sorbents, and elution with acetonitrile. These experiments allowed selection of a sorbent mixture consisting of zirconia, C18, and carbon-coated silica, that effectively retained avocado matrix but also retained 53 pesticides with <70% recoveries. Addition of MeOH to the elution solvent improved pesticide recoveries from zirconia, as did citrate ions in CEN QuEChERS extracts. Finally, formate buffer in acetonitrile/MeOH (1:1) was required to give >70% recoveries of all 263 pesticides. Analysis of avocado extracts by LC-Q-Orbitrap-MS showed that the method developed was removing >90% of di- and triacylglycerols. The method was validated for 269 pesticides (including homologues and metabolites) in avocado and citrus. Spike recoveries were within 70-120% and 20% RSD for 243 of these analytes in avocado and 254 in citrus, when calibrated against solvent-only standards, indicating effective matrix removal and minimal electrospray ionization suppression. PMID:25702899

  1. Automated detection of feeding strikes by larval fish using continuous high-speed digital video: a novel method to extract quantitative data from fast, sparse kinematic events.

    PubMed

    Shamur, Eyal; Zilka, Miri; Hassner, Tal; China, Victor; Liberzon, Alex; Holzman, Roi

    2016-06-01

    Using videography to extract quantitative data on animal movement and kinematics constitutes a major tool in biomechanics and behavioral ecology. Advanced recording technologies now enable acquisition of long video sequences encompassing sparse and unpredictable events. Although such events may be ecologically important, analysis of sparse data can be extremely time-consuming and potentially biased; data quality is often strongly dependent on the training level of the observer and subject to contamination by observer-dependent biases. These constraints often limit our ability to study animal performance and fitness. Using long videos of foraging fish larvae, we provide a framework for the automated detection of prey acquisition strikes, a behavior that is infrequent yet critical for larval survival. We compared the performance of four video descriptors and their combinations against manually identified feeding events. For our data, the best single descriptor provided a classification accuracy of 77-95% and detection accuracy of 88-98%, depending on fish species and size. Using a combination of descriptors improved the accuracy of classification by ∼2%, but did not improve detection accuracy. Our results indicate that the effort required by an expert to manually label videos can be greatly reduced to examining only the potential feeding detections in order to filter false detections. Thus, using automated descriptors reduces the amount of manual work needed to identify events of interest from weeks to hours, enabling the assembly of an unbiased large dataset of ecologically relevant behaviors. PMID:26994179
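
    As a toy illustration of a video descriptor and detector of this kind (not one of the descriptors evaluated in the paper), the sketch below computes frame-to-frame motion energy on a synthetic clip and flags frames whose energy exceeds a multiple of the median:

```python
import numpy as np

def motion_energy(frames):
    """Mean absolute frame-to-frame difference: a crude motion descriptor."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

def detect_events(energy, factor=5.0):
    """Flag frame transitions whose motion energy exceeds factor * median energy."""
    return np.flatnonzero(energy > factor * np.median(energy))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(100, 2, size=(200, 64, 64))   # mostly still background
    frames[120:123] += 40                              # brief burst of motion
    events = detect_events(motion_energy(frames))
    print("candidate strike transitions (0-indexed diffs):", events)
```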

  2. Temporal Relation Extraction in Outcome Variances of Clinical Pathways.

    PubMed

    Yamashita, Takanori; Wakata, Yoshifumi; Hamai, Satoshi; Nakashima, Yasuharu; Iwamoto, Yukihide; Franagan, Brendan; Nakashima, Naoki; Hirokawa, Sachio

    2015-01-01

    Clinical pathways have recently progressed with digitalization and activity analysis. There are many previous studies on clinical pathways, but few feed directly into medical practice. We constructed a mind map system that applies a spanning tree. This system can visualize temporal relations in outcome variances and indicate outcomes that affect long-term hospitalization. PMID:26262376
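
    A minimal sketch of the spanning-tree backbone such a mind map could be built on, assuming outcome variances and illustrative dissimilarity weights (NetworkX assumed; the weighting used in the actual system is not specified in the record):

```python
import networkx as nx

# Hypothetical outcome variances with illustrative dissimilarity weights
# (e.g. derived from how often two variances co-occur or how far apart in
# time they typically appear along the pathway).
edges = [
    ("fever", "wound infection", 0.3),
    ("fever", "prolonged drainage", 0.5),
    ("wound infection", "prolonged drainage", 0.4),
    ("wound infection", "delayed discharge", 0.2),
    ("prolonged drainage", "delayed discharge", 0.6),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)
tree = nx.minimum_spanning_tree(G)          # backbone for the mind-map layout
for u, v, data in tree.edges(data=True):
    print(f"{u} -- {v} (weight {data['weight']})")
```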

  3. CD-REST: a system for extracting chemical-induced disease relation in literature.

    PubMed

    Xu, Jun; Wu, Yonghui; Zhang, Yaoyun; Wang, Jingqi; Lee, Hee-Jin; Xu, Hua

    2016-01-01

    Mining chemical-induced disease relations embedded in the vast biomedical literature could facilitate a wide range of computational biomedical applications, such as pharmacovigilance. The BioCreative V organized a Chemical Disease Relation (CDR) Track regarding chemical-induced disease relation extraction from biomedical literature in 2015. We participated in all subtasks of this challenge. In this article, we present our participation system Chemical Disease Relation Extraction SysTem (CD-REST), an end-to-end system for extracting chemical-induced disease relations in biomedical literature. CD-REST consists of two main components: (1) a chemical and disease named entity recognition and normalization module, which employs the Conditional Random Fields algorithm for entity recognition and a Vector Space Model-based approach for normalization; and (2) a relation extraction module that classifies both sentence-level and document-level candidate drug-disease pairs by support vector machines. Our system achieved the best performance on the chemical-induced disease relation extraction subtask in the BioCreative V CDR Track, demonstrating the effectiveness of our proposed machine learning-based approaches for automatic extraction of chemical-induced disease relations in biomedical literature. The CD-REST system provides web services using HTTP POST request. The web services can be accessed from http://clinicalnlptool.com/cdr. The online CD-REST demonstration system is available at http://clinicalnlptool.com/cdr/cdr.html. Database URL: http://clinicalnlptool.com/cdr; http://clinicalnlptool.com/cdr/cdr.html. PMID:27016700
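
    A minimal sketch of a sentence-level relation classification component of this kind, using a bag-of-words SVM over masked candidate pairs (the training sentences, masking scheme, and features below are illustrative, not the CD-REST feature set):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Illustrative sentence-level training data: the candidate chemical and disease
# mentions are masked so the classifier learns from the surrounding context.
sentences = [
    "CHEMICAL induced severe DISEASE in three patients",
    "DISEASE developed after prolonged treatment with CHEMICAL",
    "CHEMICAL was administered and DISEASE was unrelated to therapy",
    "patients with DISEASE were treated successfully with CHEMICAL",
]
labels = [1, 1, 0, 0]   # 1 = chemical-induced disease relation asserted

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)
print(clf.predict(["DISEASE was observed shortly after CHEMICAL exposure"]))
```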

  4. Extraction of Children's Friendship Relation from Activity Level

    NASA Astrophysics Data System (ADS)

    Kono, Aki; Shintani, Kimio; Katsuki, Takuya; Kihara, Shin'ya; Ueda, Mari; Kaneda, Shigeo; Haga, Hirohide

    Children learn to fit into society through living in a group, and this is greatly influenced by their friend relations. Although preschool teachers need to observe children to assist the growth of their social skills and support the development of each child's personality, only experienced teachers can watch over children while providing high-quality guidance. To address this problem, this paper proposes a mathematical, objective method that assists teachers with observation. It uses activity-level data recorded by pedometers, from which we build a tree diagram, called a dendrogram, based on hierarchical clustering of the recorded activity levels. We also calculate the ``breadth'' and ``depth'' of children's friend relations by using more than one dendrogram. We recorded children's activity levels in a kindergarten for two months and evaluated the proposed method; the results usually coincide with teachers' remarks about the children.
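
    A minimal sketch of the clustering step, assuming per-child activity-level profiles from pedometers (the names and data below are synthetic): hierarchical clustering over the profiles yields the dendrogram from which groups of children can be read off.

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage

rng = np.random.default_rng(0)
# Synthetic activity-level profiles (e.g. steps per hour) for six children;
# children who play together should show correlated profiles.
base_a = rng.uniform(50, 150, 10)
base_b = rng.uniform(50, 150, 10)
profiles = np.vstack([
    base_a + rng.normal(0, 5, 10), base_a + rng.normal(0, 5, 10),
    base_a + rng.normal(0, 5, 10), base_b + rng.normal(0, 5, 10),
    base_b + rng.normal(0, 5, 10), base_b + rng.normal(0, 5, 10),
])
names = ["Aki", "Ken", "Yui", "Rin", "Sho", "Emi"]   # hypothetical children

Z = linkage(profiles, method="average", metric="correlation")
info = dendrogram(Z, labels=names, no_plot=True)     # tree structure only
print("leaf order in dendrogram:", info["ivl"])
```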

  5. Relative contribution of restorative treatment to tooth extraction in a teaching institution.

    PubMed

    Alomari, Q D; Khalaf, M E; Al-Shawaf, N M

    2013-06-01

    Teeth can be extracted due to multiple factors. The aim of this retrospective cross-sectional study was to identify the relative contribution of restorative treatments to tooth loss. The study reviewed records of 826 patients (1102 teeth). Patients' gender, age and education were obtained. In addition to the main reason for extraction (caries, periodontal disease, pre-prosthetic extraction, restorative failure and remaining root), the following information was collected about each extracted tooth: type, the status of caries if any (primary or secondary), pulpal status (normal or reversible pulpitis, irreversible pulpitis, necrotic or root canal treated), and type and size of restoration, if present. Following data collection, descriptive analysis was performed. A log-linear model was used to examine the association between restorative treatment and tooth loss and between reasons for tooth loss and type of tooth. Lower molars followed by upper molars were the most commonly extracted teeth. Teeth with no restorations or with crowns were less likely to be extracted (P < 0.001). Lower and upper molars and lower premolars were more likely to be extracted due to restorative failure, while lower anterior teeth were more likely to be extracted due to periodontal disease (P < 0.05). Twenty-two per cent of the extractions were due to restorative failure, and at least 65.9% of these teeth had secondary caries. Gender, age and educational level were factors that affected tooth loss. In conclusion, teeth receiving multiple restorative therapies were more likely to be extracted. PMID:23600993

  6. Comparison of Boiling and Robotics Automation Method in DNA Extraction for Metagenomic Sequencing of Human Oral Microbes.

    PubMed

    Yamagishi, Junya; Sato, Yukuto; Shinozaki, Natsuko; Ye, Bin; Tsuboi, Akito; Nagasaki, Masao; Yamashita, Riu

    2016-01-01

    The rapid improvement of next-generation sequencing performance now enables us to analyze huge sample sets with more than ten thousand specimens. However, DNA extraction can still be a limiting step in such metagenomic approaches. In this study, we analyzed human oral microbes to compare the performance of three DNA extraction methods: PowerSoil (a method widely used in this field), QIAsymphony (a robotics method), and a simple boiling method. Dental plaque was initially collected from three volunteers in the pilot study and then expanded to 12 volunteers in the follow-up study. Bacterial flora was estimated by sequencing the V4 region of 16S rRNA following species-level profiling. Our results indicate that the efficiency of PowerSoil and QIAsymphony was comparable to the boiling method. Therefore, the boiling method may be a promising alternative because of its simplicity, cost effectiveness, and short handling time. Moreover, this method was reliable for estimating bacterial species and could be used in the future to examine the correlation between oral flora and health status. Despite this, differences in the efficiency of DNA extraction for various bacterial species were observed among the three methods. Based on these findings, there is no "gold standard" for DNA extraction. In future, we suggest that the DNA extraction method should be selected on a case-by-case basis considering the aims and specimens of the study. PMID:27104353

  7. Comparison of Boiling and Robotics Automation Method in DNA Extraction for Metagenomic Sequencing of Human Oral Microbes

    PubMed Central

    Shinozaki, Natsuko; Ye, Bin; Tsuboi, Akito; Nagasaki, Masao; Yamashita, Riu

    2016-01-01

    The rapid improvement of next-generation sequencing performance now enables us to analyze huge sample sets with more than ten thousand specimens. However, DNA extraction can still be a limiting step in such metagenomic approaches. In this study, we analyzed human oral microbes to compare the performance of three DNA extraction methods: PowerSoil (a method widely used in this field), QIAsymphony (a robotics method), and a simple boiling method. Dental plaque was initially collected from three volunteers in the pilot study and then expanded to 12 volunteers in the follow-up study. Bacterial flora was estimated by sequencing the V4 region of 16S rRNA following species-level profiling. Our results indicate that the efficiency of PowerSoil and QIAsymphony was comparable to the boiling method. Therefore, the boiling method may be a promising alternative because of its simplicity, cost effectiveness, and short handling time. Moreover, this method was reliable for estimating bacterial species and could be used in the future to examine the correlation between oral flora and health status. Despite this, differences in the efficiency of DNA extraction for various bacterial species were observed among the three methods. Based on these findings, there is no “gold standard” for DNA extraction. In future, we suggest that the DNA extraction method should be selected on a case-by-case basis considering the aims and specimens of the study. PMID:27104353

  8. Detection of Pharmacovigilance-Related Adverse Events Using Electronic Health Records and Automated Methods

    PubMed Central

    Haerian, K; Varn, D; Vaidya, S; Ena, L; Chase, HS; Friedman, C

    2013-01-01

    Electronic health records (EHRs) are an important source of data for detection of adverse drug reactions (ADRs). However, adverse events are frequently due not to medications but to the patients’ underlying conditions. Mining to detect ADRs from EHR data must account for confounders. We developed an automated method using natural-language processing (NLP) and a knowledge source to differentiate cases in which the patient’s disease is responsible for the event rather than a drug. Our method was applied to 199,920 hospitalization records, concentrating on two serious ADRs: rhabdomyolysis (n = 687) and agranulocytosis (n = 772). Our method automatically identified 75% of the cases, those with disease etiology. The sensitivity and specificity were 93.8% (confidence interval: 88.9-96.7%) and 91.8% (confidence interval: 84.0-96.2%), respectively. The method resulted in considerable saving of time: for every 1 h spent in development, there was a saving of at least 20 h in manual review. The review of the remaining 25% of the cases therefore became more feasible, allowing us to identify the medications that had caused the ADRs. PMID:22713699
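
    A minimal sketch of the confounder-filtering idea, assuming conditions have already been extracted from each note by an NLP engine (the condition lists below are illustrative, not the knowledge source used in the study): cases whose documented conditions can explain the event are set aside as disease-related rather than drug-related.

```python
# Illustrative knowledge source: conditions known to cause the adverse event,
# so that such cases are attributed to disease rather than to a drug.
KNOWN_DISEASE_CAUSES = {
    "rhabdomyolysis": {"crush injury", "seizure", "heat stroke"},
    "agranulocytosis": {"aplastic anemia", "acute leukemia"},
}

def likely_disease_etiology(event, extracted_conditions):
    """True if any condition extracted from the note explains the event."""
    causes = KNOWN_DISEASE_CAUSES.get(event, set())
    return bool(causes & {c.lower() for c in extracted_conditions})

# Conditions per record would come from an NLP engine run over the note.
record = {"event": "rhabdomyolysis",
          "conditions": ["Seizure", "hypertension"],
          "drugs": ["simvastatin"]}
if likely_disease_etiology(record["event"], record["conditions"]):
    print("set aside: event explained by underlying disease")
else:
    print("keep for ADR review against", record["drugs"])
```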

  9. A Framework for the Relative and Absolute Performance Evaluation of Automated Spectroscopy Systems

    NASA Astrophysics Data System (ADS)

    Portnoy, David; Heimberg, Peter; Heimberg, Jennifer; Feuerbach, Robert; McQuarrie, Allan; Noonan, William; Mattson, John

    2009-12-01

    The development of high-speed, high-performance gamma-ray spectroscopy algorithms is critical to the success of many automated threat detection systems. In response to this need a proliferation of such algorithms has taken place. With this proliferation comes the necessary and non-trivial task of validation. There is (and always will be) insufficient experimental data to determine performance of spectroscopy algorithms over the relevant factor space at any reasonable precision. In the case of gamma-ray spectroscopy, there are hundreds of radioisotopes of interest, which may come in arbitrary admixtures, there are many materials of unknown quantity, which may be found in the intervening space between the source and the detection system, and there are also irregular variations in the detector systems themselves. All of these factors and more should be explored to determine algorithm/system performance. This paper describes a statistical framework for the performance estimation and comparison of gamma-ray spectroscopy algorithms. The framework relies heavily on data of increasing levels of artificiality to sufficiently cover the factor space. At each level rigorous statistical methods are employed to validate performance estimates.

  10. A Framework for the Relative and Absolute Performance Evaluation of Automated Spectroscopy Systems

    SciTech Connect

    Portnoy, David; Heimberg, Peter; Heimberg, Jennifer; Feuerbach, Robert; McQuarrie, Allan; Noonan, William; Mattson, John

    2009-12-02

    The development of high-speed, high-performance gamma-ray spectroscopy algorithms is critical to the success of many automated threat detection systems. In response to this need a proliferation of such algorithms has taken place. With this proliferation comes the necessary and non-trivial task of validation. There is (and always will be) insufficient experimental data to determine performance of spectroscopy algorithms over the relevant factor space at any reasonable precision. In the case of gamma-ray spectroscopy, there are hundreds of radioisotopes of interest, which may come in arbitrary admixtures, there are many materials of unknown quantity, which may be found in the intervening space between the source and the detection system, and there are also irregular variations in the detector systems themselves. All of these factors and more should be explored to determine algorithm/system performance. This paper describes a statistical framework for the performance estimation and comparison of gamma-ray spectroscopy algorithms. The framework relies heavily on data of increasing levels of artificiality to sufficiently cover the factor space. At each level rigorous statistical methods are employed to validate performance estimates.